From tony at bakeyournoodle.com Wed Aug 1 00:09:28 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 1 Aug 2018 10:09:28 +1000 Subject: [openstack-dev] [all][election] PTL voting underway Message-ID: <20180801000928.GD15918@thor.bakeyournoodle.com> Hi folks, Polls for PTL elections are now open and will remain open for you to cast your vote until Aug 07, 2018 23:45 UTC. We are having elections for Senlin and Tacker. If you are a Foundation individual member and had a commit in one of the program's projects[0] over the Aug 11, 2017 00:00 UTC to Jul 24, 2018 00:00 UTC timeframe (Queens to Rocky), then you are eligible to vote. You should find an email with a link to the Condorcet page to cast your vote in the inbox of your gerrit preferred email[1]. What to do if you don't see the email and have a commit in at least one of the programs having an election: * check the trash or spam folders of your gerrit Preferred Email address, in case it went into trash or spam * wait a bit and check again, in case your email server is a bit slow * find the sha of at least one commit from the program project repos[0] and email the election officials. If we can confirm that you are entitled to vote, we will add you to the voters list for the appropriate election. Our democratic process is important to the health of OpenStack, so please exercise your right to vote! Candidate statements/platforms can be found linked to Candidate names on this page: http://governance.openstack.org/election/#stein-ptl-candidates Happy voting, [0] The list of the program projects eligible for electoral status: https://git.openstack.org/cgit/openstack/governance/plain/reference/projects.yaml?id=aug-2018-elections [1] Sign in to review.openstack.org: Go to Settings > Contact Information. Look at the email listed as your Preferred Email. That is where the ballot has been sent. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Wed Aug 1 00:32:50 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 1 Aug 2018 10:32:50 +1000 Subject: [openstack-dev] [all][election][tc] Leaderless projects. In-Reply-To: <20180731235512.GB15918@thor.bakeyournoodle.com> References: <20180731235512.GB15918@thor.bakeyournoodle.com> Message-ID: <20180801003249.GE15918@thor.bakeyournoodle.com> On Wed, Aug 01, 2018 at 09:55:13AM +1000, Tony Breeds wrote: > > Hello all, > The PTL Nomination period is now over. The official candidate list > is available on the election website[0]. > > There are 8 projects without candidates, so according to this > resolution[1], the TC will have to decide how the following > projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm, > RefStack, Searchlight, Trove and Winstackers. Hello TC, A few extra details[1]: --------------------------------------------------- Projects[1] : 65 Projects with candidates : 57 ( 87.69%) Projects with election : 2 ( 3.08%) --------------------------------------------------- Need election : 2 (Senlin Tacker) Need appointment : 8 (Dragonflow Freezer Loci Packaging_Rpm RefStack Searchlight Trove Winstackers) =================================================== Stats gathered @ 2018-08-01 00:11:59 UTC Of the 8 projects that can be considered leaderless, Trove did have a candidate[2] that doesn't meet the ATC criteria in that they do not have a merged change. I also excluded Security due to the governance review[3] to remove it as a project and the companion email discussion[4]. Yours Tony. [1] http://paste.openstack.org/show/727002 [2] https://review.openstack.org/587333 [3] https://review.openstack.org/586896 [4] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132595.html -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From forrest.zhao at intel.com Wed Aug 1 02:30:22 2018 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Wed, 1 Aug 2018 02:30:22 +0000 Subject: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring In-Reply-To: References: <6345119E91D5C843A93D64F498ACFA136999ECF2@SHSMSX101.ccr.corp.intel.com> Message-ID: <6345119E91D5C843A93D64F498ACFA136999F023@SHSMSX101.ccr.corp.intel.com> Hi Miguel, I just noticed that you’re also in the reviewer list of #57447 ☺ I look forward to having more discussion on the design details next week. We also plan to propose 5~6 new features (which originate from the StarlingX project [1]) to Stein. So far 2 specs have been uploaded for review: [2] and [3]. BTW, do you know when the PTG etherpad will be created? We’ll first propose our specs to the PTG etherpad. And hopefully we can have the opportunity to attend the PTG in Denver to have a face-to-face discussion with you and other key stakeholders in the community. Thanks, Forrest [1] https://wiki.openstack.org/wiki/StarlingX [2] https://review.openstack.org/579410/ [3] https://review.openstack.org/579411 From: Miguel Lavalle [mailto:miguel at mlavalle.com] Sent: Wednesday, August 1, 2018 1:26 AM To: Zhao, Forrest Cc: OpenStack Development Mailing List Subject: Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring Hi Forrest, Yes, in my email, I was precisely referring to the work around https://review.openstack.org/#/c/574477. Now that we are wrapping up Rocky, I wanted to raise the visibility of this spec. I am glad you noticed. This week we are going to cut our RC-1 and I don't anticipate that we will have an RC-2 for Rocky. So starting next week, let's go back to the spec and refine it, so we can start implementing in Stein as soon as possible. 
Depending on how much progress we make in the spec, we may need to schedule a discussion during the PTG in Denver, September 10 - 14, in case face-to-face time is needed to reach an agreement. I know that Manjeet is going to attend the PTG and he has already talked to me about this spec in the recent past. So maybe Manjeet could be the conduit to represent this spec in Denver, in case we need to talk about it there. Best regards Miguel On Tue, Jul 31, 2018 at 4:12 AM, Zhao, Forrest > wrote: Hi Miguel, In your mail “PTL candidacy for the Stein cycle”, you mentioned that “port mirroring for SR-IOV VF to VF mirroring” is within the Stein goals. Could you tell me where the design for this feature is being discussed? Mailing list, IRC channel, weekly meeting or somewhere else? I was involved in its spec review at https://review.openstack.org/#/c/574477/; but it has not been updated for a while. Thanks, Forrest -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Wed Aug 1 05:45:02 2018 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 1 Aug 2018 15:45:02 +1000 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels Message-ID: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> Hello, It seems freenode is currently receiving a lot of unsolicited traffic across all channels. The freenode team are aware [1] and doing their best. There are not really a lot of options. We can set "+r" on channels which means only nickserv registered users can join channels. We have traditionally avoided this, because it is yet one more barrier to communication when many are already unfamiliar with IRC access. However, having channels filled with irrelevant messages is also not very accessible. This is temporarily enabled in #openstack-infra for the time being, so we can co-ordinate without interruption. 
Thankfully AFAIK we have not needed an abuse policy on this before; but I guess we are at the point where we need some sort of coordinated response. To start, I'd suggest that people with an interest in a channel request +r from an IRC admin in #openstack-infra, and we track it at [2] Longer term ... suggestions welcome? :) -i [1] https://freenode.net/news/spambot-attack [2] https://etherpad.openstack.org/p/freenode-plus-r-08-2018 From yamamoto at midokura.com Wed Aug 1 06:25:20 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Wed, 1 Aug 2018 15:25:20 +0900 Subject: [openstack-dev] [taas] cancel the weekly meeting officially Message-ID: hi, please see https://review.openstack.org/#/c/578328/ and comment on it if you have any opinions. thank you. From aschadin at sbcloud.ru Wed Aug 1 07:03:50 2018 From: aschadin at sbcloud.ru (Alexander Chadin) Date: Wed, 1 Aug 2018 07:03:50 +0000 Subject: [openstack-dev] [watcher] no weekly meeting today Message-ID: Hi folks, I’m not able to run the weekly meeting today because of vacation. Let’s meet next week on Wednesday. Best regards, Alex From jean-philippe at evrard.me Wed Aug 1 08:13:51 2018 From: jean-philippe at evrard.me (jean-philippe@evrard.me) Date: Wed, 01 Aug 2018 10:13:51 +0200 Subject: [openstack-dev] [openstack-ansible] Change in our IRC channel Message-ID: <725f-5b616b80-11-7f7c7a00@249916277> Hello everyone, Due to continuously increasing spam [0] on our IRC channels, I have decided to make our channel (#openstack-ansible on freenode) only joinable by Freenode's nickserv registered users. I am sorry for the inconvenience, as it will now be harder to reach us (but it's not that hard to register! [1]). The conversations will be easier to follow though. You can still contact us on the mailing lists too. 
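For anyone who has not registered a nick before, the flow referenced in [1] boils down to a few NickServ commands. A rough sketch follows; the nick, password, email address, and verification code are placeholders, and [1] remains the authoritative set of steps:

```
/msg NickServ REGISTER YourPassword your@email.example
(freenode then mails a verification code to that address)
/msg NickServ VERIFY REGISTER YourNick TheCodeFromTheEmail
(on later connections, identify before joining +r channels)
/msg NickServ IDENTIFY YourPassword
```

Once identified, joining a +r channel such as #openstack-ansible works as usual.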
Regards, Jean-Philippe Evrard (evrardjp) [0]: https://freenode.net/news/spambot-attack [1]: https://freenode.net/kb/answer/registration From ltoscano at redhat.com Wed Aug 1 08:17:14 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 01 Aug 2018 10:17:14 +0200 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> Message-ID: <7688214.nxzmNDHn8i@whitebase.usersys.redhat.com> On Wednesday, 1 August 2018 07:45:02 CEST Ian Wienand wrote: > Hello, > > It seems freenode is currently receiving a lot of unsolicited traffic > across all channels. The freenode team are aware [1] and doing their > best. > > There are not really a lot of options. We can set "+r" on channels > which means only nickserv registered users can join channels. We have > traditionally avoided this, because it is yet one more barrier to > communication when many are already unfamiliar with IRC access. > However, having channels filled with irrelevant messages is also not > very accessible. What about inviting Sigyn, the freenode admins' anti-spam bot, to all channels? That is, unless it triggers some global limit in Sigyn; people on #freenode should be able to help with that. Ciao -- Luigi From skaplons at redhat.com Wed Aug 1 09:15:39 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 1 Aug 2018 11:15:39 +0200 Subject: [openstack-dev] [openstack-ansible] Change in our IRC channel In-Reply-To: <725f-5b616b80-11-7f7c7a00@249916277> References: <725f-5b616b80-11-7f7c7a00@249916277> Message-ID: Maybe such a change should be considered globally, on all OpenStack channels? > Message written by jean-philippe at evrard.me on 01.08.2018, at 
10:13: > > Hello everyone, > > Due to continuously increasing spam [0] on our IRC channels, I have decided to make our channel (#openstack-ansible on freenode) only joinable by Freenode's nickserv registered users. > > I am sorry for the inconvenience, as it will now be harder to reach us (but it's not that hard to register! [1]). The conversations will be easier to follow though. > > You can still contact us on the mailing lists too. > > Regards, > Jean-Philippe Evrard (evrardjp) > > [0]: https://freenode.net/news/spambot-attack > [1]: https://freenode.net/kb/answer/registration > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From dougal at redhat.com Wed Aug 1 10:30:04 2018 From: dougal at redhat.com (Dougal Matthews) Date: Wed, 1 Aug 2018 11:30:04 +0100 Subject: [openstack-dev] [mistral] Mistral Monthly August 2018 Message-ID: Hey everyone! It is time to wrap up the last month and see what has happened. # General News We are now close to the end of the Rocky release cycle! It has been a pleasure serving as the PTL and it looks like you are stuck with me for Stein :) - https://governance.openstack.org/election/ I would be interested to hear about high-level goals and themes for Stein. What do you think the project should do and focus on? Let me know! # Releases - The final Rocky releases were made for mistral-lib and python-mistralclient - https://docs.openstack.org/releasenotes/python-mistralclient/rocky.html - Next week we will release the RC1 for mistral, mistral-dashboard and mistral-extra - We skipped RC3 for the above repos, due to CI issues and other delays. 
The changes can wait for the RC1 release. # Notable Changes and Additions - 17 new actions for OpenStack Tacker were added - A new policy was added to control which users can create and update actions and workflows - A new configuration option (oslo_rpc_executor) was added to change the oslo RPC executor. It can now be eventlet, blocking or threading - A new keystone configuration group was added, replacing the keystone_authtoken group (which is now deprecated, but not removed) - 166 new actions were added for OpenStack Manila - Workbooks now support namespaces; previously only Workflows had support - The Mistral development container now supports Keycloak Lots of other small changes and bug fixes! It has been a busy month :) # Milestones, Reviews, Bugs and Blueprints - 55 commits and 251 reviews - 100 Open bugs (Down from 105, yay!) - Rocky-3 numbers: Blueprints: 1 Unknown, 4 Not started, 3 Started, 1 Slow progress, 2 Implemented Bugs: 1 Incomplete, 3 Invalid, 20 Confirmed, 7 Triaged, 8 In Progress, 21 Fix Released Lots of bug fixes landed, good work! Thanks all, Dougal -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Aug 1 10:45:18 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 1 Aug 2018 12:45:18 +0200 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> Message-ID: <941199b2-4deb-b9cd-20aa-88fe04fdb6ed@redhat.com> Is it possible to ignore messages or kick users by keywords? It seems that most messages are more or less the same and include a few URLs that are unlikely to appear in a normal conversation. On 08/01/2018 07:45 AM, Ian Wienand wrote: > Hello, > > It seems freenode is currently receiving a lot of unsolicited traffic > across all channels.  The freenode team are aware [1] and doing their > best. > > There are not really a lot of options.  
We can set "+r" on channels > which means only nickserv registered users can join channels.  We have > traditionally avoided this, because it is yet one more barrier to > communication when many are already unfamiliar with IRC access. > However, having channels filled with irrelevant messages is also not > very accessible. > > This is temporarily enabled in #openstack-infra for the time being, so > we can co-ordinate without interruption. > > Thankfully AFAIK we have not needed an abuse policy on this before; > but I guess we are at the point where we need some sort of coordinated > response. > > To start, I'd suggest that people with an interest in a channel request > +r from an IRC admin in #openstack-infra, and we track it at [2] > > Longer term ... suggestions welcome? :) > > -i > > [1] https://freenode.net/news/spambot-attack > [2] https://etherpad.openstack.org/p/freenode-plus-r-08-2018 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From andr.kurilin at gmail.com Wed Aug 1 10:49:13 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Wed, 1 Aug 2018 13:49:13 +0300 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> Message-ID: Hey Ian and stackers! On Wed, Aug 1, 2018 at 8:45, Ian Wienand : > Hello, > > It seems freenode is currently receiving a lot of unsolicited traffic > across all channels. The freenode team are aware [1] and doing their > best. > > There are not really a lot of options. We can set "+r" on channels > which means only nickserv registered users can join channels. 
We have > traditionally avoided this, because it is yet one more barrier to > communication when many are already unfamiliar with IRC access. > However, having channels filled with irrelevant messages is also not > very accessible. > > This is temporarily enabled in #openstack-infra for the time being, so > we can co-ordinate without interruption. > > Thankfully AFAIK we have not needed an abuse policy on this before; > but I guess we are at the point where we need some sort of coordinated > response. > > To start, I'd suggest that people with an interest in a channel request > +r from an IRC admin in #openstack-infra, and we track it at [2] > > Longer term ... suggestions welcome? :) > > Move to Slack? We can provide auto-sent email invitations for joining, triggered by clicking a button on some page at openstack.org. It will not add more barriers for new contributors and, at the same time, this way will give some basic filtering by email at least. > -i > > [1] https://freenode.net/news/spambot-attack > [2] https://etherpad.openstack.org/p/freenode-plus-r-08-2018 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Aug 1 11:10:44 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 1 Aug 2018 13:10:44 +0200 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> Message-ID: <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> On 08/01/2018 12:49 PM, Andrey Kurilin wrote: > Hey Ian and stackers! > > On Wed, Aug 1, 2018 at 8:45, Ian Wienand >: > > Hello, > > It seems freenode is currently receiving a lot of unsolicited traffic > across all channels.  The freenode team are aware [1] and doing their > best. > > There are not really a lot of options.  We can set "+r" on channels > which means only nickserv registered users can join channels.  We have > traditionally avoided this, because it is yet one more barrier to > communication when many are already unfamiliar with IRC access. > However, having channels filled with irrelevant messages is also not > very accessible. > > This is temporarily enabled in #openstack-infra for the time being, so > we can co-ordinate without interruption. > > Thankfully AFAIK we have not needed an abuse policy on this before; > but I guess we are at the point where we need some sort of coordinated > response. > > To start, I'd suggest that people with an interest in a channel request > +r from an IRC admin in #openstack-infra, and we track it at [2] > > Longer term ... suggestions welcome? :) > > > Move to Slack? We can provide auto-sent email invitations for joining, > triggered by clicking a button on some page at openstack.org . It > will not add more barriers for new contributors and, at the same time, this way > will give some basic filtering by email at least. A few potential barriers with Slack or similar solutions: lack of FOSS desktop clients (correct me if I'm wrong), complete lack of any console clients (ditto), serious limits on free (as in beer) tariff plans. > > -i > > [1] https://freenode.net/news/spambot-attack > [2] https://etherpad.openstack.org/p/freenode-plus-r-08-2018 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Best regards, > Andrey Kurilin. 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From allprog at gmail.com Wed Aug 1 11:12:49 2018 From: allprog at gmail.com (András Kövi) Date: Wed, 1 Aug 2018 13:12:49 +0200 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> Message-ID: These are not just spam messages. At least the ones hitting Mistral are pedophile content. This must be reported. Can someone tell me where? THX, A Andrey Kurilin wrote (on Wed, Aug 1, 2018 at 12:49): > Hey Ian and stackers! > > On Wed, Aug 1, 2018 at 8:45, Ian Wienand : > >> Hello, >> >> It seems freenode is currently receiving a lot of unsolicited traffic >> across all channels. The freenode team are aware [1] and doing their >> best. >> >> There are not really a lot of options. We can set "+r" on channels >> which means only nickserv registered users can join channels. We have >> traditionally avoided this, because it is yet one more barrier to >> communication when many are already unfamiliar with IRC access. >> However, having channels filled with irrelevant messages is also not >> very accessible. >> >> This is temporarily enabled in #openstack-infra for the time being, so >> we can co-ordinate without interruption. >> >> Thankfully AFAIK we have not needed an abuse policy on this before; >> but I guess we are at the point where we need some sort of coordinated >> response. >> >> To start, I'd suggest that people with an interest in a channel request >> +r from an IRC admin in #openstack-infra, and we track it at [2] >> >> Longer term ... suggestions welcome? :) >> >> > Move to Slack? 
We can provide auto-sent email invitations for > joining, triggered by clicking a button on some page at openstack.org. It will not > add more barriers for new contributors and, at the same time, this way will > give some basic filtering by email at least. > > >> -i >> >> [1] https://freenode.net/news/spambot-attack >> [2] https://etherpad.openstack.org/p/freenode-plus-r-08-2018 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Best regards, > Andrey Kurilin. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Wed Aug 1 11:22:00 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 01 Aug 2018 13:22:00 +0200 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> Message-ID: <2805093.59XxL7INj6@whitebase.usersys.redhat.com> On Wednesday, 1 August 2018 12:49:13 CEST Andrey Kurilin wrote: > Hey Ian and stackers! > > On Wed, Aug 1, 2018 at 8:45, Ian Wienand : > > Hello, > > > > It seems freenode is currently receiving a lot of unsolicited traffic > > across all channels. The freenode team are aware [1] and doing their > > best. > > > > There are not really a lot of options. We can set "+r" on channels > > which means only nickserv registered users can join channels. 
We have > > traditionally avoided this, because it is yet one more barrier to > > communication when many are already unfamiliar with IRC access. > > However, having channels filled with irrelevant messages is also not > > very accessible. > > > > This is temporarily enabled in #openstack-infra for the time being, so > > we can co-ordinate without interruption. > > > > Thankfully AFAIK we have not needed an abuse policy on this before; > > but I guess we are at the point where we need some sort of coordinated > > response. > > > > To start, I'd suggest that people with an interest in a channel request > > +r from an IRC admin in #openstack-infra, and we track it at [2] > > > > Longer term ... suggestions welcome? :) > > Move to Slack? We can provide auto-sent email invitations for > joining, triggered by clicking a button on some page at openstack.org. It will not > add more barriers for new contributors and, at the same time, this way will > give some basic filtering by email at least. No, please no. If we need to move to another service, better to go to a FLOSS one, like Matrix.org, or others. Ciao -- Luigi From gfidente at redhat.com Wed Aug 1 11:31:40 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Wed, 1 Aug 2018 13:31:40 +0200 Subject: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO Message-ID: Hi, I would like to propose Lukas Bezdicka core on TripleO. Lukas did a lot of work in our tripleoclient, tripleo-common and tripleo-heat-templates repos to make FFU possible. FFU, which is meant to permit upgrades from Newton to Queens, requires in-depth understanding of many TripleO components (for example Heat, Mistral and the TripleO client) but also of specific TripleO features which were added during the course of the three releases (for example config-download and upgrade tasks). I believe his FFU work to have been very challenging. Given his broad understanding, Lukas has more recently started helping with reviews in other areas. 
I am so sure he'll be a great addition to our group that I am not even looking for comments, just votes :D -- Giulio Fidente GPG KEY: 08D733BA From superuser151093 at gmail.com Wed Aug 1 11:37:49 2018 From: superuser151093 at gmail.com (super user) Date: Wed, 1 Aug 2018 20:37:49 +0900 Subject: [openstack-dev] [Tacker] - TACKER + NETWORKING_SFC + NSH In-Reply-To: References: Message-ID: Currently, Tacker does not support NSH. This feature will be implemented in the future. On Wed, Aug 1, 2018 at 3:17 AM william sales wrote: > Hello guys, > > Is there any version of Tacker that allows the use of networking_sfc with > NSH? > > Thankful. > > William Sales > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Wed Aug 1 11:35:29 2018 From: johfulto at redhat.com (John Fulton) Date: Wed, 1 Aug 2018 07:35:29 -0400 Subject: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO In-Reply-To: References: Message-ID: +1 I thought he was already core. On Wed, Aug 1, 2018 at 7:33 AM Giulio Fidente wrote: > > Hi, > > I would like to propose Lukas Bezdicka core on TripleO. > > Lukas did a lot of work in our tripleoclient, tripleo-common and > tripleo-heat-templates repos to make FFU possible. > > FFU, which is meant to permit upgrades from Newton to Queens, requires > in-depth understanding of many TripleO components (for example Heat, > Mistral and the TripleO client) but also of specific TripleO features > which were added during the course of the three releases (for example > config-download and upgrade tasks). I believe his FFU work to have been > very challenging. 
> > Given his broad understanding, Lukas has more recently started helping > with reviews in other areas. > > I am so sure he'll be a great addition to our group that I am not even > looking for comments, just votes :D > -- > Giulio Fidente > GPG KEY: 08D733BA > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ltoscano at redhat.com Wed Aug 1 11:58:17 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 01 Aug 2018 13:58:17 +0200 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> Message-ID: <1931903.vLDXZrFW0L@whitebase.usersys.redhat.com> On Wednesday, 1 August 2018 13:12:49 CEST András Kövi wrote: > These are not just spam messages. At least the ones hitting Mistral are > pedophile content. This must be reported. Can someone tell me where? > THX, > A They are already known: https://freenode.net/news/spambot-attack -- Luigi From jistr at redhat.com Wed Aug 1 11:59:02 2018 From: jistr at redhat.com (Jiří Stránský) Date: Wed, 1 Aug 2018 13:59:02 +0200 Subject: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO In-Reply-To: References: Message-ID: <9b3b24e2-fcf8-d03a-d251-9ec3ac789951@redhat.com> +1! On 1.8.2018 13:31, Giulio Fidente wrote: > Hi, > > I would like to propose Lukas Bezdicka core on TripleO. > > Lukas did a lot of work in our tripleoclient, tripleo-common and > tripleo-heat-templates repos to make FFU possible. 
> > FFU, which is meant to permit upgrades from Newton to Queens, requires > in-depth understanding of many TripleO components (for example Heat, > Mistral and the TripleO client) but also of specific TripleO features > which were added during the course of the three releases (for example > config-download and upgrade tasks). I believe his FFU work to have been > very challenging. > > Given his broad understanding, Lukas has more recently started helping > with reviews in other areas. > > I am so sure he'll be a great addition to our group that I am not even > looking for comments, just votes :D > From andr.kurilin at gmail.com Wed Aug 1 12:21:37 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Wed, 1 Aug 2018 15:21:37 +0300 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> Message-ID: On Wed, Aug 1, 2018 at 14:11, Dmitry Tantsur : > On 08/01/2018 12:49 PM, Andrey Kurilin wrote: > > Hey Ian and stackers! > > > > On Wed, Aug 1, 2018 at 8:45, Ian Wienand > >: > > > > Hello, > > > > It seems freenode is currently receiving a lot of unsolicited traffic > > across all channels. The freenode team are aware [1] and doing their > > best. > > > > There are not really a lot of options. We can set "+r" on channels > > which means only nickserv registered users can join channels. We > have > > traditionally avoided this, because it is yet one more barrier to > > communication when many are already unfamiliar with IRC access. > > However, having channels filled with irrelevant messages is also not > > very accessible. > > > > This is temporarily enabled in #openstack-infra for the time being, > so > > we can co-ordinate without interruption. 
> > > > Thankfully AFAIK we have not needed an abuse policy on this before; > > but I guess we are at the point where we need some sort of coordinated > > response. > > > > I'd suggest to start, people with an interest in a channel can > request > > +r from an IRC admin in #openstack-infra and we track it at [2] > > > > Longer term ... suggestions welcome? :) > > > > > > Move to Slack? We can provide auto-sending to emails invitations for > joining by > > clicking the button on some page at openstack.org . > It > > will not add more barriers for new contributors and, at the same time, > this way > > will give some base filtering by emails at least. > > A few potential barriers with slack or similar solutions: lack of FOSS > desktop > clients (correct me if I'm wrong), The second link from a Google search gives an open source client written in Python: https://github.com/raelgc/scudcloud . Also, there is something which is written in golang. > complete lack of any console clients (ditto), > Again, Google gives several as its first results - https://github.com/evanyeung/terminal-slack https://github.com/erroneousboat/slack-term serious limits on free (as in beer) tariff plans. > > I can make an assumption that, for marketing reasons, Slack Inc could propose an extended Free plan. But anyway, even with the default one, the only thing which can limit us is `10,000 searchable messages`, which is bigger than 0 (freenode doesn't store messages). Why do I like Slack? Because a lot of people are familiar with it (a lot of companies use it, as do some open source communities, like k8s). PS: I realize that the OpenStack Community will never go away from Freenode and IRC, but I do not want to stay silent.
> > > -i > > > > [1] https://freenode.net/news/spambot-attack > > [2] https://etherpad.openstack.org/p/freenode-plus-r-08-2018 > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > -- > > Best regards, > > Andrey Kurilin. > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Wed Aug 1 12:26:07 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 1 Aug 2018 08:26:07 -0400 Subject: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO In-Reply-To: References: Message-ID: On Wed, Aug 1, 2018 at 7:33 AM Giulio Fidente wrote: > I would like to propose Lukas Bezdicka core on TripleO. > Thanks Giulio for proposing him. I agree Lukas's technical level has been quite impactful in the Fast-Forward-Upgrades effort, and upgrades in general. Also his strong experience with TripleO testing over the last years will make him a great core reviewer, careful to not break upgrades and maintain code consistency across the project. 
Thanks Lukas for your efforts, keep going! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Wed Aug 1 12:37:25 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 1 Aug 2018 07:37:25 -0500 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <2805093.59XxL7INj6@whitebase.usersys.redhat.com> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <2805093.59XxL7INj6@whitebase.usersys.redhat.com> Message-ID: <39149cb0-09f2-f081-fcb1-4044d251fc7a@inaugust.com> On 08/01/2018 06:22 AM, Luigi Toscano wrote: > On Wednesday, 1 August 2018 12:49:13 CEST Andrey Kurilin wrote: >> Hey Ian and stackers! >> >> ср, 1 авг. 2018 г. в 8:45, Ian Wienand : >>> Hello, >>> >>> It seems freenode is currently receiving a lot of unsolicited traffic >>> across all channels. The freenode team are aware [1] and doing their >>> best. >>> >>> There are not really a lot of options. We can set "+r" on channels >>> which means only nickserv registered users can join channels. We have >>> traditionally avoided this, because it is yet one more barrier to >>> communication when many are already unfamiliar with IRC access. >>> However, having channels filled with irrelevant messages is also not >>> very accessible. >>> >>> This is temporarily enabled in #openstack-infra for the time being, so >>> we can co-ordinate without interruption. >>> >>> Thankfully AFAIK we have not needed an abuse policy on this before; >>> but I guess we are the point we need some sort of coordinated >>> response. >>> >>> I'd suggest to start, people with an interest in a channel can request >>> +r from an IRC admin in #openstack-infra and we track it at [2] >>> >>> Longer term ... suggestions welcome? :) >> >> Move to Slack? We can provide auto-sending to emails invitations for >> joining by clicking the button on some page at openstack.org. 
It will not >> add more berrier for new contributors and, at the same time, this way will >> give some base filtering by emails at least. slack is pretty unworkable for many reasons. The biggest of them is that it is not Open Source and we don't require OpenStack developers to use proprietary software to work on OpenStack. The quality of slack that makes it effective at fighting spam is also the quality that makes it toxic as a community platform - the need for an invitation and being structured as silos. Even if we were to decide to abandon our Open Source principles and leave behind those in our contributor base who believe that Free Software Needs Free Tools [1] - moving to slack would be a GIANT undertaking. As such, it would not be a very effective way to deal with this current spam storm. > No, please no. If we need to move to another service, better go to a FLOSS > one, like Matrix.org, or others. We had some discussion in Vancouver about investigating the use of Matrix. We are a VERY large community, so we need to do scale and viability testing before it's even a worthy topic to raise with the TC and the community for consideration. If we did, we'd aim to run our own home server. However, it's worth noting that matrix is not immune to spam. As an open federated protocol, it's a target as well. Running our own home server might give us some additional tools - but it might not, and we might be in the same scenario except now we're running another service and we had the pain of moving. All that to say though, matrix seems like the best potential option available that meets the largest number of desires from our user base. Once we've checked it out for viability it might be worth discussing. As above, any effort there is a pretty giant one that will require a large amount of planning, a pretty sizeable amount of technical preparation and would be disruptive at the least, I don't think that'll help us with the current spam storm though. 
Monty [1] https://mako.cc/writing/hill-free_tools.html From doug at doughellmann.com Wed Aug 1 12:44:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 01 Aug 2018 08:44:29 -0400 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> Message-ID: <1533127260-sup-4269@lrrr.local> Excerpts from Andrey Kurilin's message of 2018-08-01 15:21:37 +0300: > ср, 1 авг. 2018 г. в 14:11, Dmitry Tantsur : > > > On 08/01/2018 12:49 PM, Andrey Kurilin wrote: > > > Hey Ian and stackers! > > > > > > ср, 1 авг. 2018 г. в 8:45, Ian Wienand > > >: > > > > > > Hello, > > > > > > It seems freenode is currently receiving a lot of unsolicited traffic > > > across all channels. The freenode team are aware [1] and doing their > > > best. > > > > > > There are not really a lot of options. We can set "+r" on channels > > > which means only nickserv registered users can join channels. We > > have > > > traditionally avoided this, because it is yet one more barrier to > > > communication when many are already unfamiliar with IRC access. > > > However, having channels filled with irrelevant messages is also not > > > very accessible. > > > > > > This is temporarily enabled in #openstack-infra for the time being, > > so > > > we can co-ordinate without interruption. > > > > > > Thankfully AFAIK we have not needed an abuse policy on this before; > > > but I guess we are the point we need some sort of coordinated > > > response. > > > > > > I'd suggest to start, people with an interest in a channel can > > request > > > +r from an IRC admin in #openstack-infra and we track it at [2] > > > > > > Longer term ... suggestions welcome? :) > > > > > > > > > Move to Slack? We can provide auto-sending to emails invitations for > > joining by > > > clicking the button on some page at openstack.org . 
> > It > > > will not add more berrier for new contributors and, at the same time, > > this way > > > will give some base filtering by emails at least. > > > > A few potential barriers with slack or similar solutions: lack of FOSS > > desktop > > clients (correct me if I'm wrong), > > > The second link from google search gives an opensource client written in > python https://github.com/raelgc/scudcloud . Also, there is something which > is written in golang. > > > complete lack of any console clients (ditto), > > > > Again, google gives several ones as first results - > https://github.com/evanyeung/terminal-slack > https://github.com/erroneousboat/slack-term > > serious limits on free (as in beer) tariff plans. > > > > > I can make an assumption that for marketing reasons, Slack Inc can propose > extended Free plan. > But anyway, even with default one the only thing which can limit us is > `10,000 searchable messages` which is bigger than 0 (freenode doesn't store > messages). > > > Why I like slack? because a lot of people are familar with it (a lot of > companies use it as like some opensource communities, like k8s ) > > PS: I realize that OpenStack Community will never go away from Freenode and > IRC, but I do not want to stay silent. We are unlikely to select slack because the platform itself is proprietary, even if there are OSS clients. That said, there have been some discussions about platforms such as Matrix, which is similar to slack and also OSS. I think the main thing that is blocking any such move right now is the fact that we're lacking someone with time to evaluate the tool to see what it would take for us to run it. If you're interested in this, maybe you can work with the infrastructure team to plan and implement that evaluation? 
Doug From michele at acksyn.org Wed Aug 1 12:47:51 2018 From: michele at acksyn.org (Michele Baldessari) Date: Wed, 1 Aug 2018 14:47:51 +0200 Subject: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO In-Reply-To: References: Message-ID: <20180801124751.GC18494@palahniuk.int.rhx> +1 On Wed, Aug 01, 2018 at 01:31:40PM +0200, Giulio Fidente wrote: > Hi, > > I would like to propose Lukas Bezdicka core on TripleO. > > Lukas did a lot of work in our tripleoclient, tripleo-common and > tripleo-heat-templates repos to make FFU possible. > > FFU, which is meant to permit upgrades from Newton to Queens, requires > in-depth understanding of many TripleO components (for example Heat, > Mistral and the TripleO client) but also of specific TripleO features > which were added during the course of the three releases (for example > config-download and upgrade tasks). I believe his FFU work to have been > very challenging. > > Given his broad understanding, more recently Lukas started helping with > reviews in other areas.
> > I am so sure he'll be a great addition to our group that I am not even > looking for comments, just votes :D > -- > Giulio Fidente > GPG KEY: 08D733BA > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D From dtantsur at redhat.com Wed Aug 1 12:50:43 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 1 Aug 2018 14:50:43 +0200 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> Message-ID: <2d6694ea-f4b1-81b9-f7da-14b277ff4ac8@redhat.com> On 08/01/2018 02:21 PM, Andrey Kurilin wrote: > > > ср, 1 авг. 2018 г. в 14:11, Dmitry Tantsur >: > > On 08/01/2018 12:49 PM, Andrey Kurilin wrote: > > Hey Ian and stackers! > > > > ср, 1 авг. 2018 г. в 8:45, Ian Wienand > > >>: > > > >     Hello, > > > >     It seems freenode is currently receiving a lot of unsolicited traffic > >     across all channels.  The freenode team are aware [1] and doing their > >     best. > > > >     There are not really a lot of options.  We can set "+r" on channels > >     which means only nickserv registered users can join channels.  We have > >     traditionally avoided this, because it is yet one more barrier to > >     communication when many are already unfamiliar with IRC access. > >     However, having channels filled with irrelevant messages is also not > >     very accessible. > > > >     This is temporarily enabled in #openstack-infra for the time being, so > >     we can co-ordinate without interruption. 
> > > > Thankfully AFAIK we have not needed an abuse policy on this before; > > but I guess we are at the point where we need some sort of coordinated > > response. > > > > I'd suggest to start, people with an interest in a channel can request > > +r from an IRC admin in #openstack-infra and we track it at [2] > > > > Longer term ... suggestions welcome? :) > > > > > > Move to Slack? We can provide auto-sending to emails invitations for > joining by > > clicking the button on some page at openstack.org > . It > > will not add more barriers for new contributors and, at the same time, > this way > > will give some base filtering by emails at least. > > A few potential barriers with slack or similar solutions: lack of FOSS desktop > clients (correct me if I'm wrong), > > > The second link from a Google search gives an open source client written in Python: > https://github.com/raelgc/scudcloud . Also, there is something which is written > in golang. The bad thing about non-official clients is that they come and go. An even worse thing is that Slack can (in theory) prevent them from operating or make them illegal (remember ICQ's attempts to ban unofficial clients?). And I agree with Doug that the non-free server side can be an issue as well. At the very least, we end up locked into their service. > > complete lack of any console clients (ditto), > > > Again, Google gives several as its first results - > https://github.com/evanyeung/terminal-slack > https://github.com/erroneousboat/slack-term Okay, I stand corrected here. > > serious limits on free (as in beer) tariff plans. > > > I can make an assumption that, for marketing reasons, Slack Inc could propose > an extended Free plan. Are there precedents of them doing such a thing? Otherwise I would not count on it. Especially if they don't commit to providing it for free forever.
> But anyway, even with default one the only thing which can limit us is `10,000 > searchable messages` which is bigger than 0 (freenode doesn't store messages). Well, my IRC bouncer has messages for years :) I understand it's not a comparable solution, but I do have a way to find a message that happened a year ago. Not with slack. > > Why I like slack? because a lot of people are familar with it (a lot of > companies use it as like some opensource communities, like k8s ) > > PS: I realize that OpenStack Community will never go away from Freenode and IRC, > but I do not want to stay silent. I'd not mind at all to move to a more modern *FOSS* system. If we consider paying for Slack, we can consider hosting Matrix/Rocket/whatever as well. Dmitry > > > > >     -i > > > >     [1] https://freenode.net/news/spambot-attack > >     [2] https://etherpad.openstack.org/p/freenode-plus-r-08-2018 > > > > >  __________________________________________________________________________ > >     OpenStack Development Mailing List (not for usage questions) > >     Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >      > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > -- > > Best regards, > > Andrey Kurilin. > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Best regards, > Andrey Kurilin. 
> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gr at ham.ie Wed Aug 1 12:51:47 2018 From: gr at ham.ie (Graham Hayes) Date: Wed, 1 Aug 2018 13:51:47 +0100 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> Message-ID: On 01/08/2018 13:21, Andrey Kurilin wrote: > > > The second link from google search gives an opensource client written in > python https://github.com/raelgc/scudcloud . Also, there is something > which is written in golang. >   > > complete lack of any console clients (ditto), > > > Again, google gives several ones as first results - > https://github.com/evanyeung/terminal-slack > https://github.com/erroneousboat/slack-term > Any unoffical slack client needs to use "Legacy Tokens"[1] > You're reading this because you're looking for info on legacy > custom integrations - an outdated way for teams to integrate with > Slack. These integrations lack newer features and they will be > deprecated and possibly removed in the future. *We do not recommend > their use.* Legacy tokens can go away (just like the XMPP and IRC gateway did) at any point, and we will be back in the same situation. 1 - https://api.slack.com/custom-integrations/legacy-tokens -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From mordred at inaugust.com Wed Aug 1 12:54:10 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 1 Aug 2018 07:54:10 -0500 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <1533127260-sup-4269@lrrr.local> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> <1533127260-sup-4269@lrrr.local> Message-ID: On 08/01/2018 07:44 AM, Doug Hellmann wrote: > Excerpts from Andrey Kurilin's message of 2018-08-01 15:21:37 +0300: >> ср, 1 авг. 2018 г. в 14:11, Dmitry Tantsur : >> >>> On 08/01/2018 12:49 PM, Andrey Kurilin wrote: >>>> Hey Ian and stackers! >>>> >>>> ср, 1 авг. 2018 г. в 8:45, Ian Wienand >>> >: >>>> >>>> Hello, >>>> >>>> It seems freenode is currently receiving a lot of unsolicited traffic >>>> across all channels. The freenode team are aware [1] and doing their >>>> best. >>>> >>>> There are not really a lot of options. We can set "+r" on channels >>>> which means only nickserv registered users can join channels. We >>> have >>>> traditionally avoided this, because it is yet one more barrier to >>>> communication when many are already unfamiliar with IRC access. >>>> However, having channels filled with irrelevant messages is also not >>>> very accessible. >>>> >>>> This is temporarily enabled in #openstack-infra for the time being, >>> so >>>> we can co-ordinate without interruption. >>>> >>>> Thankfully AFAIK we have not needed an abuse policy on this before; >>>> but I guess we are the point we need some sort of coordinated >>>> response. >>>> >>>> I'd suggest to start, people with an interest in a channel can >>> request >>>> +r from an IRC admin in #openstack-infra and we track it at [2] >>>> >>>> Longer term ... suggestions welcome? :) >>>> >>>> >>>> Move to Slack? 
We can provide auto-sending to emails invitations for >>> joining by >>>> clicking the button on some page at openstack.org . >>> It >>>> will not add more berrier for new contributors and, at the same time, >>> this way >>>> will give some base filtering by emails at least. >>> >>> A few potential barriers with slack or similar solutions: lack of FOSS >>> desktop >>> clients (correct me if I'm wrong), >> >> >> The second link from google search gives an opensource client written in >> python https://github.com/raelgc/scudcloud . Also, there is something which >> is written in golang. >> >>> complete lack of any console clients (ditto), >>> >> >> Again, google gives several ones as first results - >> https://github.com/evanyeung/terminal-slack >> https://github.com/erroneousboat/slack-term >> >> serious limits on free (as in beer) tariff plans. >>> >>> >> I can make an assumption that for marketing reasons, Slack Inc can propose >> extended Free plan. >> But anyway, even with default one the only thing which can limit us is >> `10,000 searchable messages` which is bigger than 0 (freenode doesn't store >> messages). >> >> >> Why I like slack? because a lot of people are familar with it (a lot of >> companies use it as like some opensource communities, like k8s ) >> >> PS: I realize that OpenStack Community will never go away from Freenode and >> IRC, but I do not want to stay silent. > > We are unlikely to select slack because the platform itself is > proprietary, even if there are OSS clients. That said, there have > been some discussions about platforms such as Matrix, which is > similar to slack and also OSS. > > I think the main thing that is blocking any such move right now is > the fact that we're lacking someone with time to evaluate the tool > to see what it would take for us to run it. If you're interested in > this, maybe you can work with the infrastructure team to plan and > implement that evaluation? 
In Vancouver I signed up to work on this - but so far it has been lower in priority than other tasks. I'll circle around with people today and see what we think about relative priorities. That said - Doug's invitation is quite valid - help would be welcome and I'd be happy to connect with someone who has time to help with this. From rafaelweingartner at gmail.com Wed Aug 1 12:58:48 2018 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Wed, 1 Aug 2018 09:58:48 -0300 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> <1533127260-sup-4269@lrrr.local> Message-ID: What about Rocket chat instead of Slack? It is open source. https://github.com/RocketChat/Rocket.Chat Monty, what kind of evaluation would you guys need? I might be able to help. On Wed, Aug 1, 2018 at 9:54 AM, Monty Taylor wrote: > On 08/01/2018 07:44 AM, Doug Hellmann wrote: > >> Excerpts from Andrey Kurilin's message of 2018-08-01 15:21:37 +0300: >> >>> ср, 1 авг. 2018 г. в 14:11, Dmitry Tantsur : >>> >>> On 08/01/2018 12:49 PM, Andrey Kurilin wrote: >>>> >>>>> Hey Ian and stackers! >>>>> >>>>> ср, 1 авг. 2018 г. в 8:45, Ian Wienand >>>> >: >>>>> >>>>> Hello, >>>>> >>>>> It seems freenode is currently receiving a lot of unsolicited >>>>> traffic >>>>> across all channels. The freenode team are aware [1] and doing >>>>> their >>>>> best. >>>>> >>>>> There are not really a lot of options. We can set "+r" on >>>>> channels >>>>> which means only nickserv registered users can join channels. We >>>>> >>>> have >>>> >>>>> traditionally avoided this, because it is yet one more barrier to >>>>> communication when many are already unfamiliar with IRC access. >>>>> However, having channels filled with irrelevant messages is also >>>>> not >>>>> very accessible. 
>>>>> >>>>> This is temporarily enabled in #openstack-infra for the time >>>>> being, >>>>> >>>> so >>>> >>>>> we can co-ordinate without interruption. >>>>> >>>>> Thankfully AFAIK we have not needed an abuse policy on this >>>>> before; >>>>> but I guess we are the point we need some sort of coordinated >>>>> response. >>>>> >>>>> I'd suggest to start, people with an interest in a channel can >>>>> >>>> request >>>> >>>>> +r from an IRC admin in #openstack-infra and we track it at [2] >>>>> >>>>> Longer term ... suggestions welcome? :) >>>>> >>>>> >>>>> Move to Slack? We can provide auto-sending to emails invitations for >>>>> >>>> joining by >>>> >>>>> clicking the button on some page at openstack.org < >>>>> http://openstack.org>. >>>>> >>>> It >>>> >>>>> will not add more berrier for new contributors and, at the same time, >>>>> >>>> this way >>>> >>>>> will give some base filtering by emails at least. >>>>> >>>> >>>> A few potential barriers with slack or similar solutions: lack of FOSS >>>> desktop >>>> clients (correct me if I'm wrong), >>>> >>> >>> >>> The second link from google search gives an opensource client written in >>> python https://github.com/raelgc/scudcloud . Also, there is something >>> which >>> is written in golang. >>> >>> complete lack of any console clients (ditto), >>>> >>>> >>> Again, google gives several ones as first results - >>> https://github.com/evanyeung/terminal-slack >>> https://github.com/erroneousboat/slack-term >>> >>> serious limits on free (as in beer) tariff plans. >>> >>>> >>>> >>>> I can make an assumption that for marketing reasons, Slack Inc can >>> propose >>> extended Free plan. >>> But anyway, even with default one the only thing which can limit us is >>> `10,000 searchable messages` which is bigger than 0 (freenode doesn't >>> store >>> messages). >>> >>> >>> Why I like slack? 
because a lot of people are familar with it (a lot of >>> companies use it as like some opensource communities, like k8s ) >>> >>> PS: I realize that OpenStack Community will never go away from Freenode >>> and >>> IRC, but I do not want to stay silent. >>> >> >> We are unlikely to select slack because the platform itself is >> proprietary, even if there are OSS clients. That said, there have >> been some discussions about platforms such as Matrix, which is >> similar to slack and also OSS. >> >> I think the main thing that is blocking any such move right now is >> the fact that we're lacking someone with time to evaluate the tool >> to see what it would take for us to run it. If you're interested in >> this, maybe you can work with the infrastructure team to plan and >> implement that evaluation? >> > > In Vancouver I signed up to work on this - but so far it has been lower in > priority than other tasks. I'll circle around with people today and see > what we think about relative priorities. > > That said - Doug's invitation is quite valid - help would be welcome and > I'd be happy to connect with someone who has time to help with this. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andr.kurilin at gmail.com Wed Aug 1 13:17:47 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Wed, 1 Aug 2018 16:17:47 +0300 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <39149cb0-09f2-f081-fcb1-4044d251fc7a@inaugust.com> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <2805093.59XxL7INj6@whitebase.usersys.redhat.com> <39149cb0-09f2-f081-fcb1-4044d251fc7a@inaugust.com> Message-ID: ср, 1 авг. 2018 г. в 15:37, Monty Taylor : > On 08/01/2018 06:22 AM, Luigi Toscano wrote: > > On Wednesday, 1 August 2018 12:49:13 CEST Andrey Kurilin wrote: > >> Hey Ian and stackers! > >> > >> ср, 1 авг. 2018 г. в 8:45, Ian Wienand : > >>> Hello, > >>> > >>> It seems freenode is currently receiving a lot of unsolicited traffic > >>> across all channels. The freenode team are aware [1] and doing their > >>> best. > >>> > >>> There are not really a lot of options. We can set "+r" on channels > >>> which means only nickserv registered users can join channels. We have > >>> traditionally avoided this, because it is yet one more barrier to > >>> communication when many are already unfamiliar with IRC access. > >>> However, having channels filled with irrelevant messages is also not > >>> very accessible. > >>> > >>> This is temporarily enabled in #openstack-infra for the time being, so > >>> we can co-ordinate without interruption. > >>> > >>> Thankfully AFAIK we have not needed an abuse policy on this before; > >>> but I guess we are the point we need some sort of coordinated > >>> response. > >>> > >>> I'd suggest to start, people with an interest in a channel can request > >>> +r from an IRC admin in #openstack-infra and we track it at [2] >>> > >>> Longer term ... suggestions welcome? :) > >> > >> Move to Slack? We can provide auto-sending to emails invitations for > >> joining by clicking the button on some page at openstack.org. 
It will > not >> add more barriers for new contributors and, at the same time, this way > will >> give some base filtering by emails at least. > > slack is pretty unworkable for many reasons. The biggest of them is that > it is not Open Source and we don't require OpenStack developers to use > proprietary software to work on OpenStack. > > The quality of slack that makes it effective at fighting spam is also > the quality that makes it toxic as a community platform - the need for > an invitation and being structured as silos. > > Even if we were to decide to abandon our Open Source principles and > leave behind those in our contributor base who believe that Free > Software Needs Free Tools [1] - moving to slack would be a GIANT > undertaking. As such, it would not be a very effective way to deal with > this current spam storm. > > > No, please no. If we need to move to another service, better go to a > FLOSS > > one, like Matrix.org, or others. > > We had some discussion in Vancouver about investigating the use of > Matrix. We are a VERY large community, so we need to do scale and > viability testing before it's even a worthy topic to raise with the TC > and the community for consideration. If we did, we'd aim to run our own > home server. > The last paragraph is the best answer to why we will never switch away from IRC: "we are a VERY large community". Looking back at the migration to Zuul V3: a project written by folks who knew the potential high load and usage, a project with a great background. Some issues appeared only after launching it in production. Fortunately, the Zuul community quickly fixed them and we have this great CI system now. As for FOSS alternatives to Slack, aka modern IRC, I have not heard of anything scalable to the size we need. Also, in case of any issues, they will not be fixed as quickly as they were with Zuul V3 (thank you folks!). Another issue: the alternative should be popular, modern and usable.
IRC is the thing which is used by a lot of communities (i.e. you do not need to install some no-name tool to communicate on one more topic), the same goes for Slack, and I suppose some other tools have the same popularity (but I do not have them installed). If the alternative doesn't fit these criteria, a lot of people will stay at Freenode and the migration will fail. > However, it's worth noting that matrix is not immune to spam. As an open > federated protocol, it's a target as well. Running our own home server > might give us some additional tools - but it might not, and we might be > in the same scenario except now we're running another service and we had > the pain of moving. > > All that to say though, matrix seems like the best potential option > available that meets the largest number of desires from our user base. > Once we've checked it out for viability it might be worth discussing. > > As above, any effort there is a pretty giant one that will require a > large amount of planning, a pretty sizeable amount of technical > preparation and would be disruptive at the least. I don't think that'll > help us with the current spam storm though. > > Monty > > [1] https://mako.cc/writing/hill-free_tools.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mordred at inaugust.com Wed Aug 1 13:24:45 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 1 Aug 2018 08:24:45 -0500 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> Message-ID: <5f2d90fc-b96f-2284-3b86-fb6e2c6fbcc1@inaugust.com> On 08/01/2018 12:45 AM, Ian Wienand wrote: > Hello, > > It seems freenode is currently receiving a lot of unsolicited traffic > across all channels.  The freenode team are aware [1] and doing their > best. > > There are not really a lot of options.  We can set "+r" on channels > which means only nickserv registered users can join channels.  We have > traditionally avoided this, because it is yet one more barrier to > communication when many are already unfamiliar with IRC access. > However, having channels filled with irrelevant messages is also not > very accessible. > > This is temporarily enabled in #openstack-infra for the time being, so > we can co-ordinate without interruption. > > Thankfully AFAIK we have not needed an abuse policy on this before; > but I guess we are the point we need some sort of coordinated > response. > > I'd suggest to start, people with an interest in a channel can request > +r from an IRC admin in #openstack-infra and we track it at [2] To mitigate the pain caused by +r - we have created a channel called #openstack-unregistered and have configured the channels with the +r flag to forward people to it. We have also set an entrymsg on #openstack-unregistered to: "Due to a prolonged SPAM attack on freenode, we had to configure OpenStack channels to require users to be registered. If you are here, you tried to join a channel without being logged in. Please see https://freenode.net/kb/answer/registration for instructions on registration with NickServ, and make sure you are logged in." 
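[For reference: the setup described above corresponds roughly to freenode's channel forward mode plus a ChanServ entry message. The commands below are illustrative of that pattern, not a transcript of what was actually run:]

```
/mode #openstack-dev +rf #openstack-unregistered
/msg ChanServ SET #openstack-unregistered ENTRYMSG Due to a prolonged SPAM attack on freenode, we had to configure OpenStack channels to require users to be registered. [...]
```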
So anyone attempting to join a channel with +r should get that message. From doug at doughellmann.com Wed Aug 1 13:27:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 01 Aug 2018 09:27:09 -0400 Subject: [openstack-dev] =?utf-8?q?=5Boslo=5D_proposing_Mois=C3=A9s_Guimar?= =?utf-8?q?=C3=A3es_for_oslo=2Econfig_core?= Message-ID: <1533129742-sup-2007@lrrr.local> Moisés Guimarães (moguimar) did quite a bit of work on oslo.config during the Rocky cycle to add driver support. Based on that work, and a discussion we have had since then about general cleanup needed in oslo.config, I think he would make a good addition to the oslo.config review team. Please indicate your approval or concerns with +1/-1. Doug From fungi at yuggoth.org Wed Aug 1 13:38:29 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 1 Aug 2018 13:38:29 +0000 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> <1533127260-sup-4269@lrrr.local> Message-ID: <20180801133829.ihfvnbmmghlqgosg@yuggoth.org> On 2018-08-01 09:58:48 -0300 (-0300), Rafael Weingärtner wrote: > What about Rocket chat instead of Slack? It is open source. > https://github.com/RocketChat/Rocket.Chat > > Monty, what kind of evaluation would you guys need? I might be > able to help. Consider reading and possibly resurrecting the infra spec for it: https://review.openstack.org/319506 My main concern is how we'll go about authenticating and policing whatever gateway we set up. As soon as spammers and other abusers find out there's an open (or nearly so) proxy to a major IRC network, they'll use it to hide their origins from the IRC server operators and put us in the middle of the problem. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed Aug 1 13:44:35 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 1 Aug 2018 13:44:35 +0000 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <2805093.59XxL7INj6@whitebase.usersys.redhat.com> <39149cb0-09f2-f081-fcb1-4044d251fc7a@inaugust.com> Message-ID: <20180801134434.7xpzkznzwqt23gur@yuggoth.org> On 2018-08-01 16:17:47 +0300 (+0300), Andrey Kurilin wrote: [...] > If the alternative doesn't feet these criteria, a lot of people > will stay at Freenode and migration will fail. [...] We've had discussions off and on for years about moving from Freenode to OFTC (whose ideals more closely reflect those of our community), but even with that our biggest fear was disruption and fracturing with some people holding conversations in one network and some in the other. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From thierry at openstack.org Wed Aug 1 13:49:16 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 1 Aug 2018 15:49:16 +0200 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <39149cb0-09f2-f081-fcb1-4044d251fc7a@inaugust.com> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <2805093.59XxL7INj6@whitebase.usersys.redhat.com> <39149cb0-09f2-f081-fcb1-4044d251fc7a@inaugust.com> Message-ID: Monty Taylor wrote: > [...] > However, it's worth noting that matrix is not immune to spam. As an open > federated protocol, it's a target as well. Running our own home server > might give us some additional tools - but it might not, and we might be > in the same scenario except now we're running another service and we had > the pain of moving. > [...] 
Any open communication platform is subject to spam. As long as you let anonymous users join and post stuff, it will happen as soon as the platform reaches a certain critical mass. Slack is not immune to this: it has spam too, and the platform being outside of your control limits[1] your options. Freenode/IRC is a bit bad because it does not make it easy to /deal/ with spam. The protocol being designed at a time where it was costly to switch IPs, you can ignore people/hosts, but not messages based on key words. As we look into alternatives, we should evaluate their spam-filtering abilities... [1] https://www.reddit.com/r/Slack/comments/71bd1h/need_help_preventing_pm_spam/ -- Thierry Carrez (ttx) From hrybacki at redhat.com Wed Aug 1 14:03:41 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Wed, 1 Aug 2018 10:03:41 -0400 Subject: [openstack-dev] =?utf-8?q?=5Boslo=5D_proposing_Mois=C3=A9s_Guimar?= =?utf-8?q?=C3=A3es_for_oslo=2Econfig_core?= In-Reply-To: <1533129742-sup-2007@lrrr.local> References: <1533129742-sup-2007@lrrr.local> Message-ID: On Wed, Aug 1, 2018 at 9:28 AM Doug Hellmann wrote: > > Moisés Guimarães (moguimar) did quite a bit of work on oslo.config > during the Rocky cycle to add driver support. Based on that work, > and a discussion we have had since then about general cleanup needed > in oslo.config, I think he would make a good addition to the > oslo.config review team. > > Please indicate your approval or concerns with +1/-1. > +1! 
> Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Wed Aug 1 14:12:48 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 1 Aug 2018 09:12:48 -0500 Subject: [openstack-dev] [tripleo][ci][metrics] Stucked in the middle of work because of RDO CI In-Reply-To: References: Message-ID: <6010469f-609a-7494-2457-887874c61850@nemebean.com> On 07/31/2018 04:51 PM, Wesley Hayutin wrote: > > > On Tue, Jul 31, 2018 at 7:41 AM Sagi Shnaidman > wrote: > > Hi, Martin > > I see master OVB jobs are passing now [1], please recheck. > > [1] http://cistatus.tripleo.org/ > > > Things have improved and I see a lot of jobs passing however at the same > time I see too many jobs failing due to node_failures.  We are tracking > the data from [1].  Certainly the issue is NOT ideal for development and > we need to remain focused on improving the situation. I assume you're aware, but just to update the thread it looks like the OVB jobs are failing at a 50%+ rate again today (mostly unknown failures according to the tracking app). Even with only two jobs that means your odds of getting them both to pass are pretty bad. > > Thanks > > [1] https://softwarefactory-project.io/zuul/api/tenant/rdoproject.org/builds > > > > On Tue, Jul 31, 2018 at 12:24 PM, Martin Magr > wrote: > > Greetings guys, > >   it is pretty obvious that RDO CI jobs in TripleO projects are > broken [0]. Once Zuul CI jobs will pass would it be possible to > have AMQP/collectd patches ([1],[2],[3]) merged please even > though the negative result of RDO CI jobs? Half of the patches > for this feature is merged and the other half is stucked in this > situation, were nobody reviews these patches, because there is > red -1. 
Those patches passed Zuul jobs several times already and > were manually tested too. > > Thanks in advance for consideration of this situation, > Martin > > [0] > https://trello.com/c/hkvfxAdX/667-cixtripleoci-rdo-software-factory-3rd-party-jobs-failing-due-to-instance-nodefailure > [1] https://review.openstack.org/#/c/578749 > [2] https://review.openstack.org/#/c/576057/ > [3] https://review.openstack.org/#/c/572312/ > > -- > Martin Mágr > Senior Software Engineer > Red Hat Czech > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Best regards > Sagi Shnaidman > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > > Wes Hayutin > > Associate MANAGER > > Red Hat > > > > w hayutin at redhat.com >  T: +1919 4232509     IRC: > weshay > > > > > Viewmycalendar and check my availability for meetings HERE > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From neil at tigera.io Wed Aug 1 14:14:02 2018 From: neil at tigera.io (Neil Jerram) Date: Wed, 1 Aug 2018 15:14:02 +0100 Subject: [openstack-dev] [neutron] Credentials for a Keystone DB lookup from Neutron In-Reply-To: References: <19a9a4f8-adba-4120-a4f8-3401f74b6b25@gmail.com> Message-ID: Hoping I can leverage the wisdom of the ML for a follow-up point on this topic... 
I've coded this up now following Aditya's suggestion, but the issue now is about the Neutron user (or more generally, whatever is configured in neutron.conf's [keystone_authtoken] section) being authorized to do a keystone.projects.list(). IIUC that is considered to be an admin operation, and the Neutron user is not normally authorized to do it. However, it seems reasonable to me that the operator of a particular deployment can choose to allow that if they want to, and I see two approaches to doing that: 1. Add the "admin" role to the "neutron" user. I have code for this that seems to work, and append it below for interest and review [1]. 2. Modify Keystone's RBAC specifically to allow "neutron" to do "get_projects". I don't yet know how to do this, though. My questions are: Am I in the right ballpark here? Are there any other approaches to allowing this access, ideally as specifically as possible? And in case (2) is the preferred approach, can you point me to how to modify that RBAC? Many thanks, Neil [1] Apparently working code for adding the "admin" role to the "neutron" user: # Admin client setup: >>> auth_url="http://controller:35357/v3" >>> name = "admin" >>> password = "abcdef" >>> from keystoneauth1 import identity >>> auth = identity.Password(auth_url=auth_url, ... username=name, ... password=password, ... project_name=name, ... project_domain_id="default", ... 
user_domain_id="default") >>> from keystoneauth1 import session >>> session = session.Session(auth=auth) >>> from keystoneclient.v3.client import Client as KeystoneClient >>> keystone_client = KeystoneClient(session=session) # Identify which role is the "admin" one: >>> roles=keystone_client.roles.list() >>> roles[0] >>> roles[1] # Identify which project is the "service" one: >>> projects=keystone_client.projects.list() >>> projects[0] >>> projects[1] # Identify which user is the "neutron" one: >>> users=keystone_client.users.list() >>> users[0] # Grant "admin" role to "neutron": >>> keystone_client.roles.grant(roles[1], user=users[0], project=projects[1]) # And to revoke that again: >>> keystone_client.roles.revoke(roles[1], user=users[0], project=projects[1]) On Tue, Jul 17, 2018 at 7:17 PM Neil Jerram wrote: > Thanks Aditya, that looks like just what I need. > > Best wishes, > Neil > > > On Tue, Jul 17, 2018 at 5:48 PM Aditya Vaja > wrote: > >> hey neil, >> >> neutron.conf has a section called '[keystone_authtoken]’ which has >> credentials to query keystone as neutron. you can read the config as you’d >> typically do from the mechanism driver for any other property using >> oslo.config. >> >> you could then use python-keystoneclient with those creds to query the >> mapping. a sample is given in the keystoneclient repo [1]. >> >> via telegram >> >> [1] >> https://github.com/openstack/python-keystoneclient/blob/650716d0dd30a73ccabe3f0ec20eb722ca0d70d4/keystoneclient/v3/client.py#L102-L116 >> On Tue, Jul 17, 2018 at 9:58 PM, Neil Jerram wrote: >> >> On Tue, Jul 17, 2018 at 3:55 PM Jay Pipes wrote: >> >>> On 07/17/2018 03:36 AM, Neil Jerram wrote: >>> > Can someone help me with how to look up a project name (aka tenant >>> name) >>> > for a known project/tenant ID, from code (specifically a mechanism >>> > driver) running in the Neutron server? 
>>> > >>> > I believe that means I need to make a GET REST call as here: >>> > >>> https://developer.openstack.org/api-ref/identity/v3/index.html#projects. >>> But >>> > I don't yet understand how a piece of Neutron server code can ensure >>> > that it has the right credentials to do that. If someone happens to >>> > have actual code for doing this, I'm sure that would be very helpful. >>> > >>> > (I'm aware that whenever the Neutron server processes an API request, >>> > the project name for the project that generated that request is added >>> > into the request context. That is great when my code is running in an >>> > API request context. But there are other times when the code isn't in >>> a >>> > request context and still needs to map from a project ID to project >>> > name; hence the question here.) >>> >>> Hi Neil, >>> >>> You basically answered your own question above :) The neutron request >>> context gets built from oslo.context's Context.from_environ() [1] which >>> has this note in the implementation [2]: >>> >>> # Load a new context object from the environment variables set by >>> # auth_token middleware. See: >>> # >>> >>> https://docs.openstack.org/keystonemiddleware/latest/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service >>> >>> So, basically, simply look at the HTTP headers for HTTP_X_PROJECT_NAME. >>> If you don't have access to a HTTP headers, then you'll need to pass >>> some context object/struct to the code you're referring to. Might as >>> well pass the neutron RequestContext (derived from oslo_context.Context) >>> to the code you're referring to and you get all this for free. 
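[To make the header-to-context mapping concrete, here is a minimal, self-contained sketch. The helper name is made up for illustration; the real from_environ() in oslo.context handles many more variables than the two shown here:]

```python
# Hypothetical helper sketching what oslo.context's from_environ() does
# for the project fields set by keystonemiddleware's auth_token.
def project_from_environ(environ):
    return {
        "project_id": environ.get("HTTP_X_PROJECT_ID"),
        "project_name": environ.get("HTTP_X_PROJECT_NAME"),
    }

# Example WSGI environ as auth_token middleware would populate it
# (values here are made up):
environ = {
    "HTTP_X_PROJECT_ID": "8ad1f28f8e0b4b26ba4d0070d6e4d4c9",
    "HTTP_X_PROJECT_NAME": "demo",
}
print(project_from_environ(environ)["project_name"])  # demo
```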
>>> >>> Best, >>> -jay >>> >>> [1] >>> >>> https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L424 >>> >>> [2] >>> >>> https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L433-L435 >> >> >> Many thanks for this reply, Jay. >> >> If I'm understanding fully, I believe it all works beautifully so long as >> the Neutron server is processing a specific API request, e.g. a port CRUD >> operation. Then, as you say, the RequestContext includes the name of the >> project/tenant that originated that request. >> >> I have an additional requirement, though, to do a occasional audit of >> standing resources in the Neutron DB, and to check that my mechanism >> driver's programming for them is correct. To do that, I have an independent >> eventlet thread that runs in admin context and occasionally queries Neutron >> resources, e.g. all the ports. For each port, the Neutron DB data includes >> the project_id, but not project_name, and I'd like at that point to be able >> to map from the project_id for each port to project_name. >> >> Do you have any thoughts on how I could do that? (E.g. perhaps there is >> some way of generating and looping round a request with the project_id, >> such that the middleware populates the project_name... but that sounds a >> bit baroque; I would hope that there would be a way of doing a simpler >> Keystone DB lookup.) 
>> >> Regards, >> Neil >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mjturek at linux.vnet.ibm.com Wed Aug 1 14:21:07 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Wed, 1 Aug 2018 10:21:07 -0400 Subject: [openstack-dev] [ironic] August bug day tomorrow! (August 2nd 13:00 UTC to 14:00 UTC) Message-ID: <6a96632e-8543-fa1c-0e0d-9962cbde6582@linux.vnet.ibm.com> Hey all, Welcome to August! Tomorrow is the first Thursday of the month so bug day is once again upon us. For details please see the etherpad [0]. If you have ideas for how we can improve Bug Day, or how the agenda should be structured, let me know! I hope to see you there tomorrow Thanks, Mike Turek [0] https://etherpad.openstack.org/p/ironic-bug-day-august-2018 From john.griffith8 at gmail.com Wed Aug 1 14:25:30 2018 From: john.griffith8 at gmail.com (John Griffith) Date: Wed, 1 Aug 2018 08:25:30 -0600 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: References: <20180716092027.pc43radmozdgndd5@localhost> Message-ID: On Fri, Jul 27, 2018 at 8:44 AM Matt Riedemann wrote: > On 7/16/2018 4:20 AM, Gorka Eguileor wrote: > > If I remember correctly the driver was deprecated because it had no > > maintainer or CI. 
In Cinder we require our drivers to have both, > > otherwise we can't guarantee that they actually work or that anyone will > > fix it if it gets broken. > > Would this really require 3rd party CI if it's just local block storage > on the compute node (in devstack)? We could do that with an upstream CI > job right? We already have upstream CI jobs for things like rbd and nfs. > The 3rd party CI requirements generally are for proprietary storage > backends. > > I'm only asking about the CI side of this, the other notes from Sean > about tweaking the LVM volume backend and feature parity are good > reasons for removal of the unmaintained driver. > > Another option is using the nova + libvirt + lvm image backend for local > (to the VM) ephemeral disk: > > > https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/virt/libvirt/imagebackend.py#L653 > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev We've had this conversation multiple times, here were the results from past conversations and the reasons we deprecated: 1. Driver was not being tested at all (no CI, no upstream tests etc) 2. We sent out numerous requests trying to determine if anybody was using the driver, didn't receive much feedback 3. The driver didn't work for an entire release, this indicated that perhaps it wasn't that valuable 4. The driver is unable to implement a number of the required features for a Cinder Block Device 5. Digging deeper into performance tests most comparisons were doing things like a. Using the shared single nic that's used for all of the cluster communications (ie DB, APIs, Rabbit etc) b. 
Misconfigured deployment, ie using a 1Gig Nic for iSCSI connections (also see above) The decision was that raw-block was not by definition a "Cinder Device", and given that it wasn't really tested or maintained that it should be removed. LVM is actually quite good, we did some pretty extensive testing and even presented it as a session in Barcelona that showed perf within approximately 10%. I'm skeptical any time I see dramatic comparisons of 1/2 performance, but I could be completely wrong. I would be much more interested in putting efforts towards trying to figure out why you have such a large perf delta and see if we can address that as opposed to trying to bring back and maintain a driver that only half works. Or as Jay Pipes mentioned, don't use Cinder in your case. Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaosorior at gmail.com Wed Aug 1 14:33:21 2018 From: jaosorior at gmail.com (Juan Antonio Osorio) Date: Wed, 1 Aug 2018 17:33:21 +0300 Subject: [openstack-dev] =?utf-8?q?=5Boslo=5D_proposing_Mois=C3=A9s_Guimar?= =?utf-8?q?=C3=A3es_for_oslo=2Econfig_core?= In-Reply-To: <1533129742-sup-2007@lrrr.local> References: <1533129742-sup-2007@lrrr.local> Message-ID: Yeah! +1 Moisés has been doing a great job there On Wed, 1 Aug 2018, 16:27 Doug Hellmann, wrote: > Moisés Guimarães (moguimar) did quite a bit of work on oslo.config > during the Rocky cycle to add driver support. Based on that work, > and a discussion we have had since then about general cleanup needed > in oslo.config, I think he would make a good addition to the > oslo.config review team. > > Please indicate your approval or concerns with +1/-1. 
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davanum at gmail.com Wed Aug 1 14:43:01 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Wed, 1 Aug 2018 07:43:01 -0700 Subject: [openstack-dev] =?utf-8?q?=5Boslo=5D_proposing_Mois=C3=A9s_Guimar?= =?utf-8?q?=C3=A3es_for_oslo=2Econfig_core?= In-Reply-To: <1533129742-sup-2007@lrrr.local> References: <1533129742-sup-2007@lrrr.local> Message-ID: +1 from me! On Wed, Aug 1, 2018 at 6:27 AM Doug Hellmann wrote: > > Moisés Guimarães (moguimar) did quite a bit of work on oslo.config > during the Rocky cycle to add driver support. Based on that work, > and a discussion we have had since then about general cleanup needed > in oslo.config, I think he would make a good addition to the > oslo.config review team. > > Please indicate your approval or concerns with +1/-1. 
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From openstack at nemebean.com Wed Aug 1 14:48:44 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 1 Aug 2018 09:48:44 -0500 Subject: [openstack-dev] =?utf-8?q?=5Boslo=5D_proposing_Mois=C3=A9s_Guimar?= =?utf-8?q?=C3=A3es_for_oslo=2Econfig_core?= In-Reply-To: <1533129742-sup-2007@lrrr.local> References: <1533129742-sup-2007@lrrr.local> Message-ID: <21b41ee4-9ea3-b67e-5bd9-319bc7276d79@nemebean.com> +1 On 08/01/2018 08:27 AM, Doug Hellmann wrote: > Moisés Guimarães (moguimar) did quite a bit of work on oslo.config > during the Rocky cycle to add driver support. Based on that work, > and a discussion we have had since then about general cleanup needed > in oslo.config, I think he would make a good addition to the > oslo.config review team. > > Please indicate your approval or concerns with +1/-1. 
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From rmascena at redhat.com Wed Aug 1 14:52:33 2018 From: rmascena at redhat.com (Raildo Mascena de Sousa Filho) Date: Wed, 1 Aug 2018 11:52:33 -0300 Subject: [openstack-dev] =?utf-8?q?=5Boslo=5D_proposing_Mois=C3=A9s_Guimar?= =?utf-8?q?=C3=A3es_for_oslo=2Econfig_core?= In-Reply-To: <21b41ee4-9ea3-b67e-5bd9-319bc7276d79@nemebean.com> References: <1533129742-sup-2007@lrrr.local> <21b41ee4-9ea3-b67e-5bd9-319bc7276d79@nemebean.com> Message-ID: +1 On Wed, Aug 1, 2018 at 11:49 AM Ben Nemec wrote: > +1 > > On 08/01/2018 08:27 AM, Doug Hellmann wrote: > > Moisés Guimarães (moguimar) did quite a bit of work on oslo.config > > during the Rocky cycle to add driver support. Based on that work, > > and a discussion we have had since then about general cleanup needed > > in oslo.config, I think he would make a good addition to the > > oslo.config review team. > > > > Please indicate your approval or concerns with +1/-1. > > > > Doug > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Raildo mascena Software Engineer, Identity Managment Red Hat TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mordred at inaugust.com Wed Aug 1 14:58:03 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 1 Aug 2018 09:58:03 -0500 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <20180801133829.ihfvnbmmghlqgosg@yuggoth.org> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> <1533127260-sup-4269@lrrr.local> <20180801133829.ihfvnbmmghlqgosg@yuggoth.org> Message-ID: <6d7a88b7-8897-6732-9df3-b10ca95f0078@inaugust.com> On 08/01/2018 08:38 AM, Jeremy Stanley wrote: > On 2018-08-01 09:58:48 -0300 (-0300), Rafael Weingärtner wrote: >> What about Rocket chat instead of Slack? It is open source. >> https://github.com/RocketChat/Rocket.Chat >> >> Monty, what kind of evaluation would you guys need? I might be >> able to help. > > Consider reading and possibly resurrecting the infra spec for it: > > https://review.openstack.org/319506 > > My main concern is how we'll go about authenticating and policing > whatever gateway we set up. As soon as spammers and other abusers > find out there's an open (or nearly so) proxy to a major IRC > network, they'll use it to hide their origins from the IRC server > operators and put us in the middle of the problem. To be clear -- I was not suggesting running matrix and IRC. I was suggesting investigating running a matrix home server and the permanently moving all openstack channels to it. matrix synapse supports federated identity providers with saml and cas support implemented. I would imagine we'd want to configure it to federate to openstackid for logging in to the home server -so that might involve either adding saml support to openstackid or writing an openid-connect driver to synapse. 
From anteaya at anteaya.info Wed Aug 1 15:04:44 2018 From: anteaya at anteaya.info (Anita Kuno) Date: Wed, 1 Aug 2018 11:04:44 -0400 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <5f2d90fc-b96f-2284-3b86-fb6e2c6fbcc1@inaugust.com> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <5f2d90fc-b96f-2284-3b86-fb6e2c6fbcc1@inaugust.com> Message-ID: <461fd76d-51cd-0d0d-d290-fa3005f8c880@anteaya.info> On 2018-08-01 09:24 AM, Monty Taylor wrote: > On 08/01/2018 12:45 AM, Ian Wienand wrote: >> Hello, >> >> It seems freenode is currently receiving a lot of unsolicited traffic >> across all channels.  The freenode team are aware [1] and doing their >> best. >> >> There are not really a lot of options.  We can set "+r" on channels >> which means only nickserv registered users can join channels.  We have >> traditionally avoided this, because it is yet one more barrier to >> communication when many are already unfamiliar with IRC access. >> However, having channels filled with irrelevant messages is also not >> very accessible. >> >> This is temporarily enabled in #openstack-infra for the time being, so >> we can co-ordinate without interruption. >> >> Thankfully AFAIK we have not needed an abuse policy on this before; >> but I guess we are the point we need some sort of coordinated >> response. >> >> I'd suggest to start, people with an interest in a channel can request >> +r from an IRC admin in #openstack-infra and we track it at [2] > > To mitigate the pain caused by +r - we have created a channel called > #openstack-unregistered and have configured the channels with the +r > flag to forward people to it. We have also set an entrymsg on > #openstack-unregistered to: > > "Due to a prolonged SPAM attack on freenode, we had to configure > OpenStack channels to require users to be registered. If you are here, > you tried to join a channel without being logged in. 
Please see > https://freenode.net/kb/answer/registration for instructions on > registration with NickServ, and make sure you are logged in." > > So anyone attempting to join a channel with +r should get that message. I can confirm this worked for me as advertised with my nick unregistered. Thank you, Anita > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Wed Aug 1 15:15:40 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 1 Aug 2018 10:15:40 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement Message-ID: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> Hi, I'm having an issue with no valid host errors when starting instances and I'm struggling to figure out why. I thought the problem was disk space, but I changed the disk_allocation_ratio and I'm still getting no valid host. The host does have plenty of disk space free, so that shouldn't be a problem. However, I'm not even sure it's disk that's causing the failures because I can't find any information in the logs about why the no valid host is happening. All I get from the scheduler is: "Got no allocation candidates from the Placement API. This may be a temporary occurrence as compute nodes start up and begin reporting inventory to the Placement service." 
While in placement I see: 2018-08-01 15:02:22.062 20 DEBUG nova.api.openstack.placement.requestlog [req-0a830ce9-e2af-413a-86cb-b47ae129b676 fc44fe5cefef43f4b921b9123c95e694 b07e6dc2e6284b00ac7070aa3457c15e - default default] Starting request: 10.2.2.201 "GET /placement/allocation_candidates?limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1" __call__ /usr/lib/python2.7/site-packages/nova/api/openstack/placement/requestlog.py:38 2018-08-01 15:02:22.103 20 INFO nova.api.openstack.placement.requestlog [req-0a830ce9-e2af-413a-86cb-b47ae129b676 fc44fe5cefef43f4b921b9123c95e694 b07e6dc2e6284b00ac7070aa3457c15e - default default] 10.2.2.201 "GET /placement/allocation_candidates?limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1" status: 200 len: 53 microversion: 1.25 Basically it just seems to be logging that it got a request, but there's no information about what it did with that request. So where do I go from here? Is there somewhere else I can look to see why placement returned no candidates? Thanks. -Ben From e0ne at e0ne.info Wed Aug 1 15:18:16 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 1 Aug 2018 18:18:16 +0300 Subject: [openstack-dev] [horizon] PTL on vacation: what to do with meeting and RC1? Message-ID: Hi team, I'll be on PTO next week. Are we OK to cancel the next meeting? I remember that next week is a deadline for RC1, so I'm going to propose a patch to release it a bit later next week: Tuesday or Wednesday. In an emergency case, please, reach me via e-mail because I'll have a limited internet access so I'll be mostly offline in IRC. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... 
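A note for readers tracing the placement thread above: the `resources` query parameter in the quoted request log is URL-encoded. A minimal Python sketch, using only the standard library and the exact request path from the log, decodes it back into the resource amounts the scheduler asked for:

```python
from urllib.parse import parse_qs, urlsplit

# The request path exactly as it appears in the placement request log above.
logged = ("/placement/allocation_candidates"
          "?limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1")

# parse_qs undoes the percent-encoding (%3A -> ':', %2C -> ',').
query = parse_qs(urlsplit(logged).query)
resources = dict(item.split(":") for item in query["resources"][0].split(","))
print(resources)  # {'DISK_GB': '20', 'MEMORY_MB': '2048', 'VCPU': '1'}
```

In other words, the scheduler asked placement for candidates with 20 GB of disk, 2048 MB of RAM, and 1 VCPU — the flavor being booted — and got back an empty candidate list.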
URL: From jdennis at redhat.com Wed Aug 1 15:38:26 2018 From: jdennis at redhat.com (John Dennis) Date: Wed, 1 Aug 2018 11:38:26 -0400 Subject: [openstack-dev] =?utf-8?q?=5Boslo=5D_proposing_Mois=C3=A9s_Guimar?= =?utf-8?q?=C3=A3es_for_oslo=2Econfig_core?= In-Reply-To: <1533129742-sup-2007@lrrr.local> References: <1533129742-sup-2007@lrrr.local> Message-ID: On 08/01/2018 09:27 AM, Doug Hellmann wrote: > Moisés Guimarães (moguimar) did quite a bit of work on oslo.config > during the Rocky cycle to add driver support. Based on that work, > and a discussion we have had since then about general cleanup needed > in oslo.config, I think he would make a good addition to the > oslo.config review team. > > Please indicate your approval or concerns with +1/-1. +1 -- John Dennis From corvus at inaugust.com Wed Aug 1 15:40:51 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 01 Aug 2018 08:40:51 -0700 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <5f2d90fc-b96f-2284-3b86-fb6e2c6fbcc1@inaugust.com> (Monty Taylor's message of "Wed, 1 Aug 2018 08:24:45 -0500") References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <5f2d90fc-b96f-2284-3b86-fb6e2c6fbcc1@inaugust.com> Message-ID: <87bmamayd8.fsf@meyer.lemoncheese.net> Monty Taylor writes: > On 08/01/2018 12:45 AM, Ian Wienand wrote: >> Hello, >> I'd suggest to start, people with an interest in a channel can request >> +r from an IRC admin in #openstack-infra and we track it at [2] > > To mitigate the pain caused by +r - we have created a channel called > #openstack-unregistered and have configured the channels with the +r > flag to forward people to it. We have also set an entrymsg on > #openstack-unregistered to: > > "Due to a prolonged SPAM attack on freenode, we had to configure > OpenStack channels to require users to be registered. If you are here, > you tried to join a channel without being logged in. 
Please see > https://freenode.net/kb/answer/registration for instructions on > registration with NickServ, and make sure you are logged in." > > So anyone attempting to join a channel with +r should get that message. It turns out this was a very popular option, so we've gone ahead and performed this for all channels registered with accessbot. If you're in a channel that still needs this, please add it to the accessbot channel list[1] and let us know in #openstack-infra. Also, if folks would be willing to lurk in #openstack-unregistered to help anyone who ends up there by surprise and is unfamiliar with how to register with nickserv, that would be great. -Jim [1] https://git.openstack.org/cgit/openstack-infra/project-config/tree/accessbot/channels.yaml From e0ne at e0ne.info Wed Aug 1 15:51:23 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 1 Aug 2018 18:51:23 +0300 Subject: [openstack-dev] [requirements][ffe] Critical bug found in python-cinderclient In-Reply-To: <1533066509-sup-992@lrrr.local> References: <20180731191507.GA4366@sm-workstation> <1533066509-sup-992@lrrr.local> Message-ID: Hi, I'm OK with this release both from Cinder and Horizon perspective. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Tue, Jul 31, 2018 at 10:50 PM, Doug Hellmann wrote: > Excerpts from Sean McGinnis's message of 2018-07-31 14:15:08 -0500: > > A critical bug has been found in python-cinderclient that is impacting > both > > horizon and python-openstackclient (at least). > > > > https://bugs.launchpad.net/cinder/+bug/1784703 > > > > tl;dr is, something new was added with a microversion, but support for > that was > > done incorrectly such that nothing less than that new microversion would > be > > allowed. This patch addresses the issue: > > > > https://review.openstack.org/587601 > > > > Once that lands we will need a new python-cinderclient release to unbreak > > clients. 
We may want to blacklist python-cinderclient 4.0.0, but I think > at > > least just raising the upper-constraints should get things working again. > > > > Sean > > > > Both adding the exclusion and changing the upper constraint makes sense, > since it will ensure that bad version never makes it back into the > constraints list. > > We don't need to sync the exclusion setting into all of the projects > that depend on the client, so we won't need a new release of any of the > downstream consumers. > > We could add the exclusion to OSC on master, just for accuracy's sake. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avolkov at mirantis.com Wed Aug 1 15:58:02 2018 From: avolkov at mirantis.com (Andrey Volkov) Date: Wed, 1 Aug 2018 18:58:02 +0300 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> Message-ID: Hi, It seems you need first to check what placement knows about resources of your cloud. This can be done either with REST API [1] or with osc-placement [2]. For osc-placement you could use: pip install osc-placement openstack allocation candidate list --resource DISK_GB=20 --resource MEMORY_MB=2048 --resource VCPU=1 --os-placement-api-version 1.10 And you can explore placement state with other commands like openstack resource provider list, resource provider inventory list, resource provider usage show. 
[1] https://developer.openstack.org/api-ref/placement/ [2] https://docs.openstack.org/osc-placement/latest/index.html On Wed, Aug 1, 2018 at 6:16 PM Ben Nemec wrote: > Hi, > > I'm having an issue with no valid host errors when starting instances > and I'm struggling to figure out why. I thought the problem was disk > space, but I changed the disk_allocation_ratio and I'm still getting no > valid host. The host does have plenty of disk space free, so that > shouldn't be a problem. > > However, I'm not even sure it's disk that's causing the failures because > I can't find any information in the logs about why the no valid host is > happening. All I get from the scheduler is: > > "Got no allocation candidates from the Placement API. This may be a > temporary occurrence as compute nodes start up and begin reporting > inventory to the Placement service." > > While in placement I see: > > 2018-08-01 15:02:22.062 20 DEBUG nova.api.openstack.placement.requestlog > [req-0a830ce9-e2af-413a-86cb-b47ae129b676 > fc44fe5cefef43f4b921b9123c95e694 b07e6dc2e6284b00ac7070aa3457c15e - > default default] Starting request: 10.2.2.201 "GET > /placement/allocation_candidates?limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1" > > __call__ > > /usr/lib/python2.7/site-packages/nova/api/openstack/placement/requestlog.py:38 > 2018-08-01 15:02:22.103 20 INFO nova.api.openstack.placement.requestlog > [req-0a830ce9-e2af-413a-86cb-b47ae129b676 > fc44fe5cefef43f4b921b9123c95e694 b07e6dc2e6284b00ac7070aa3457c15e - > default default] 10.2.2.201 "GET > /placement/allocation_candidates?limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1" > > status: 200 len: 53 microversion: 1.25 > > Basically it just seems to be logging that it got a request, but there's > no information about what it did with that request. > > So where do I go from here? Is there somewhere else I can look to see > why placement returned no candidates? > > Thanks. 
> > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanks, Andrey Volkov, Software Engineer, Mirantis, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Wed Aug 1 16:12:02 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 1 Aug 2018 11:12:02 -0500 Subject: [openstack-dev] [requirements][ffe] Critical bug found in python-cinderclient In-Reply-To: <1533066509-sup-992@lrrr.local> References: <20180731191507.GA4366@sm-workstation> <1533066509-sup-992@lrrr.local> Message-ID: <20180801161202.szodk2phagz37k67@gentoo.org> On 18-07-31 15:50:42, Doug Hellmann wrote: > Excerpts from Sean McGinnis's message of 2018-07-31 14:15:08 -0500: > > A critical bug has been found in python-cinderclient that is impacting both > > horizon and python-openstackclient (at least). > > > > https://bugs.launchpad.net/cinder/+bug/1784703 > > > > tl;dr is, something new was added with a microversion, but support for that was > > done incorrectly such that nothing less than that new microversion would be > > allowed. This patch addresses the issue: > > > > https://review.openstack.org/587601 > > > > Once that lands we will need a new python-cinderclient release to unbreak > > clients. We may want to blacklist python-cinderclient 4.0.0, but I think at > > least just raising the upper-constraints should get things working again. > > > > Sean > > > > Both adding the exclusion and changing the upper constraint makes sense, > since it will ensure that bad version never makes it back into the > constraints list. 
> > We don't need to sync the exclusion setting into all of the projects > that depend on the client, so we won't need a new release of any of the > downstream consumers. > > We could add the exclusion to OSC on master, just for accuracy's sake. > Ya, it sounds like this is a valid FFE -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mordred at inaugust.com Wed Aug 1 16:19:22 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 1 Aug 2018 11:19:22 -0500 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <2805093.59XxL7INj6@whitebase.usersys.redhat.com> <39149cb0-09f2-f081-fcb1-4044d251fc7a@inaugust.com> Message-ID: <3dae9812-0341-4f4c-64d2-993cf91b6593@inaugust.com> On 08/01/2018 08:17 AM, Andrey Kurilin wrote: > > > ср, 1 авг. 2018 г. в 15:37, Monty Taylor >: > > On 08/01/2018 06:22 AM, Luigi Toscano wrote: > > On Wednesday, 1 August 2018 12:49:13 CEST Andrey Kurilin wrote: > >> Hey Ian and stackers! > >> > >> ср, 1 авг. 2018 г. в 8:45, Ian Wienand >: > >>> Hello, > >>> > >>> It seems freenode is currently receiving a lot of unsolicited > traffic > >>> across all channels.  The freenode team are aware [1] and doing > their > >>> best. > >>> > >>> There are not really a lot of options.  We can set "+r" on channels > >>> which means only nickserv registered users can join channels. > We have > >>> traditionally avoided this, because it is yet one more barrier to > >>> communication when many are already unfamiliar with IRC access. > >>> However, having channels filled with irrelevant messages is > also not > >>> very accessible. > >>> > >>> This is temporarily enabled in #openstack-infra for the time > being, so > >>> we can co-ordinate without interruption. 
> >>> > >>> Thankfully AFAIK we have not needed an abuse policy on this before; > >>> but I guess we are the point we need some sort of coordinated > >>> response. > >>> > >>> I'd suggest to start, people with an interest in a channel can > request > >>> +r from an IRC admin in #openstack-infra and we track it at [2] >>> > >>> Longer term ... suggestions welcome? :) > >> > >> Move to Slack? We can provide auto-sending to emails invitations for > >> joining by clicking the button on some page at openstack.org > . It will not > >> add more berrier for new contributors and, at the same time, > this way will > >> give some base filtering by emails at least. > > slack is pretty unworkable for many reasons. The biggest of them is > that > it is not Open Source and we don't require OpenStack developers to use > proprietary software to work on OpenStack. > > The quality of slack that makes it effective at fighting spam is also > the quality that makes it toxic as a community platform - the need for > an invitation and being structured as silos. > > Even if we were to decide to abandon our Open Source principles and > leave behind those in our contributor base who believe that Free > Software Needs Free Tools [1] - moving to slack would be a GIANT > undertaking. As such, it would not be a very effective way to deal with > this current spam storm. > > > No, please no. If we need to move to another service, better go > to a FLOSS > > one, like Matrix.org, or others. > > We had some discussion in Vancouver about investigating the use of > Matrix. We are a VERY large community, so we need to do scale and > viability testing before it's even a worthy topic to raise with the TC > and the community for consideration. If we did, we'd aim to run our own > home server. > > > The last paragraph is the best answer why we never switch from IRC. 
> "we are a VERY large community" > > Looking back at the migration to Zuul V3: a project written by > folks who > knew the potential load and usage, a project which has a great > background. > Some issues appeared only after launching it in production. Fortunately, > the Zuul community > quickly fixed them and we have this great CI system now. > > As for the FOSS alternatives to Slack aka modern IRC, I have not > heard of anything > scalable for the size we need. Also, in case of any issues, they will > not be fixed as > quickly as it was with Zuul V3 (thank you folks!). Yes. This is an excellent point. In fact, just trying to figure out how to properly test that a different choice can handle the scale is ... very hard at best. > Another issue: the alternative should be popular, modern and usable. IRC > is the thing which > is used by a lot of communities (i.e. you do not need to install some > no-name tool to communicate for one more topic), the same for Slack and > I suppose > some other tools have the same popularity (but I do not have installed > versions of them). > If the alternative doesn't fit these criteria, a lot of people will > stay at Freenode and migration will fail. Yup. Totally agree. 
> > As above, any effort there is a pretty giant one that will require a > large amount of planning, a pretty sizeable amount of technical > preparation and would be disruptive at the least, I don't think that'll > help us with the current spam storm though. > > Monty > > [1] https://mako.cc/writing/hill-free_tools.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Best regards, > Andrey Kurilin. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From chris.friesen at windriver.com Wed Aug 1 16:23:06 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 1 Aug 2018 10:23:06 -0600 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> Message-ID: <5B61DE6A.8030705@windriver.com> On 08/01/2018 09:58 AM, Andrey Volkov wrote: > Hi, > > It seems you need first to check what placement knows about resources of your cloud. > This can be done either with REST API [1] or with osc-placement [2]. > For osc-placement you could use: > > pip install osc-placement > openstack allocation candidate list --resource DISK_GB=20 --resource > MEMORY_MB=2048 --resource VCPU=1 --os-placement-api-version 1.10 > > And you can explore placement state with other commands like openstack resource > provider list, resource provider inventory list, resource provider usage show. > Unfortunately this doesn't help figure out what the missing resources were *at the time of the failure*. 
The fact that there is no real way to get the equivalent of the old detailed scheduler logs is a known shortcoming in placement, and will become more of a problem if/when we move more complicated things like CPU pinning, hugepages, and NUMA-awareness into placement. The problem is that getting useful logs out of placement would require significant development work. Chris From s at cassiba.com Wed Aug 1 16:32:38 2018 From: s at cassiba.com (Samuel Cassiba) Date: Wed, 1 Aug 2018 09:32:38 -0700 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> Message-ID: On Wed, Aug 1, 2018 at 5:21 AM, Andrey Kurilin wrote: > I can make an assumption that for marketing reasons, Slack Inc can propose > extended Free plan. > But anyway, even with default one the only thing which can limit us is > `10,000 searchable messages` which is bigger than 0 (freenode doesn't store > messages). > > > Why I like slack? because a lot of people are familar with it (a lot of > companies use it as like some opensource communities, like k8s ) > > PS: I realize that OpenStack Community will never go away from Freenode and > IRC, but I do not want to stay silent. > My response wasn't intended to become a wall of text, but my individual experience dovetails with the ongoing thread. The intent here is not to focus on one thing or the other, but to highlight some of the strengths and drawbacks. This is a great proposal on-paper. As you said, lots of people are already familiar with the technology and concept at this point. It generally seems to make sense. The unfortunate reality is that with something that has N searchable messages -- that counts for the whole instance -- it will be exceeded within the first few days due to the initial surge, requiring tweaking, if possible. 
Ten thousand messages is not much for a large, distributed, culturally diverse group heavily entrenched in IRC, even if it is a nice looking number. There should not be a limit on recorded history such as that, lest it be forgotten every few months. From a technological perspective, that puts both such a proposal and the existing solution at direct odds. Having a proprietary third party be the gatekeepers to chat-based outlets is not a good prospect over the long term. For recorded history, eavesdrop, by far, exceeds that imposed value, by sheer virtue of it existing. In freemium offerings, much knowledge gets blown to the aether in exchange for gifs and emoji reactions. In these situations, of course, the users are, by default, the product. Those effects can weigh heavily on a large, multicultural, open source project already under siege on certain fronts. Production OpenStack deployments have usually hitched their wagon to OpenStack: The Project for a multi-year effort at a minimum, which can and tends to involve some level of activity in parts of the community over that time. People come and go, but the long-term goals have generally remained the same. While the long-term ramifications of large FLOSS communities being on freemium proprietary platforms are just beginning to be felt, they're not quite to the point of inertia yet. Short of paying obscene amounts of money for chat, FLOSS alternatives need to be championed, far above any proprietary options with a free welcome mat, no matter how awesome and feature-rich they may be. Making a change of this order, this far in, is a drastic undertaking. I've been witness and participant in a similar migration, which took place a few years ago. It was heralded with much fanfare, a new day for engagement. It was full-on party parrot, until it wasn't. 
To this day, there are still IRC stragglers, with one or two experienced -- sometimes self-appointed -- individuals that tirelessly, asynchronously, answer softball questions and redirect to the other outlets for the more involved. Extended community channels, like development channels, are just kind of left to rot, with a topic that says "Go over here ---->". There is very little moderation, which develops a certain narrative all on its own. Today, that community on the free offering is quieter, more vibrant and immediately knowledgeable, albeit at the expense of recorded history. Questions take on a recurring theme at times, requiring one-to-one or one-to-many engagement for every question. The person wanting some fish tonight doesn't have a clean lake or stream to catch their dinner. Unfortunately, some of those long-term effects are beginning to be felt as of recent, after "everyone" is off of IRC. Fewer long-term maintainers are sticking around, and even fewer are stepping up to replace them. On the upshot, there are more new users always finding their way to the slick proprietary chat group. -scas From doug at doughellmann.com Wed Aug 1 16:48:39 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 01 Aug 2018 12:48:39 -0400 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <6d7a88b7-8897-6732-9df3-b10ca95f0078@inaugust.com> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> <1533127260-sup-4269@lrrr.local> <20180801133829.ihfvnbmmghlqgosg@yuggoth.org> <6d7a88b7-8897-6732-9df3-b10ca95f0078@inaugust.com> Message-ID: <1533141859-sup-2013@lrrr.local> Excerpts from Monty Taylor's message of 2018-08-01 09:58:03 -0500: > On 08/01/2018 08:38 AM, Jeremy Stanley wrote: > > On 2018-08-01 09:58:48 -0300 (-0300), Rafael Weingärtner wrote: > >> What about Rocket chat instead of Slack? It is open source. 
> >> https://github.com/RocketChat/Rocket.Chat > >> > >> Monty, what kind of evaluation would you guys need? I might be > >> able to help. > > > > Consider reading and possibly resurrecting the infra spec for it: > > > > https://review.openstack.org/319506 > > > > My main concern is how we'll go about authenticating and policing > > whatever gateway we set up. As soon as spammers and other abusers > > find out there's an open (or nearly so) proxy to a major IRC > > network, they'll use it to hide their origins from the IRC server > > operators and put us in the middle of the problem. > > To be clear -- I was not suggesting running matrix and IRC. I was > suggesting investigating running a matrix home server and the > permanently moving all openstack channels to it. > > matrix synapse supports federated identity providers with saml and cas > support implemented. I would imagine we'd want to configure it to > federate to openstackid for logging in to the home server -so that might > involve either adding saml support to openstackid or writing an > openid-connect driver to synapse. > This matches my expectations. We did talk about supporting a temporary bridge to IRC, during the migration, but I don't think we need to run an "open" home server to have that. Doug From manjeet.s.bhatia at intel.com Wed Aug 1 16:49:28 2018 From: manjeet.s.bhatia at intel.com (Bhatia, Manjeet S) Date: Wed, 1 Aug 2018 16:49:28 +0000 Subject: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring In-Reply-To: References: <6345119E91D5C843A93D64F498ACFA136999ECF2@SHSMSX101.ccr.corp.intel.com> Message-ID: Hi, Yes, we need to refine the spec for sure; once a consensus is reached, the focus will be on implementation. Here’s the implementation patch (WIP): https://review.openstack.org/#/c/584892/ . We can’t really review the API part until the spec is finalized, but other things like config and common issues can still be pointed out, and progress can be made until consensus on the API is reached. 
Miguel, I think this will be added to the etherpad for PTG discussions as well? Thanks and Regards! Manjeet From: Miguel Lavalle [mailto:miguel at mlavalle.com] Sent: Tuesday, July 31, 2018 10:26 AM To: Zhao, Forrest Cc: OpenStack Development Mailing List Subject: Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring Hi Forrest, Yes, in my email, I was precisely referring to the work around https://review.openstack.org/#/c/574477. Now that we are wrapping up Rocky, I wanted to raise the visibility of this spec. I am glad you noticed. This week we are going to cut our RC-1 and I don't anticipate that we will have an RC-2 for Rocky. So starting next week, let's go back to the spec and refine it, so we can start implementing in Stein as soon as possible. Depending on how much progress we make in the spec, we may need to schedule a discussion during the PTG in Denver, September 10 - 14, in case face to face time is needed to reach an agreement. I know that Manjeet is going to attend the PTG and he has already talked to me about this spec in the recent past. So maybe Manjeet could be the conduit to represent this spec in Denver, in case we need to talk about it there. Best regards Miguel On Tue, Jul 31, 2018 at 4:12 AM, Zhao, Forrest > wrote: Hi Miguel, In your mail “PTL candidacy for the Stein cycle”, it mentioned that “port mirroring for SR-IOV VF to VF mirroring” is within the Stein goals. Could you tell me where the design for this feature is being discussed? Mailing list, IRC channel, weekly meeting or others? I was involved in its spec review at https://review.openstack.org/#/c/574477/; but it has not been updated for a while. Thanks, Forrest -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andrea.franceschini.rm at gmail.com Wed Aug 1 16:55:30 2018 From: andrea.franceschini.rm at gmail.com (Andrea Franceschini) Date: Wed, 1 Aug 2018 18:55:30 +0200 Subject: [openstack-dev] [tricircle] Tricircle or Trio2o Message-ID: Hello All, While I was looking for multisite openstack solutions I stumbled on the Tricircle project, which seemed fairly perfect for the job except that it was split in two parts: Tricircle itself for the network part and Trio2o for all the rest. Now it seems that the Trio2o project is no longer maintained, and I'm wondering what other options exist for multisite openstack, given that Tricircle seems more NFV oriented. Actually a Heat multisite solution would work too, but I cannot find any reference to this kind of solution. Do you have any idea/advice? Thanks, Andrea From doug at doughellmann.com Wed Aug 1 17:02:18 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 01 Aug 2018 13:02:18 -0400 Subject: [openstack-dev] [all][python3] approved python3-first goal for stein Message-ID: <1533142679-sup-7596@lrrr.local> The goal to run jobs under python3 first has been approved for Stein [1]. We are going to be using storyboard for tracking work on the goal, so I have started creating the stories and tasks. Because of the complexity of the work, I am setting up 1 story for each team, with 1 task for each repository. For repositories that haven't migrated to storyboard, the tasks will be associated with the openstack/governance repository. See [2] for the relevant stories. The first phase of the work for this goal involves a lot of zuul reconfiguration, so please do not start on it until after we have completed the Rocky release. As the goal document describes, the champions for this goal will propose the patches to move the zuul settings (we have some nice tools to do this in a consistent way). I expect we will start with that around the time of the PTG. Stand by for more details. 
Doug [1] https://governance.openstack.org/tc/goals/stein/python3-first.html [2] https://storyboard.openstack.org/#!/search?tags=goal-python3-first From openstack at nemebean.com Wed Aug 1 17:06:52 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 1 Aug 2018 12:06:52 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> Message-ID: Aha, thanks! That explains why I couldn't find any client commands for placement before. To close the loop on the problem I was having, it looks like the allocation_ratio config opts are now just defaults, and if you want to change ratios after the initial deployment you need to do so with the client. This is what I used: openstack resource provider inventory class set f931b646-ce18-43f6-8c95-bd3ba82fb9a8 DISK_GB --total 111 --allocation_ratio 2.0 I will note that it's a little annoying that you have to specify all of the fields on this call. To change allocation_ratio I also had to pass total even though I didn't want to change total. MEMORY_MB is even worse because not passing max_unit and reserved as well will cause those to revert to max_int and 0, which I can't imagine ever being right. I guess this isn't something that users would do a lot, but it could be a nasty surprise if they tried to update something, had reserved go to zero but didn't notice, and suddenly find they're overcommitting more than they intended. Anyway, thanks again for the help. I hope this thread will be useful to other people who are learning placement too. -Ben On 08/01/2018 10:58 AM, Andrey Volkov wrote: > Hi, > > It seems you need first to check what placement knows about resources of > your cloud. > This can be done either with REST API [1] or with osc-placement [2]. 
> For osc-placement you could use: > > pip install osc-placement > openstack allocation candidate list --resource DISK_GB=20 --resource > MEMORY_MB=2048 --resource VCPU=1 --os-placement-api-version 1.10 > > And you can explore placement state with other commands like openstack > resource provider list, resource provider inventory list, resource > provider usage show. > > [1] https://developer.openstack.org/api-ref/placement/ > [2] https://docs.openstack.org/osc-placement/latest/index.html > > On Wed, Aug 1, 2018 at 6:16 PM Ben Nemec > wrote: > > Hi, > > I'm having an issue with no valid host errors when starting instances > and I'm struggling to figure out why.  I thought the problem was disk > space, but I changed the disk_allocation_ratio and I'm still getting no > valid host.  The host does have plenty of disk space free, so that > shouldn't be a problem. > > However, I'm not even sure it's disk that's causing the failures > because > I can't find any information in the logs about why the no valid host is > happening.  All I get from the scheduler is: > > "Got no allocation candidates from the Placement API. This may be a > temporary occurrence as compute nodes start up and begin reporting > inventory to the Placement service." 
> > While in placement I see: > > 2018-08-01 15:02:22.062 20 DEBUG > nova.api.openstack.placement.requestlog > [req-0a830ce9-e2af-413a-86cb-b47ae129b676 > fc44fe5cefef43f4b921b9123c95e694 b07e6dc2e6284b00ac7070aa3457c15e - > default default] Starting request: 10.2.2.201 "GET > /placement/allocation_candidates?limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1" > > __call__ > /usr/lib/python2.7/site-packages/nova/api/openstack/placement/requestlog.py:38 > 2018-08-01 15:02:22.103 20 INFO nova.api.openstack.placement.requestlog > [req-0a830ce9-e2af-413a-86cb-b47ae129b676 > fc44fe5cefef43f4b921b9123c95e694 b07e6dc2e6284b00ac7070aa3457c15e - > default default] 10.2.2.201 "GET > /placement/allocation_candidates?limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1" > > status: 200 len: 53 microversion: 1.25 > > Basically it just seems to be logging that it got a request, but > there's > no information about what it did with that request. > > So where do I go from here?  Is there somewhere else I can look to see > why placement returned no candidates? > > Thanks. > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Thanks, > > Andrey Volkov, > Software Engineer, Mirantis, Inc. 
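[Editor's note: the placement request log quoted above shows exactly how the scheduler encodes its resource request. As a small self-contained illustration (not the actual nova or placement client code), the query string from that log line can be reconstructed like this:]

```python
from urllib.parse import urlencode

def allocation_candidates_url(base, resources, limit=1000):
    """Build the GET query the scheduler issues to placement.

    `resources` maps resource class name -> requested amount, e.g.
    {"DISK_GB": 20, "MEMORY_MB": 2048, "VCPU": 1}.  Placement encodes
    the request as CLASS:AMOUNT pairs joined by commas, then URL-encodes
    the ':' and ',' characters, which is why the log shows %3A and %2C.
    """
    res = ",".join("%s:%d" % (cls, amt) for cls, amt in sorted(resources.items()))
    return "%s/allocation_candidates?%s" % (base, urlencode({"limit": limit, "resources": res}))

url = allocation_candidates_url("/placement", {"DISK_GB": 20, "MEMORY_MB": 2048, "VCPU": 1})
print(url)
```

Decoding the request this way makes it easy to replay the same query by hand (via curl or osc-placement) when debugging which resource class is exhausted.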
> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at nemebean.com Wed Aug 1 17:17:36 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 1 Aug 2018 12:17:36 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <5B61DE6A.8030705@windriver.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> Message-ID: <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> On 08/01/2018 11:23 AM, Chris Friesen wrote: > On 08/01/2018 09:58 AM, Andrey Volkov wrote: >> Hi, >> >> It seems you need first to check what placement knows about resources >> of your cloud. >> This can be done either with REST API [1] or with osc-placement [2]. >> For osc-placement you could use: >> >> pip install osc-placement >> openstack allocation candidate list --resource DISK_GB=20 --resource >> MEMORY_MB=2048 --resource VCPU=1 --os-placement-api-version 1.10 >> >> And you can explore placement state with other commands like openstack >> resource >> provider list, resource provider inventory list, resource provider >> usage show. >> > > Unfortunately this doesn't help figure out what the missing resources > were *at the time of the failure*. > > The fact that there is no real way to get the equivalent of the old > detailed scheduler logs is a known shortcoming in placement, and will > become more of a problem if/when we move more complicated things like > CPU pinning, hugepages, and NUMA-awareness into placement. > > The problem is that getting useful logs out of placement would require > significant development work. 
Yeah, in my case I only had one compute node so it was obvious what the problem was, but if I had a scheduling failure on a busy cloud with hundreds of nodes I don't see how you would ever track it down. Maybe we need to have a discussion with operators about how often they do post-mortem debugging of this sort of thing? From chris.friesen at windriver.com Wed Aug 1 17:24:08 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 1 Aug 2018 11:24:08 -0600 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> Message-ID: <5B61ECB8.50909@windriver.com> On 08/01/2018 11:17 AM, Ben Nemec wrote: > > > On 08/01/2018 11:23 AM, Chris Friesen wrote: >> The fact that there is no real way to get the equivalent of the old detailed >> scheduler logs is a known shortcoming in placement, and will become more of a >> problem if/when we move more complicated things like CPU pinning, hugepages, >> and NUMA-awareness into placement. >> >> The problem is that getting useful logs out of placement would require >> significant development work. > > Yeah, in my case I only had one compute node so it was obvious what the problem > was, but if I had a scheduling failure on a busy cloud with hundreds of nodes I > don't see how you would ever track it down. Maybe we need to have a discussion > with operators about how often they do post-mortem debugging of this sort of thing? For Wind River's Titanium Cloud it was enough of an issue that we customized the scheduler to emit detailed logs on scheduler failure. We started upstreaming it[1] but the effort stalled out when the upstream folks requested major implementation changes. 
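[Editor's note: the "detailed logs on scheduler failure" idea mentioned above can be sketched in a few lines. This is a toy illustration of per-stage elimination logging — the filter names and host fields are hypothetical, not nova's implementation:]

```python
def log_filter_progress(hosts, filters, log=print):
    """Apply each (name, predicate) filter in turn, reporting how many
    candidate hosts survive each stage.  If the pool empties, the last
    stage named is the one that caused the NoValidHost failure."""
    for name, pred in filters:
        survivors = [h for h in hosts if pred(h)]
        log("%s: %d of %d hosts remain" % (name, len(survivors), len(hosts)))
        if not survivors:
            log("no valid host: all candidates eliminated by %r" % name)
            return []
        hosts = survivors
    return hosts

hosts = [{"name": "cn1", "vcpu_free": 4, "disk_free": 10},
         {"name": "cn2", "vcpu_free": 0, "disk_free": 200}]
filters = [("enough CPU", lambda h: h["vcpu_free"] >= 1),
           ("enough disk", lambda h: h["disk_free"] >= 20)]
result = log_filter_progress(hosts, filters)
```

Here the log pinpoints "enough disk" as the stage that emptied the pool, which is the kind of breadcrumb missing from a bare "Got no allocation candidates" message.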
Chris [1] https://blueprints.launchpad.net/nova/+spec/improve-sched-logging From fungi at yuggoth.org Wed Aug 1 17:30:05 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 1 Aug 2018 17:30:05 +0000 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <1533141859-sup-2013@lrrr.local> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <8f78caa3-6f84-0a41-b880-2f0e8a61eb4a@redhat.com> <1533127260-sup-4269@lrrr.local> <20180801133829.ihfvnbmmghlqgosg@yuggoth.org> <6d7a88b7-8897-6732-9df3-b10ca95f0078@inaugust.com> <1533141859-sup-2013@lrrr.local> Message-ID: <20180801173005.ditu3eb7zczf557u@yuggoth.org> On 2018-08-01 12:48:39 -0400 (-0400), Doug Hellmann wrote: > Excerpts from Monty Taylor's message of 2018-08-01 09:58:03 -0500: > > On 08/01/2018 08:38 AM, Jeremy Stanley wrote: > > > On 2018-08-01 09:58:48 -0300 (-0300), Rafael Weingärtner wrote: > > >> What about Rocket chat instead of Slack? It is open source. > > >> https://github.com/RocketChat/Rocket.Chat > > >> > > >> Monty, what kind of evaluation would you guys need? I might be > > >> able to help. > > > > > > Consider reading and possibly resurrecting the infra spec for it: > > > > > > https://review.openstack.org/319506 > > > > > > My main concern is how we'll go about authenticating and policing > > > whatever gateway we set up. As soon as spammers and other abusers > > > find out there's an open (or nearly so) proxy to a major IRC > > > network, they'll use it to hide their origins from the IRC server > > > operators and put us in the middle of the problem. > > > > To be clear -- I was not suggesting running matrix and IRC. I was > > suggesting investigating running a matrix home server and the > > permanently moving all openstack channels to it. > > > > matrix synapse supports federated identity providers with saml and cas > > support implemented. 
I would imagine we'd want to configure it to > > federate to openstackid for logging in to the home server -so that might > > involve either adding saml support to openstackid or writing an > > openid-connect driver to synapse. > > This matches my expectations. We did talk about supporting a temporary > bridge to IRC, during the migration, but I don't think we need to run an > "open" home server to have that. Yes, I'm not concerned about Matrix specifically. My response was triggered by Rafael's suggestion of running a Rocket.Chat interface for users. Sorry if that wasn't clear. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From melwittt at gmail.com Wed Aug 1 17:32:55 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 1 Aug 2018 10:32:55 -0700 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> Message-ID: On Wed, 1 Aug 2018 12:17:36 -0500, Ben Nemec wrote: > > > On 08/01/2018 11:23 AM, Chris Friesen wrote: >> On 08/01/2018 09:58 AM, Andrey Volkov wrote: >>> Hi, >>> >>> It seems you need first to check what placement knows about resources >>> of your cloud. >>> This can be done either with REST API [1] or with osc-placement [2]. >>> For osc-placement you could use: >>> >>> pip install osc-placement >>> openstack allocation candidate list --resource DISK_GB=20 --resource >>> MEMORY_MB=2048 --resource VCPU=1 --os-placement-api-version 1.10 >>> >>> And you can explore placement state with other commands like openstack >>> resource >>> provider list, resource provider inventory list, resource provider >>> usage show. 
>>> >> >> Unfortunately this doesn't help figure out what the missing resources >> were *at the time of the failure*. >> >> The fact that there is no real way to get the equivalent of the old >> detailed scheduler logs is a known shortcoming in placement, and will >> become more of a problem if/when we move more complicated things like >> CPU pinning, hugepages, and NUMA-awareness into placement. >> >> The problem is that getting useful logs out of placement would require >> significant development work. > > Yeah, in my case I only had one compute node so it was obvious what the > problem was, but if I had a scheduling failure on a busy cloud with > hundreds of nodes I don't see how you would ever track it down. Maybe > we need to have a discussion with operators about how often they do > post-mortem debugging of this sort of thing? I think it's definitely a significant issue that troubleshooting "No allocation candidates returned" from placement is so difficult. However, it's not straightforward to log detail in placement when the request for allocation candidates is essentially "SELECT * FROM nodes WHERE cpu usage < needed and disk usage < needed and memory usage < needed" and the result is returned from the API. I think better logging is something we want to have, so if anyone has ideas around it, do share them. -melanie From chris.friesen at windriver.com Wed Aug 1 18:02:02 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 1 Aug 2018 12:02:02 -0600 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> Message-ID: <5B61F59A.1080502@windriver.com> On 08/01/2018 11:32 AM, melanie witt wrote: > I think it's definitely a significant issue that troubleshooting "No allocation > candidates returned" from placement is so difficult. 
However, it's not > straightforward to log detail in placement when the request for allocation > candidates is essentially "SELECT * FROM nodes WHERE cpu > usage < needed and disk > usage < needed and memory usage < needed" and the result is returned from the API. I think the only way to get useful info on a failure would be to break down the huge SQL statement into subclauses and store the results of the intermediate queries. So then if it failed placement could log something like: hosts with enough CPU: hosts that also have enough disk: hosts that also have enough memory: hosts that also meet extra spec host aggregate keys: hosts that also meet image properties host aggregate keys: hosts that also have requested PCI devices: And maybe we could optimize the above by only emitting logs where the list has a length less than X (to avoid flooding the logs with hostnames in large clusters). This would let you zero in on the things that finally caused the list to be whittled down to nothing. Chris From tenobreg at redhat.com Wed Aug 1 18:39:26 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Wed, 1 Aug 2018 15:39:26 -0300 Subject: [openstack-dev] [sahara] Anti-Affinity Broke In-Reply-To: <1854445.BzLhQUzhMP@whitebase.usersys.redhat.com> References: <1854445.BzLhQUzhMP@whitebase.usersys.redhat.com> Message-ID: Hi Joe, sorry for only replying to this now, but I just got time to work on it today. When you did those workarounds, did it all work properly? I'm hitting an issue with a KeyError while trying to add the resource to the properties. properties[SERVER_GROUP_NAMES].insert(i, server_group_resource) Thanks, On Fri, Jun 22, 2018 at 5:03 AM Luigi Toscano wrote: > On Friday, 22 June 2018 05:00:16 CEST Joe Topjian wrote: > > Hello, > > > > I originally posted this to the general openstack list to get a sanity > > check on what I was seeing. Jeremy F reached out and confirmed that, so > I'm > > going to re-post the details here to begin a discussion. 
> > Hi, > > thanks for investigating the issue; it's not the most trivial thing to > test > without a real CI system based on baremetal, and we don't have one at this > time. > > > I can also create something on StoryBoard for this, too. > > Yes, that would be preferred; could you please open it describing the > symptoms > that you found in addition to the workarounds? > > Ciao > -- > Luigi > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe at topjian.net Wed Aug 1 18:50:02 2018 From: joe at topjian.net (Joe Topjian) Date: Wed, 1 Aug 2018 12:50:02 -0600 Subject: [openstack-dev] [sahara] Anti-Affinity Broke In-Reply-To: References: <1854445.BzLhQUzhMP@whitebase.usersys.redhat.com> Message-ID: Hello, Yes, those workarounds were able to get anti-affinity working. In my original email I noted the uninitialized key issue. We're getting around this by doing the following: properties[SERVER_GROUP_NAMES] = [] Thanks, Joe On Wed, Aug 1, 2018 at 12:39 PM, Telles Nobrega wrote: > Hi Joe, > > sorry for only replying to this now, but I just got time to work on it > today. > > When you did those workarounds, did it all work properly? I'm hitting an > issue with a KeyError whili trying to add the resource to the properties. 
> > properties[SERVER_GROUP_NAMES].insert(i, > server_group_resource) > > Thanks, > > > On Fri, Jun 22, 2018 at 5:03 AM Luigi Toscano wrote: > >> On Friday, 22 June 2018 05:00:16 CEST Joe Topjian wrote: >> > Hello, >> > >> > I originally posted this to the general openstack list to get a sanity >> > check on what I was seeing. Jeremy F reached out and confirmed that, so >> I'm >> > going to re-post the details here to begin a discussion. >> >> Hi, >> >> thanks for investigating the issue; it's not the most trivial thing to >> test >> without a real CI system based on baremetal, and we don't have one at >> this >> time. >> >> > I can also create something on StoryBoard for this, too. >> >> Yes, that would be preferred; could you please open it describing the >> symptoms >> that you found in addition to the workarounds? >> >> Ciao >> -- >> Luigi >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- > > TELLES NOBREGA > > SOFTWARE ENGINEER > > Red Hat Brasil > > Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo > > tenobreg at redhat.com > > TRIED. TESTED. TRUSTED. > Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil > pelo Great Place to Work. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tenobreg at redhat.com Wed Aug 1 18:51:45 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Wed, 1 Aug 2018 15:51:45 -0300 Subject: [openstack-dev] [sahara] Anti-Affinity Broke In-Reply-To: References: <1854445.BzLhQUzhMP@whitebase.usersys.redhat.com> Message-ID: Thanks, that is what I did as well. On Wed, Aug 1, 2018 at 3:50 PM Joe Topjian wrote: > Hello, > > Yes, those workarounds were able to get anti-affinity working. In my > original email I noted the uninitialized key issue. We're getting around > this by doing the following: > > properties[SERVER_GROUP_NAMES] = [] > > Thanks, > Joe > > > On Wed, Aug 1, 2018 at 12:39 PM, Telles Nobrega > wrote: > >> Hi Joe, >> >> sorry for only replying to this now, but I just got time to work on it >> today. >> >> When you did those workarounds, did it all work properly? I'm hitting an >> issue with a KeyError whili trying to add the resource to the properties. >> >> properties[SERVER_GROUP_NAMES].insert(i, >> server_group_resource) >> >> Thanks, >> >> >> On Fri, Jun 22, 2018 at 5:03 AM Luigi Toscano >> wrote: >> >>> On Friday, 22 June 2018 05:00:16 CEST Joe Topjian wrote: >>> > Hello, >>> > >>> > I originally posted this to the general openstack list to get a sanity >>> > check on what I was seeing. Jeremy F reached out and confirmed that, >>> so I'm >>> > going to re-post the details here to begin a discussion. >>> >>> Hi, >>> >>> thanks for investigating the issue; it's not the most trivial thing to >>> test >>> without a real CI system based on baremetal, and we don't have one at >>> this >>> time. >>> >>> > I can also create something on StoryBoard for this, too. >>> >>> Yes, that would be preferred; could you please open it describing the >>> symptoms >>> that you found in addition to the workarounds? 
>>> >>> Ciao >>> -- >>> Luigi >>> >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> -- >> >> TELLES NOBREGA >> >> SOFTWARE ENGINEER >> >> Red Hat Brasil >> >> Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo >> >> tenobreg at redhat.com >> >> TRIED. TESTED. TRUSTED. >> Red Hat é reconhecida entre as melhores empresas para trabalhar no >> Brasil pelo Great Place to Work. >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Wed Aug 1 19:06:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 1 Aug 2018 14:06:53 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> Message-ID: <6d859301-5384-7424-26be-aa1978a1d10f@gmail.com> On 8/1/2018 12:32 PM, melanie witt wrote: > I think it's definitely a significant issue that troubleshooting "No > allocation candidates returned" from placement is so difficult. However, > it's not straightforward to log detail in placement when the request for > allocation candidates is essentially "SELECT * FROM nodes WHERE cpu > usage < needed and disk usage < needed and memory usage < needed" and > the result is returned from the API. > > I think better logging is something we want to have, so if anyone has > ideas around it, do share them. I don't have any amazing ideas but I did put it on the PTG etherpad for discussion: https://etherpad.openstack.org/p/nova-ptg-stein -- Thanks, Matt From mriedemos at gmail.com Wed Aug 1 19:22:14 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 1 Aug 2018 14:22:14 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> Message-ID: <90a5944a-085b-4cd2-d1b2-b490fc466bee@gmail.com> On 8/1/2018 12:06 PM, Ben Nemec wrote: > To close the loop on the problem I was having, it looks like the > allocation_ratio config opts are now just defaults, and if you want to > change ratios after the initial deployment you need to do so with the > client. You mean how https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.disk_allocation_ratio defaults to 0.0 and that's used in the ResourceTracker to set the inventory? 
https://github.com/openstack/nova/blob/31e6e715e00571925b1163950ea028bdade60d76/nova/compute/resource_tracker.py#L120 That should get defaulted to 1.0 if you didn't change the config option: https://github.com/openstack/nova/blob/31e6e715e00571925b1163950ea028bdade60d76/nova/objects/compute_node.py#L207 If you wanted 2.0, then you should set the disk_allocation_ratio config option to 2.0 on that host - I don't think that is a behavior change is it? > > I will note that it's a little annoying that you have to specify all of > the fields on this call. I agree with you. The "openstack resource provider inventory set" command is similar in that it is a total overwrite of all inventory for the provider: https://docs.openstack.org/osc-placement/latest/cli/index.html#resource-provider-inventory-set So if you want to add just one inventory class (or change one) then you have to repeat all of the existing inventory if you don't want to lose that. And I don't think "openstack resource provider inventory class set" lets you add new inventory classes, it only lets you update existing ones. So we probably need something like an --amend option on both commands which are sort of meta commands to retain everything else about the inventory for the provider but only changes the fields that the user specifies. We've mostly just been trying to get out *any* CLI support at all, so what is there now is basic functionality, warts and all, and we can iterate over time to make the tools more usable. To track this, I've created an RFE bug in launchpad: https://bugs.launchpad.net/placement-osc-plugin/+bug/1784932 -- Thanks, Matt From doug at doughellmann.com Wed Aug 1 19:23:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 01 Aug 2018 15:23:06 -0400 Subject: [openstack-dev] [tc] Technical Committee update for 1 Aug 2018 Message-ID: <1533151355-sup-4719@lrrr.local> This is the weekly summary of work being done by the Technical Committee members. 
The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == Approved changes: - python3-first goal: https://review.openstack.org/#/c/575933/ == Ongoing Discussions == Dims started porting the compute:starter-kit tag to be a constellation. This highlighted an issue with the current definition of that set of projects and their use of the tools in the base services list. - https://review.openstack.org/#/c/586212/ We had 7 teams without any volunteers to serve as PTL for the Stein cycle (Dragonflow, Freezer, Loci, Packaging-RPM, Refstack, Searchlight, and Winstackers). The Trove team had a volunteer, but that person does not qualify under our normal rules that say the PTL should be an ATC. This triggered some discussion of what the PTL's role is, why we have it, and whether we may need to make some changes in how it is defined and used. The TC will be reviewing what to do with those teams over the next week or two. - https://wiki.openstack.org/wiki/OpenStack_health_tracker#Status_updates Thierry posted about the changes to the Summit & PTG and how they will affect the Stein release cycle. tl;dr: Because the Summit and PTG will be combined at the end of Stein, the cycle will be a little longer to allow the release dates to sync with the event. - http://lists.openstack.org/pipermail/openstack-dev/2018-July/132651.html == TC member actions/focus/discussions for the coming week(s) == The TC members who are liaisons to one of the teams affected by a lack of PTL candidates should contact the current PTL for those teams to ensure that they are aware of the problem. Please review the suggested topics for the TC to discuss at the PTG: - https://etherpad.openstack.org/p/tc-stein-ptg The PTG is approaching quickly. Please complete any remaining team health checks. 
== Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: - 09:00 UTC on Tuesdays - 01:00 UTC on Wednesdays - 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. You will find channel logs with past conversations at http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From sundar.nadathur at intel.com Wed Aug 1 19:39:57 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 1 Aug 2018 12:39:57 -0700 Subject: [openstack-dev] [Nova] [Cyborg] Updates to os-acc proposal In-Reply-To: References: <49db1d12-1fd3-93d1-a31e-8a2a5a35654d@intel.com> Message-ID: Hi Eric,     Please see my responses inline. On an unrelated note, thanks for the pointer to the GPU spec (https://review.openstack.org/#/c/579359/10/doc/source/specs/rocky/device-passthrough.rst). I will review that. 
On 7/31/2018 10:42 AM, Eric Fried wrote: > Sundar- > >> * Cyborg drivers deal with device-specific aspects, including >> discovery/enumeration of devices and handling the Device Half of the >> attach (preparing devices/accelerators for attach to an instance, >> post-attach cleanup (if any) after successful attach, releasing >> device/accelerator resources on instance termination or failed >> attach, etc.) >> * os-acc plugins deal with hypervisor/system/architecture-specific >> aspects, including handling the Instance Half of the attach (e.g. >> for libvirt with PCI, preparing the XML snippet to be included in >> the domain XML). > This sounds well and good, but discovery/enumeration will also be > hypervisor/system/architecture-specific. So... Fair enough. We had discussed that too. The Cyborg drivers can also invoke REST APIs etc. for Power. >> Thus, the drivers and plugins are expected to be complementary. For >> example, for 2 devices of types T1 and T2, there shall be 2 separate >> Cyborg drivers. Further, we would have separate plugins for, say, >> x86+KVM systems and Power systems. We could then have four different >> deployments -- T1 on x86+KVM, T2 on x86+KVM, T1 on Power, T2 on Power -- >> by suitable combinations of the drivers and plugins. > ...the discovery/enumeration code for T1 on x86+KVM (lsdev? lspci? > walking the /dev file system?) will be totally different from the > discovery/enumeration code for T1 on Power > (pypowervm.wrappers.ManagedSystem.get(adapter)). > > I don't mind saying "drivers do the device side; plugins do the instance > side" but I don't see getting around the fact that both "sides" will > need to have platform-specific code Agreed. So, we could say: - The plugins do the instance half. They are hypervisor-specific and platform-specific. (The term 'platform' subsumes both the architecture (Power, x86) and the server/system type.) They are invoked by os-acc. 
- The drivers do the device half, device discovery/enumeration and anything not explicitly assigned to plugins. They contain device-specific and platform-specific code. They are invoked by Cyborg agent and os-acc. Are you ok with the workflow in https://docs.google.com/drawings/d/1cX06edia_Pr7P5nOB08VsSMsgznyrz4Yy2u8nb596sU/edit?usp=sharing ? >> One secondary detail to note is that Nova compute calls os-acc per >> instance for all accelerators for that instance, not once for each >> accelerator. > You mean for getVAN()? Yes -- BTW, I renamed it as prepareVANs() or prepareVAN(), because it is not just a query as the name getVAN implies, but has side effects. > Because AFAIK, os_vif.plug(list_of_vif_objects, > InstanceInfo) is *not* how nova uses os-vif for plugging. Yes, the os-acc will invoke the plug() once per VAN. IIUC, Nova calls Neutron once per instance for all networks, as seen in this code sequence in nova/nova/compute/manager.py: _build_and_run_instance() --> _build_resources() -->     _build_networks_for_instance() --> _allocate_network() The _allocate_network() actually takes a list of requested_networks, and handles all networks for an instance [1]. Chasing this further down: _allocate_network --> _allocate_network_async() --> self.network_api.allocate_for_instance()      == nova/network/rpcapi.py::allocate_for_instance() So, even the RPC out of Nova seems to take a list of networks [2]. 
[1] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1529 [2] https://github.com/openstack/nova/blob/master/nova/network/rpcapi.py#L163 > Thanks, > Eric > //lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Regards, Sundar From cboylan at sapwetik.org Wed Aug 1 19:43:28 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 01 Aug 2018 12:43:28 -0700 Subject: [openstack-dev] [all] Gerrit project renaming Friday August 3 at 16:00UTC Message-ID: <1533152608.2876314.1460344792.2502F1B3@webmail.messagingengine.com> Hello everyone, The infra team will be renaming a couple of projects in Gerrit on Friday starting at 16:00UTC. This requires us to restart Gerrit on review.openstack.org. The total noticeable downtime should be no more than about 10 minutes. If you would like to follow along, the current process is laid out at https://etherpad.openstack.org/p/project-renames-2018-08-03. Let us know if you have questions or concerns, Clark From openstack at fried.cc Wed Aug 1 20:01:38 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 1 Aug 2018 15:01:38 -0500 Subject: [openstack-dev] [Nova] [Cyborg] Updates to os-acc proposal In-Reply-To: References: <49db1d12-1fd3-93d1-a31e-8a2a5a35654d@intel.com> Message-ID: Sundar- > On an unrelated note, thanks for the > pointer to the GPU spec > (https://review.openstack.org/#/c/579359/10/doc/source/specs/rocky/device-passthrough.rst). > I will review that. Thanks. Please note that this is for nova-powervm, PowerVM's *out-of-tree* compute driver. We hope to bring this into the in-tree driver eventually (unless we skip straight to the cyborg model :) but it should give a good idea of some of the requirements and use cases we're looking to support. > Fair enough. We had discussed that too. The Cyborg drivers can also > invoke REST APIs etc. for Power. Ack. > Agreed. So, we could say: > - The plugins do the instance half. They are hypervisor-specific and > platform-specific. 
(The term 'platform' subsumes both the architecture > (Power, x86) and the server/system type.) They are invoked by os-acc. > - The drivers do the device half, device discovery/enumeration and > anything not explicitly assigned to plugins. They contain > device-specific and platform-specific code. They are invoked by Cyborg > agent and os-acc. Sounds good. > Are you ok with the workflow in > https://docs.google.com/drawings/d/1cX06edia_Pr7P5nOB08VsSMsgznyrz4Yy2u8nb596sU/edit?usp=sharing > ? Yes (but see below). >> You mean for getVAN()? > Yes -- BTW, I renamed it as prepareVANs() or prepareVAN(), because it is > not just a query as the name getVAN implies, but has side effects. Ack. >> Because AFAIK, os_vif.plug(list_of_vif_objects, >> InstanceInfo) is *not* how nova uses os-vif for plugging. > > Yes, the os-acc will invoke the plug() once per VAN. IIUC, Nova calls > Neutron once per instance for all networks, as seen in this code > sequence in nova/nova/compute/manager.py: > > _build_and_run_instance() --> _build_resources() --> > >     _build_networks_for_instance() --> _allocate_network() > > The _allocate_network() actually takes a list of requested_networks, and > handles all networks for an instance [1]. > > Chasing this further down: > > _allocate_network --> _allocate_network_async() > > --> self.network_api.allocate_for_instance() > >      == nova/network/rpcapi.py::allocate_for_instance() > > So, even the RPC out of Nova seems to take a list of networks [2]. Yes yes, but by the time we get to os_vif.plug(), we're doing one VIF at a time. That corresponds to what you've got in your flow diagram, so as long as that's accurate, I'm fine with it. That said, we could discuss os_acc.plug taking a list of VANs and threading out the calls to the plugin's plug() method (which takes one at a time). I think we've talked a bit about this before: the pros and cons of having the threading managed by os-acc or by the plugin. 
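The "os-acc manages the threading" option could look something like the following. This is only a sketch under assumed names (os_acc_plug is not a real API); the plugin's plug() stays single-VAN, and os-acc fans the calls out:

```python
# Illustrative sketch only: os-acc accepts the instance's full list of
# VANs and threads out one plugin.plug() call per VAN, as discussed.
# The alternative is to hand the whole list to the plugin and let it
# manage its own threading.
from concurrent.futures import ThreadPoolExecutor


def os_acc_plug(plugin, vans, instance_info, max_workers=4):
    """Plug every VAN for an instance, one plugin.plug() call per VAN."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(plugin.plug, van, instance_info)
                   for van in vans]
        # Collect in submission order; any per-VAN failure is re-raised
        # here so the caller sees it.
        return [f.result() for f in futures]
```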
We could have the same discussion for prepareVANs() too. > [1] > https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1529 > [2] > https://github.com/openstack/nova/blob/master/nova/network/rpcapi.py#L163 >> Thanks, >> Eric >> //lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Regards, > Sundar > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Wed Aug 1 20:09:44 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 1 Aug 2018 16:09:44 -0400 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <5B61F59A.1080502@windriver.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> Message-ID: On 08/01/2018 02:02 PM, Chris Friesen wrote: > On 08/01/2018 11:32 AM, melanie witt wrote: > >> I think it's definitely a significant issue that troubleshooting "No >> allocation >> candidates returned" from placement is so difficult. However, it's not >> straightforward to log detail in placement when the request for >> allocation >> candidates is essentially "SELECT * FROM nodes WHERE cpu usage < >> needed and disk >> usage < needed and memory usage < needed" and the result is returned >> from the API. > > I think the only way to get useful info on a failure would be to break > down the huge SQL statement into subclauses and store the results of the > intermediate queries. This is a good idea and something that can be done. Unfortunately, it's refactoring work and as a community, we tend to prioritize fancy features like NUMA topology and CPU pinning over refactoring work. 
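The subclause idea could be illustrated with a toy example (assumed schema, nothing like placement's real SQL): run each filter cumulatively and record the survivor count, so a "no valid host" result can report which filter eliminated the last candidates.

```python
# Toy illustration of breaking one big WHERE clause into per-filter
# steps. The providers table and column names are invented for the
# example; placement's actual queries are far more involved.
import sqlite3


def count_candidates_per_filter(conn, request):
    """Apply each resource filter cumulatively; return (filter, survivors)."""
    clauses, steps = [], []
    for col, needed in request.items():
        clauses.append(f"{col} >= {needed}")
        sql = "SELECT COUNT(*) FROM providers WHERE " + " AND ".join(clauses)
        steps.append((col, conn.execute(sql).fetchone()[0]))
    return steps


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE providers "
             "(name TEXT, vcpu_free INT, disk_free INT, ram_free INT)")
conn.executemany("INSERT INTO providers VALUES (?, ?, ?, ?)",
                 [("cn1", 4, 100, 2048), ("cn2", 0, 500, 8192)])
steps = count_candidates_per_filter(
    conn, {"vcpu_free": 2, "disk_free": 200, "ram_free": 1024})
for col, left in steps:
    print(f"after {col}: {left} candidate(s) left")
```

Here the disk filter is the one that empties the candidate list, which is exactly the detail an operator currently cannot see.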
Best, -jay From openstack at nemebean.com Wed Aug 1 20:55:13 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 1 Aug 2018 15:55:13 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <90a5944a-085b-4cd2-d1b2-b490fc466bee@gmail.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <90a5944a-085b-4cd2-d1b2-b490fc466bee@gmail.com> Message-ID: <6b96c555-57d9-4fda-a061-10ae9cf49f09@nemebean.com> On 08/01/2018 02:22 PM, Matt Riedemann wrote: > On 8/1/2018 12:06 PM, Ben Nemec wrote: >> To close the loop on the problem I was having, it looks like the >> allocation_ratio config opts are now just defaults, and if you want to >> change ratios after the initial deployment you need to do so with the >> client. > > You mean how > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.disk_allocation_ratio > defaults to 0.0 and that's used in the ResourceTracker to set the > inventory? > > https://github.com/openstack/nova/blob/31e6e715e00571925b1163950ea028bdade60d76/nova/compute/resource_tracker.py#L120 > > > That should get defaulted to 1.0 if you didn't change the config option: > > https://github.com/openstack/nova/blob/31e6e715e00571925b1163950ea028bdade60d76/nova/objects/compute_node.py#L207 > > > If you wanted 2.0, then you should set the disk_allocation_ratio config > option to 2.0 on that host - I don't think that is a behavior change is it? I changed disk_allocation_ratio to 2.0 in the config file and it had no effect on the existing resource provider. I assume that is because I had initially deployed with it unset, so I got 1.0, and when I later wanted to change it the provider already existed with the default value. 
So in the past I could do the following: 1) Change disk_allocation_ratio in nova.conf 2) Restart nova-scheduler and/or nova-compute Now it seems like I need to do: 1) Change disk_allocation_ratio in nova.conf 2) Restart nova-scheduler, nova-compute, and nova-placement (or some subset of those?) 3) Use osc-placement to fix up the ratios on any existing resource providers > >> >> I will note that it's a little annoying that you have to specify all >> of the fields on this call. > > I agree with you. The "openstack resource provider inventory set" > command is similar in that it is a total overwrite of all inventory for > the provider: > > https://docs.openstack.org/osc-placement/latest/cli/index.html#resource-provider-inventory-set > > > So if you want to add just one inventory class (or change one) then you > have to repeat all of the existing inventory if you don't want to lose > that. And I don't think "openstack resource provider inventory class > set" lets you add new inventory classes, it only lets you update > existing ones. > > So we probably need something like an --amend option on both commands > which are sort of meta commands to retain everything else about the > inventory for the provider but only changes the fields that the user > specifies. > > We've mostly just been trying to get out *any* CLI support at all, so > what is there now is basic functionality, warts and all, and we can > iterate over time to make the tools more usable. > > To track this, I've created an RFE bug in launchpad: > > https://bugs.launchpad.net/placement-osc-plugin/+bug/1784932 > Cool, thanks. 
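The total-overwrite behaviour means a change to one field has to restate the full inventory. A hedged example (the UUID and values are placeholders; check the osc-placement docs for the exact flag syntax):

```shell
# Changing only the disk allocation ratio still requires repeating
# every resource class and total, or they are dropped.
RP_UUID=6a969900-bbf7-4725-959b-2db3092080c3
openstack resource provider inventory set $RP_UUID \
  --resource VCPU=8 \
  --resource MEMORY_MB=16384 \
  --resource DISK_GB=100 \
  --resource DISK_GB:allocation_ratio=2.0
```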
From jillr at redhat.com Wed Aug 1 22:05:02 2018 From: jillr at redhat.com (Jill Rouleau) Date: Wed, 01 Aug 2018 15:05:02 -0700 Subject: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition In-Reply-To: References: <1532974544.5688.10.camel@redhat.com> Message-ID: <1533161102.7169.12.camel@redhat.com> On Tue, 2018-07-31 at 07:38 -0400, Pradeep Kilambi wrote: > > > On Mon, Jul 30, 2018 at 2:17 PM Jill Rouleau wrote: > > On Mon, 2018-07-30 at 11:35 -0400, Pradeep Kilambi wrote: > > >  > > >  > > > On Mon, Jul 30, 2018 at 10:42 AM Alex Schultz > > > > > wrote: > > > > On Mon, Jul 30, 2018 at 8:32 AM, Martin Magr > > > > wrote: > > > > > > > > > > > > > > > On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi > t.co > > > > m> wrote: > > > > >> > > > > >> Your fellow reporter took a break from writing, but is now > > back > > > > on his > > > > >> pen. > > > > >> > > > > >> Welcome to the twenty-fifth edition of a weekly update in > > TripleO > > > > world! > > > > >> The goal is to provide a short reading (less than 5 minutes) > > to > > > > learn > > > > >> what's new this week. > > > > >> Any contributions and feedback are welcome. > > > > >> Link to the previous version: > > > > >> http://lists.openstack.org/pipermail/openstack-dev/2018-June/ > > 1314 > > > > 26.html > > > > >> > > > > >> +---------------------------------+ > > > > >> | General announcements | > > > > >> +---------------------------------+ > > > > >> > > > > >> +--> Rocky Milestone 3 is next week. After, any feature code > > will > > > > require > > > > >> Feature Freeze Exception (FFE), asked on the mailing-list. > > We'll > > > > enter a > > > > >> bug-fix only and stabilization period, until we can push the > > > > first stable > > > > >> version of Rocky. 
> > > > > > > > > > > > > > > Hey guys, > > > > > > > > > >   I would like to ask for FFE for backup and restore, where we > > > > ended up > > > > > deciding where is the best place for the code base for this > > > > project (please > > > > > see [1] for details). We believe that B&R support for > > overcloud > > > > control > > > > > plane will be good addition to a rocky release, but we started > > > > with this > > > > > initiative quite late indeed. The final result should the > > support > > > > in > > > > > openstack client, where "openstack overcloud (backup|restore)" > > > > would work as > > > > > a charm. Thanks in advance for considering this feature. > > > > > > > > >  > > > > Was there a blueprint/spec for this effort?  Additionally do we > > have > > > > a > > > > list of the outstanding work required for this? If it's just > > these > > > > two > > > > playbooks, it might be ok for an FFE. But if there's additional > > > > tripleoclient related changes, I wouldn't necessarily feel > > > > comfortable > > > > with these unless we have a complete list of work.  Just as a > > side > > > > note, I'm not sure putting these in tripleo-common is going to > > be > > > > the > > > > ideal place for this. > > > > Was it this review? https://review.openstack.org/#/c/582453/ > > > > For Stein we'll have an ansible role[0] and playbook repo[1] where > > these > > types of tasks should live. > > > > [0] https://github.com/openstack/ansible-role-openstack-operations  > > [1] https://review.openstack.org/#/c/583415/ > Thanks Jill! The issue is, we want to be able to backport this to > Queens once merged. With the new repos you're mentioning would this be > possible? If no, then this wont work for us unfortunately. > We wouldn't backport the new packages to Queens, however the repos will be on github and available to clone and use.  
This would be far preferable than adding them to tripleo-common so late in the rocky cycle then having to break them back out right away in stein. > >   > > > > > > >  > > > Thanks Alex. For Rocky, if we can ship the playbooks with relevant > > > docs we should be good. We will integrated with client in Stein > > > release with restore logic included. Regarding putting tripleo- > > common,  > > > we're open to suggestions. I think Dan just submitted the review > > so we > > > can get some eyes on the playbooks. Where do you suggest is better > > > place for these instead? > > >   > > > >  > > > > Thanks, > > > > -Alex > > > >  > > > > > Regards, > > > > > Martin > > > > > > > > > > [1] https://review.openstack.org/#/c/582453/ > > > > > > > > > >> > > > > >> +--> Next PTG will be in Denver, please propose topics: > > > > >> https://etherpad.openstack.org/p/tripleoci-ptg-stein > > > > >> +--> Multiple squads are currently brainstorming a framework > > to > > > > provide > > > > >> validations pre/post upgrades - stay in touch! > > > > >> > > > > >> +------------------------------+ > > > > >> | Continuous Integration | > > > > >> +------------------------------+ > > > > >> > > > > >> +--> Sprint theme: migration to Zuul v3 (More on > > > > >> https://trello.com/c/vyWXcKOB/841-sprint-16-goals) > > > > >> +--> Sagi is the rover and Chandan is the ruck. Please tell > > them > > > > any CI > > > > >> issue. > > > > >> +--> Promotion on master is 4 days, 0 days on Queens and Pike > > and > > > > 1 day on > > > > >> Ocata. > > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad- > > meet > > > > ing > > > > >> > > > > >> +-------------+ > > > > >> | Upgrades | > > > > >> +-------------+ > > > > >> > > > > >> +--> Good progress on major upgrades workflow, need reviews! 
> > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-s > > quad > > > > -status > > > > >> > > > > >> +---------------+ > > > > >> | Containers | > > > > >> +---------------+ > > > > >> > > > > >> +--> We switched python-tripleoclient to deploy containerized > > > > undercloud > > > > >> by default! > > > > >> +--> Image prepare via workflow is still work in progress. > > > > >> +--> More: > > > > >> https://etherpad.openstack.org/p/tripleo-containers-squad-sta > > tus > > > > >> > > > > >> +----------------------+ > > > > >> | config-download | > > > > >> +----------------------+ > > > > >> > > > > >> +--> UI integration is almost done (need review) > > > > >> +--> Bug with failure listing is being fixed: > > > > >> https://bugs.launchpad.net/tripleo/+bug/1779093 > > > > >> +--> More: > > > > >> https://etherpad.openstack.org/p/tripleo-config-download-squa > > d-st > > > > atus > > > > >> > > > > >> +--------------+ > > > > >> | Integration | > > > > >> +--------------+ > > > > >> > > > > >> +--> We're enabling decoupled deployment plans e.g for > > OpenShift, > > > > DPDK > > > > >> etc: > > > > >> https://review.openstack.org/#/q/topic:alternate_plans+(statu > > s:op > > > > en+OR+status:merged) > > > > >> (need reviews). > > > > >> +--> More: > > > > >> https://etherpad.openstack.org/p/tripleo-integration-squad-st > > atus > > > > >> > > > > >> +---------+ > > > > >> | UI/CLI | > > > > >> +---------+ > > > > >> > > > > >> +--> Good progress on network configuration via UI > > > > >> +--> Config-download patches are being reviewed and a lot of > > > > testing is > > > > >> going on. > > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-sq > > uad- > > > > status > > > > >> > > > > >> +---------------+ > > > > >> | Validations | > > > > >> +---------------+ > > > > >> > > > > >> +--> Working on OpenShift validations, need reviews. 
> > > > >> +--> More: > > > > >> https://etherpad.openstack.org/p/tripleo-validations-squad-st > > atus > > > > >> > > > > >> +---------------+ > > > > >> | Networking | > > > > >> +---------------+ > > > > >> > > > > >> +--> No updates this week. > > > > >> +--> More: > > > > >> https://etherpad.openstack.org/p/tripleo-networking-squad-sta > > tus > > > > >> > > > > >> +--------------+ > > > > >> | Workflows | > > > > >> +--------------+ > > > > >> > > > > >> +--> No updates this week. > > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-workflows > > -squ > > > > ad-status > > > > >> > > > > >> +-----------+ > > > > >> | Security | > > > > >> +-----------+ > > > > >> > > > > >> +--> Working on Secrets management and Limit TripleO users > > > > efforts > > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-security- > > squa > > > > d > > > > >> > > > > >> +------------+ > > > > >> | Owl fact  | > > > > >> +------------+ > > > > >> Elf owls live in a cacti. They are the smallest owls, and > > live in > > > > the > > > > >> southwestern United States and Mexico. It will sometimes make > > its > > > > home in > > > > >> the giant saguaro cactus, nesting in holes made by other > > animals. > > > > However, > > > > >> the elf owl isn’t picky and will also live in trees or on > > > > telephone poles. > > > > >> > > > > >> Source: > > > > >> http://mentalfloss.com/article/68473/15-mysterious-facts-abou > > t-ow > > > > ls > > > > >> > > > > >> Thank you all for reading and stay tuned! 
> > > > >> -- > > > > >> Your fellow reporter, Emilien Macchi > > > > >> > > > > >> > > > > > > ____________________________________________________________________ > > > > ______ > > > > >> OpenStack Development Mailing List (not for usage questions) > > > > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subjec > > t:un > > > > subscribe > > > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > -dev > > > > >> > > > > > > > > > > > > > > > > ____________________________________________________________________ > > > > ______ > > > > > OpenStack Development Mailing List (not for usage questions) > > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject > > :uns > > > > ubscribe > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack- > > dev > > > > > > > > >  > > > > > > ____________________________________________________________________ > > > > ______ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:u > > nsub > > > > scribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-de > > v > > > >  > > >  > > > --  > > > Cheers, > > > ~ Prad > > > > > ____________________________________________________________________ > > __ > > > ____ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:uns > > ubsc > > > ribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev_ > > ____________________________________________________________________ > > _____ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsub > > scribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > --  > Cheers, > ~ Prad > ______________________________________________________________________ > ____ > OpenStack 
Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubsc > ribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mriedemos at gmail.com Wed Aug 1 23:05:06 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 1 Aug 2018 18:05:06 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <6b96c555-57d9-4fda-a061-10ae9cf49f09@nemebean.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <90a5944a-085b-4cd2-d1b2-b490fc466bee@gmail.com> <6b96c555-57d9-4fda-a061-10ae9cf49f09@nemebean.com> Message-ID: <39e76be8-f3d2-09b6-54a7-b6c127f0aeb1@gmail.com> On 8/1/2018 3:55 PM, Ben Nemec wrote: > I changed disk_allocation_ratio to 2.0 in the config file and it had no > effect on the existing resource provider.  I assume that is because I > had initially deployed with it unset, so I got 1.0, and when I later > wanted to change it the provider already existed with the default value. Yeah I think so, unless the inventory changes we don't mess with changing the allocation ratio. >  So in the past I could do the following: > > 1) Change disk_allocation_ratio in nova.conf > 2) Restart nova-scheduler and/or nova-compute > > Now it seems like I need to do: > > 1) Change disk_allocation_ratio in nova.conf > 2) Restart nova-scheduler, nova-compute, and nova-placement (or some > subset of those?) Restarting the placement service wouldn't have any effect here. > 3) Use osc-placement to fix up the ratios on any existing resource > providers Yeah that's what you'd need to do in this case. 

I believe Jay Pipes might have somewhere between 3 and 10 specs for the allocation ratio / nova conf / placement inventory / aggregates problems floating around, so he's probably best to weigh in here. Like: https://review.openstack.org/#/c/552105/ -- Thanks, Matt From imain at redhat.com Wed Aug 1 23:34:31 2018 From: imain at redhat.com (Ian Main) Date: Wed, 1 Aug 2018 16:34:31 -0700 Subject: [openstack-dev] [tripleo] Patches to speed up plan operations Message-ID: Hey folks! So I've been working on some patches to speed up plan operations in TripleO. This was originally driven by the UI needing to be able to perform a 'plan upload' in something less than several minutes. :) https://review.openstack.org/#/c/581153/ https://review.openstack.org/#/c/581141/ I have a functioning set of patches, and it actually cuts over 2 minutes off the overcloud deployment time. Without patch: + openstack overcloud plan create --templates /home/stack/tripleo-heat-templates/ overcloud Creating Swift container to store the plan Creating plan from template files in: /home/stack/tripleo-heat-templates/ Plan created. real 3m3.415s With patch: + openstack overcloud plan create --templates /home/stack/tripleo-heat-templates/ overcloud Creating Swift container to store the plan Creating plan from template files in: /home/stack/tripleo-heat-templates/ Plan created. real 0m44.694s This is on VMs. On real hardware it now takes something like 15-20 seconds to do the plan upload which is much more manageable from the UI standpoint. Some things about what this patch does: - It makes use of process-templates.py (written for the undercloud) to process the jinjafied templates. This reduces replication with the existing version in the code base and is very fast as it's all done on local disk. - It stores the bulk of the templates as a tarball in swift. Any individual files in swift take precedence over the contents of the tarball so it should be backwards compatible. 
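The tarball approach above could be sketched roughly as follows. This is only an illustration (the real change lives in the tripleo-common reviews linked above); the upload itself would be a single call along the lines of swift_conn.put_object(container, 'templates.tar.gz', tarball_bytes) via python-swiftclient:

```python
# Rough sketch: pack a plan's templates into one gzipped tarball so the
# plan upload is a single Swift PUT instead of one PUT per file.
import io
import os
import tarfile


def create_plan_tarball(templates_dir):
    """Return gzipped tarball bytes of every file under templates_dir."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for root, _dirs, files in os.walk(templates_dir):
            for name in files:
                path = os.path.join(root, name)
                # Store paths relative to the plan root so individual
                # objects in Swift can still override tarball members,
                # keeping the backwards-compatible precedence described.
                tar.add(path, arcname=os.path.relpath(path, templates_dir))
    return buf.getvalue()
```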
This is a great speed up as we're not accessing a lot of individual files in swift. There's still some work to do; cleaning up and fixing the unit tests, testing upgrades etc. I just wanted to get some feedback on the general idea and hopefully some reviews and/or help - especially with the unit test stuff. Thanks everyone! Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Thu Aug 2 01:03:19 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 1 Aug 2018 19:03:19 -0600 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: <88d7f66c-4215-b032-0b98-2671f14dab21@redhat.com> References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> <927f5ff4ec528bdcc5877c7a1a5635c62f5f1cb5.camel@redhat.com> <5c220d66-d4e5-2b19-048c-af3a37c846a3@nemebean.com> <88d7f66c-4215-b032-0b98-2671f14dab21@redhat.com> Message-ID: On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya wrote: > On 7/6/18 7:02 PM, Ben Nemec wrote: >> >> >> >> On 07/05/2018 01:23 PM, Dan Prince wrote: >>> >>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote: >>>> >>>> >>>> I would almost rather see us organize the directories by service >>>> name/project instead of implementation. >>>> >>>> Instead of: >>>> >>>> puppet/services/nova-api.yaml >>>> puppet/services/nova-conductor.yaml >>>> docker/services/nova-api.yaml >>>> docker/services/nova-conductor.yaml >>>> >>>> We'd have: >>>> >>>> services/nova/nova-api-puppet.yaml >>>> services/nova/nova-conductor-puppet.yaml >>>> services/nova/nova-api-docker.yaml >>>> services/nova/nova-conductor-docker.yaml >>>> >>>> (or perhaps even another level of directories to indicate >>>> puppet/docker/ansible?) >>> >>> >>> I'd be open to this but doing changes on this scale is a much larger >>> developer and user impact than what I was thinking we would be willing >>> to entertain for the issue that caused me to bring this up (i.e. 
how to >>> identify services which get configured by Ansible). >>> >>> Its also worth noting that many projects keep these sorts of things in >>> different repos too. Like Kolla fully separates kolla-ansible and >>> kolla-kubernetes as they are quite divergent. We have been able to >>> preserve some of our common service architectures but as things move >>> towards kubernetes we may which to change things structurally a bit >>> too. >> >> >> True, but the current directory layout was from back when we intended to >> support multiple deployment tools in parallel (originally >> tripleo-image-elements and puppet). Since I think it has become clear that >> it's impractical to maintain two different technologies to do essentially >> the same thing I'm not sure there's a need for it now. It's also worth >> noting that kolla-kubernetes basically died because there wasn't enough >> people to maintain both deployment methods, so we're not the only ones who >> have found that to be true. If/when we move to kubernetes I would >> anticipate it going like the initial containers work did - development for a >> couple of cycles, then a switch to the new thing and deprecation of the old >> thing, then removal of support for the old thing. >> >> That being said, because of the fact that the service yamls are >> essentially an API for TripleO because they're referenced in user > > > this ^^ > >> resource registries, I'm not sure it's worth the churn to move everything >> either. I think that's going to be an issue either way though, it's just a >> question of the scope. _Something_ is going to move around no matter how we >> reorganize so it's a problem that needs to be addressed anyway. > > > [tl;dr] I can foresee reorganizing that API becomes a nightmare for > maintainers doing backports for queens (and the LTS downstream release based > on it). Now imagine kubernetes support comes within those next a few years, > before we can let the old API just go... 
> > I have an example [0] to share all that pain brought by a simple move of > 'API defaults' from environments/services-docker to environments/services > plus environments/services-baremetal. Each time a file changes contents by > its old location, like here [1], I had to run a lot of sanity checks to > rebase it properly. Like checking for the updated paths in resource > registries are still valid or had to/been moved as well, then picking the > source of truth for diverged old vs changes locations - all that to loose > nothing important in progress. > > So I'd say please let's do *not* change services' paths/namespaces in t-h-t > "API" w/o real need to do that, when there is no more alternatives left to > that. > Ok so it's time to dig this thread back up. I'm currently looking at the chrony support which will require a new service[0][1]. Rather than add it under puppet, we'll likely want to leverage ansible. So I guess the question is where do we put services going forward? Additionally as we look to truly removing the baremetal deployment options and puppet service deployment, it seems like we need to consolidate under a single structure. Given that we don't want to force too much churn, does this mean that we should align to the docker/services/*.yaml structure or should we be proposing a new structure that we can try to align on? There is outstanding tech debt around the nested stacks and references within these services from when we added the container deployments, so it's something that would be beneficial to start tackling sooner rather than later. Personally I think we're always going to have the issue when we rename files that could have been referenced by custom templates, but I don't think we can continue to carry the outstanding tech debt around these static locations. Should we be investing in coming up with some sort of mappings that we can use/warn a user on when we move files?
Thanks, -Alex [0] https://review.openstack.org/#/c/586679/ [1] https://review.openstack.org/#/c/588111/ > [0] https://review.openstack.org/#/q/topic:containers-default-stable/queens > [1] https://review.openstack.org/#/c/567810 > >> >> -Ben >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From liu.xuefeng1 at zte.com.cn Thu Aug 2 01:32:49 2018 From: liu.xuefeng1 at zte.com.cn (liu.xuefeng1 at zte.com.cn) Date: Thu, 2 Aug 2018 09:32:49 +0800 (CST) Subject: [openstack-dev] Candidacy to continue my work as the Senlin PTL for the Stein cycle Message-ID: <201808020932495789683@zte.com.cn> Hi all, This is my candidacy to continue my work as the Senlin PTL for the Stein cycle. In the Rocky cycle, we finished many feature, testing and bug-fixing tasks. For example: * Kubernetes: Added a dependency relationship between the master cluster and the worker cluster created for Kubernetes. * Docker driver: Supported the name update operation for docker profiles. * Nova server: Added operation support to migrate a nova server node. * Health check improvements: Added a new detection type that actively polls the node health using a URL specified in the health policy. * Dashboard: Added a "Resize" action for the cluster panel. * Testing: API/function/integration tests were moved to senlin-tempest-plugin.
At the same time, in the Rocky cycle I found and added two developers to the Senlin team, both of whom have done great work for the team. As PTL in the Stein cycle, I'd like to continue to focus on the following tasks: * Grow the Senlin team of contributors and core reviewers. * Continue to improve the k8s-on-Senlin feature implementation. * Collaborate with other OpenStack projects through joint blueprints. * Actively monitor incoming bug reports and get them assigned and fixed. * Acknowledge OpenStack-wide goals. Thanks for taking the time to read through this roadmap and to consider my candidacy. Best Regards, XueFeng Liu (IRC: XueFeng) -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhaochao1984 at gmail.com Thu Aug 2 01:51:00 2018 From: zhaochao1984 at gmail.com (Zhao Chao) Date: Thu, 2 Aug 2018 09:51:00 +0800 Subject: [openstack-dev] [all][election][tc] Lederless projects. In-Reply-To: <20180801003249.GE15918@thor.bakeyournoodle.com> References: <20180731235512.GB15918@thor.bakeyournoodle.com> <20180801003249.GE15918@thor.bakeyournoodle.com> Message-ID: The Trove team had our weekly meeting last night; all attending core members and the new contributors from the Samsung R&D Center in Krakow, Poland agreed that we only have Dariusz Krol as a PTL candidate, and hope he could be accepted as a valid candidate [1]. Tony has pointed me to the process for appointing PTLs for leaderless projects, thanks. I talked about the situation of the current Trove team before ([2], and privately responded, which resulted in the report [3] for the TC project health tracking mail). I thought about continuing my role as the Trove PTL, but it turns out it's better to have someone who could take more time on the project, and it happens that we just have a whole team joining us.
I think all of the current active team members will continue working on the project, but sadly none of us has much bandwidth for the project now, so we think this could be a good chance to hand the project over to a whole team focusing on Trove, so that it can progress (though maybe quite slowly). There are always worries about the health of Trove and talk about re-architecting it; the project team has also discussed these topics many times internally, but it seems impossible for the time being and the foreseeable future, until we have a bigger core team and more participation. We all think there will be more opportunities to change this situation with a whole team (though it may be small now) joining to focus on Trove. We would love to hear more suggestions and participation from those who are interested in Trove. Thanks. [1] http://eavesdrop.openstack.org/meetings/trove/2018/trove.2018-08-01-14.00.log.html#l-71 [2] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132475.html [3] https://wiki.openstack.org/wiki/OpenStack_health_tracker#Trove On Wed, Aug 1, 2018 at 8:32 AM, Tony Breeds wrote: > On Wed, Aug 01, 2018 at 09:55:13AM +1000, Tony Breeds wrote: > > > > Hello all, > > The PTL Nomination period is now over. The official candidate list > > is available on the election website[0]. > > > > There are 8 projects without candidates, so according to this > > resolution[1], the TC will have to decide how the following > > projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm, > > RefStack, Searchlight, Trove and Winstackers.
> > Hello TC, > A few extra details[1]: > > --------------------------------------------------- > Projects[1] : 65 > Projects with candidates : 57 ( 87.69%) > Projects with election : 2 ( 3.08%) > --------------------------------------------------- > Need election : 2 (Senlin Tacker) > Need appointment : 8 (Dragonflow Freezer Loci Packaging_Rpm > RefStack > Searchlight Trove Winstackers) > =================================================== > Stats gathered @ 2018-08-01 00:11:59 UTC > > Of the 8 projects that can be considered leaderless, Trove did have a > candidate[2] that doesn't meet the ATC criteria in that they do not > have a merged change. > > I also excluded Security due to the governance review[3] to remove it as > a project and the companion email discussion[4] > > Yours Tony. > > [1] http://paste.openstack.org/show/727002 > [2] https://review.openstack.org/587333 > [3] https://review.openstack.org/586896 > [4] http://lists.openstack.org/pipermail/openstack-dev/2018- > July/132595.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- To be free as in freedom. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianyrchoi at gmail.com Thu Aug 2 02:03:43 2018 From: ianyrchoi at gmail.com (Ian Y. 
Choi) Date: Thu, 2 Aug 2018 11:03:43 +0900 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <16e69b47c8b71bf6f920ab8f3df61928@arcor.de> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> <5B5F295F.3090608@openstack.org> <1f5afd62cc3a9a8923586a404e707366@arcor.de> <16e69b47c8b71bf6f920ab8f3df61928@arcor.de> Message-ID: <83a0d94f-dc74-e1c1-951b-1fcec2fca6f1@gmail.com> Hello Sebastian, Korean is also at 100% translation now. About two weeks ago, there was a discussion about how to include the list of translators in each translated document. My proposal is mentioned in [1] - do you think that is a good idea worth implementing, or would parsing the names of translators from the header lines of the po files (e.g., the four lines in [2]) be a better idea? With many thanks, /Ian [1] http://eavesdrop.openstack.org/irclogs/%23openstack-i18n/%23openstack-i18n.2018-07-19.log.html#t2018-07-19T15:09:46 [2] http://git.openstack.org/cgit/openstack/i18n/tree/doc/source/locale/de/LC_MESSAGES/doc.po#n1 Frank Kloeker wrote on 7/31/2018 6:39 PM: > Hi Sebastian, > > okay, it's translated now. In the Edge whitepaper there is a problem with > XML parsing of the term AT&T. Don't know how to escape this. Maybe you > will see the warning during import too.
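On the AT&T parsing warning mentioned just above: the bare ampersand is the problem — XML requires it to be written as the entity &amp;. A minimal sketch using Python's standard library (illustrative only; this is not the actual Zanata import code):

```python
from xml.sax.saxutils import escape

# '&', '<' and '>' are XML metacharacters and must be entity-encoded
# before a translated string is embedded in an XML-based format.
print(escape("AT&T"))            # AT&amp;T
print(escape("a < b & b > c"))   # a &lt; b &amp; b &gt; c
```

Running the escaped strings through the importer should avoid the parse warning, since the entities are decoded back to the original text on the consumer side.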
> > kind regards > > Frank > > Am 2018-07-30 20:09, schrieb Sebastian Marcet: >> Hi Frank, >> i was double checking pot file and realized that original pot missed >> some parts of the original paper (subsections of the paper) apologizes >> on that >> i just re uploaded an updated pot file with missing subsections >> >> regards >> >> On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker wrote: >> >>> Hi Jimmy, >>> >>> from the GUI I'll get this link: >>> >> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >> >>> [1] >>> >>> paper version  are only in container whitepaper: >>> >>> >> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >> >>> [2] >>> >>> In general there is no group named papers >>> >>> kind regards >>> >>> Frank >>> >>> Am 2018-07-30 17:06, schrieb Jimmy McArthur: >>> Frank, >>> >>> We're getting a 404 when looking for the pot file on the Zanata API: >>> >> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >> >>> [3] >>> >>> As a result, we can't pull the po files.  Any idea what might be >>> happening? >>> >>> Seeing the same thing with both papers... >>> >>> Thank you, >>> Jimmy >>> >>> Frank Kloeker wrote: >>> Hi Jimmy, >>> >>> Korean and German version are now done on the new format. Can you >>> check publishing? >>> >>> thx >>> >>> Frank >>> >>> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >>> Hi all - >>> >>> Follow up on the Edge paper specifically: >>> >> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >> >>> [4] This is now available. As I mentioned on IRC this morning, it >>> should >>> be VERY close to the PDF.  Probably just needs a quick review. >>> >>> Let me know if I can assist with anything. >>> >>> Thank you to i18n team for all of your help!!! 
>>> >>> Cheers, >>> Jimmy >>> >>> Jimmy McArthur wrote: >>> Ian raises some great points :) I'll try to address below... >>> >>> Ian Y. Choi wrote: >>> Hello, >>> >>> When I saw overall translation source strings on container >>> whitepaper, I would infer that new edge computing whitepaper >>> source strings would include HTML markup tags. >>> One of the things I discussed with Ian and Frank in Vancouver is >>> the expense of recreating PDFs with new translations.  It's >>> prohibitively expensive for the Foundation as it requires design >>> resources which we just don't have.  As a result, we created the >>> Containers whitepaper in HTML, so that it could be easily updated >>> w/o working with outside design contractors.  I indicated that we >>> would also be moving the Edge paper to HTML so that we could prevent >>> that additional design resource cost. >>> On the other hand, the source strings of edge computing whitepaper >>> which I18n team previously translated do not include HTML markup >>> tags, since the source strings are based on just text format. >>> The version that Akihiro put together was based on the Edge PDF, >>> which we unfortunately didn't have the resources to implement in the >>> same format. >>> >>> I really appreciate Akihiro's work on RST-based support on >>> publishing translated edge computing whitepapers, since >>> translators do not have to re-translate all the strings. >>> I would like to second this. It took a lot of initiative to work on >>> the RST-based translation.  At the moment, it's just not usable for >>> the reasons mentioned above. >>> On the other hand, it seems that I18n team needs to investigate on >>> translating similar strings of HTML-based edge computing whitepaper >>> source strings, which would discourage translators. >>> Can you expand on this? I'm not entirely clear on why the HTML >>> based translation is more difficult. >>> >>> That's my point of view on translating edge computing whitepaper. 
>>> >>> For translating container whitepaper, I want to further ask the >>> followings since *I18n-based tools* >>> would mean for translators that translators can test and publish >>> translated whitepapers locally: >>> >>> - How to build translated container whitepaper using original >>> Silverstripe-based repository? >>> https://docs.openstack.org/i18n/latest/tools.html [5] describes >>> well how to build translated artifacts for RST-based OpenStack >>> repositories >>> but I could not find the way how to build translated container >>> whitepaper with translated resources on Zanata. >>> This is a little tricky.  It's possible to set up a local version >>> of the OpenStack website >>> >> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >> >>> [6]).  However, we have to manually ingest the po files as they are >>> completed and then push them out to production, so that wouldn't do >>> much to help with your local build.  I'm open to suggestions on how >>> we can make this process easier for the i18n team. >>> >>> Thank you, >>> Jimmy >>> >>> With many thanks, >>> >>> /Ian >>> >>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>> Frank, >>> >>> I'm sorry to hear about the displeasure around the Edge paper.  As >>> mentioned in a prior thread, the RST format that Akihiro worked did >>> not work with the  Zanata process that we have been using with our >>> CMS.  Additionally, the existing EDGE page is a PDF, so we had to >>> build a new template to work with the new HTML whitepaper layout we >>> created for the Containers paper. I outlined this in the thread " >>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>> Whitepaper Translation" on 6/25/18 and mentioned we would be ready >>> with the template around 7/13. >>> >>> We completed the work on the new whitepaper template and then put >>> out the pot files on Zanata so we can get the po language files >>> back. 
If this process is too cumbersome for the translation team, >>> I'm open to discussion, but right now our entire translation process >>> is based on the official OpenStack Docs translation process outlined >>> by the i18n team: >>> https://docs.openstack.org/i18n/latest/en_GB/tools.html [7] >>> >>> Again, I realize Akihiro put in some work on his own proposing the >>> new translation type. If the i18n team is moving to this format >>> instead, we can work on redoing our process. >>> >>> Please let me know if I can clarify further. >>> >>> Thanks, >>> Jimmy >>> >>> Frank Kloeker wrote: >>> Hi Jimmy, >>> >>> permission was added for you and Sebastian. The Container Whitepaper >>> is on the Zanata frontpage now. But we removed Edge Computing >>> whitepaper last week because there is a kind of displeasure in the >>> team since the results of translation are still not published beside >>> Chinese version. It would be nice if we have a commitment from the >>> Foundation that results are published in a specific timeframe. This >>> includes your requirements until the translation should be >>> available. >>> >>> thx Frank >>> >>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>> Sorry, I should have also added... we additionally need permissions >>> so >>> that we can add the a new version of the pot file to this project: >>> >> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >> >>> [8] Thanks! >>> Jimmy >>> >>> Jimmy McArthur wrote: >>> Hi all - >>> >>> We have both of the current whitepapers up and available for >>> translation.  Can we promote these on the Zanata homepage? >>> >>> >> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >> >>> [9] >>> >> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >> >>> [10] Thanks all! 
>>> Jimmy >>> >>> >> __________________________________________________________________________ >> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> [12] >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] >> >> >> >> Links: >> ------ >> [1] >> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >> >> [2] >> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >> >> [3] >> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >> >> [4] >> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >> >> [5] https://docs.openstack.org/i18n/latest/tools.html >> [6] >> https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >> [7] 
https://docs.openstack.org/i18n/latest/en_GB/tools.html >> [8] >> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >> >> [9] >> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >> >> [10] >> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >> >> [11] >> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> [12] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gong.yongsheng at 99cloud.net Thu Aug 2 02:36:51 2018 From: gong.yongsheng at 99cloud.net (=?GBK?B?uajTwMn6?=) Date: Thu, 2 Aug 2018 10:36:51 +0800 (CST) Subject: [openstack-dev] [Tacker] - TACKER + NETWORKING_SFC + NSH In-Reply-To: References: Message-ID: <2da24d4b.3736.164f87eb8f1.Coremail.gong.yongsheng@99cloud.net> William, tacker is just using network-sfc API, we have tested the ovs driver of it. regards, yong sheng gong -- 龚永生 九州云信息科技有限公司 99CLOUD Co. Ltd. 邮箱(Email):gong.yongsheng at 99cloud.net 地址:北京市海淀区上地三街嘉华大厦B座806 Addr : Room 806, Tower B, Jiahua Building, No. 9 Shangdi 3rd Street, Haidian District, Beijing, China 手机(Mobile):+86-18618199879 公司网址(WebSite):http://99cloud.net At 2018-08-01 02:17:14, "william sales" wrote: Hello guys, is there any version of Tacker that allows the use of networking_sfc with NSH? Thankful. William Sales -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From linghucongsong at 163.com Thu Aug 2 03:01:31 2018 From: linghucongsong at 163.com (linghucongsong) Date: Thu, 2 Aug 2018 11:01:31 +0800 (CST) Subject: [openstack-dev] [tricircle] Tricircle or Trio2o In-Reply-To: References: Message-ID: <7ed0df37.65a5.164f8954cf9.Coremail.linghucongsong@163.com> Hi Andrea! Yes, just as you said, Tricircle now only works on networking, because Trio2o is not an official OpenStack project, so for a long time nobody has contributed to it. But recently, for the coming OpenStack Stein cycle, we have a plan to make Tricircle and Trio2o work together; see the Tricircle Stein plan at the link below: https://etherpad.openstack.org/p/tricircle-stein-plan After this is finished we can use Tricircle and Trio2o together and make multi-site OpenStack solutions more effective. At 2018-08-02 00:55:30, "Andrea Franceschini" wrote: >Hello All, > >While I was looking for multisite openstack solutions I stumbled on >Tricircle project which seemed fairly perfect for the job except that >it was split in two parts, tricircle itself for the network part and >Trio2o for all the rest. > >Now it seems that the Trio2o project is no longer maintained and I'm >wondering what other options exist for multisite openstack, stated >that tricircle seems more NFV oriented. > >Actually a heat multisite solution would work too, but I cannot find >any reference to this kind of solutions. > >Do you have any idea/advice? > >Thanks, > >Andrea > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zigo at debian.org Thu Aug 2 03:13:19 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 2 Aug 2018 05:13:19 +0200 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> Message-ID: <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> On 07/12/2018 10:38 PM, Thomas Goirand wrote: > Hi everyone! > > [...] Here are more examples that show why we should be gating earlier with newer Python versions: Nova: https://review.openstack.org/#/c/584365/ Glance: https://review.openstack.org/#/c/586716/ Murano: https://bugs.debian.org/904581 Pyghmi: https://bugs.debian.org/905213 There are also some "raise StopIteration" issues in: - ceilometer - cinder - designate - glance - glare - heat - karbor - manila - murano - networking-ovn - neutron-vpnaas - nova - rally - zaqar It'd be nice to have these addressed ASAP. Cheers, Thomas Goirand (zigo) From soulxu at gmail.com Thu Aug 2 05:12:54 2018 From: soulxu at gmail.com (Alex Xu) Date: Thu, 2 Aug 2018 13:12:54 +0800 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> Message-ID: 2018-08-02 4:09 GMT+08:00 Jay Pipes : > On 08/01/2018 02:02 PM, Chris Friesen wrote: > >> On 08/01/2018 11:32 AM, melanie witt wrote: >> >> I think it's definitely a significant issue that troubleshooting "No >>> allocation >>> candidates returned" from placement is so difficult. However, it's not >>> straightforward to log detail in placement when the request for >>> allocation >>> candidates is essentially "SELECT * FROM nodes WHERE cpu usage < needed >>> and disk >>> usage < needed and memory usage < needed" and the result is returned >>> from the API.
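For reference, the "raise StopIteration" issue in zigo's list above comes from PEP 479, which Python 3.7 enforces by default: a StopIteration that escapes a generator body is turned into a RuntimeError. A generic before/after sketch (not code from any of the listed projects):

```python
# Pre-3.7 idiom: let next() raise StopIteration to end the generator early.
def take_bad(iterable, n):
    it = iter(iterable)
    for _ in range(n):
        yield next(it)  # StopIteration escapes -> RuntimeError under PEP 479

# Portable fix: catch StopIteration and return from the generator instead.
def take(iterable, n):
    it = iter(iterable)
    for _ in range(n):
        try:
            yield next(it)
        except StopIteration:
            return
```

On Python 3.6 and earlier both variants behave the same (the second only emits a DeprecationWarning for the first's pattern); from 3.7 on, exhausting the source in `take_bad` crashes with RuntimeError while `take` simply stops.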
>>> >> I think the only way to get useful info on a failure would be to break >> down the huge SQL statement into subclauses and store the results of the >> intermediate queries. >> > > This is a good idea and something that can be done. > That sounds like you need a separate SQL query for each resource to get the intermediate results; will that have worse performance than a single query to get the final result? > > Unfortunately, it's refactoring work and as a community, we tend to > prioritize fancy features like NUMA topology and CPU pinning over > refactoring work. > > Best, > -jay > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harlowja at fastmail.com Thu Aug 2 05:34:45 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Wed, 01 Aug 2018 22:34:45 -0700 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <5B61F59A.1080502@windriver.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> Message-ID: <5B6297F5.9050507@fastmail.com> If I could, I would have something *like* the EXPLAIN syntax for looking at a sql query, but instead of telling me the query plan for a sql query, it would tell me the decisions (placement plan?) that resulted in a given resource being placed at a certain location. And I would be able to, say, request the explanation for a given request id (historical even) so that analysis could be done post-change and pre-change (say I update the algorithm for selection) so that the effects of alterations to said decisions could be determined.
If it could also have a front-end like what is at http://sorting.at/ (press the play button) that'd be super sweet also (but not for sorting, but instead for placement, which if u squint at that webpage could have something similar built). My 3 cents, ha -Josh Chris Friesen wrote: > On 08/01/2018 11:32 AM, melanie witt wrote: > >> I think it's definitely a significant issue that troubleshooting "No >> allocation >> candidates returned" from placement is so difficult. However, it's not >> straightforward to log detail in placement when the request for >> allocation >> candidates is essentially "SELECT * FROM nodes WHERE cpu usage < >> needed and disk >> usage < needed and memory usage < needed" and the result is returned >> from the API. > > I think the only way to get useful info on a failure would be to break > down the huge SQL statement into subclauses and store the results of the > intermediate queries. So then if it failed placement could log something > like: > > hosts with enough CPU: > hosts that also have enough disk: > hosts that also have enough memory: > hosts that also meet extra spec host aggregate keys: > hosts that also meet image properties host aggregate keys: > hosts that also have requested PCI devices: > > And maybe we could optimize the above by only emitting logs where the > list has a length less than X (to avoid flooding the logs with hostnames > in large clusters). > > This would let you zero in on the things that finally caused the list to > be whittled down to nothing. 
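The stepwise filtering-with-logging that Chris sketches above can be illustrated in a few lines (the host fields and request keys are invented for the example — this is not placement's actual schema or query):

```python
def explain_no_valid_host(hosts, want):
    """Filter hosts one resource class at a time, recording which
    candidates survive each step, so a 'no valid host' result shows
    which constraint emptied the list."""
    steps = [
        ("enough VCPU",   lambda h: h["vcpu_free"] >= want["vcpu"]),
        ("enough disk",   lambda h: h["disk_free"] >= want["disk_gb"]),
        ("enough memory", lambda h: h["ram_free"] >= want["ram_mb"]),
    ]
    survivors, trace = list(hosts), []
    for label, keep in steps:
        survivors = [h for h in survivors if keep(h)]
        trace.append((label, [h["name"] for h in survivors]))
    return survivors, trace
```

When `survivors` comes back empty, the first step in `trace` with an empty list names the constraint that whittled the candidates down to nothing — exactly the "zero in" signal described above, at the cost of running one filter per step instead of a single combined query.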
> > Chris > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From marios at redhat.com Thu Aug 2 05:45:11 2018 From: marios at redhat.com (Marios Andreou) Date: Thu, 2 Aug 2018 08:45:11 +0300 Subject: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO In-Reply-To: References: Message-ID: +1 ! On Wed, Aug 1, 2018 at 2:31 PM, Giulio Fidente wrote: > Hi, > > I would like to propose Lukas Bezdicka core on TripleO. > > Lukas did a lot of work in our tripleoclient, tripleo-common and > tripleo-heat-templates repos to make FFU possible. > > FFU, which is meant to permit upgrades from Newton to Queens, requires > in depth understanding of many TripleO components (for example Heat, > Mistral and the TripleO client) but also of specific TripleO features > which were added during the course of the three releases (for example > config-download and upgrade tasks). I believe his FFU work to have been > very challenging. > > Given his broad understanding, more recently Lukas started helping with > reviews in other areas. > > I am so sure he'll be a great addition to our group that I am not even > looking for comments, just votes :D > -- > Giulio Fidente > GPG KEY: 08D733BA > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpena at redhat.com Thu Aug 2 08:26:47 2018 From: jpena at redhat.com (Javier Pena) Date: Thu, 2 Aug 2018 04:26:47 -0400 (EDT) Subject: [openstack-dev] [all][election][tc] Lederless projects.
In-Reply-To: <20180801003249.GE15918@thor.bakeyournoodle.com> References: <20180731235512.GB15918@thor.bakeyournoodle.com> <20180801003249.GE15918@thor.bakeyournoodle.com> Message-ID: <406614116.22425715.1533198407835.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On Wed, Aug 01, 2018 at 09:55:13AM +1000, Tony Breeds wrote: > > > > Hello all, > > The PTL Nomination period is now over. The official candidate list > > is available on the election website[0]. > > > > There are 8 projects without candidates, so according to this > > resolution[1], the TC will have to decide how the following > > projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm, The Packaging RPM team had our weekly meeting yesterday. We are sorry for the inconvenience caused by some miscommunication on our side. We decided to propose Dirk Mueller as PTL for TC appointment for the Stein cycle [1], and we will make an effort to avoid this situation in the future. Thanks, Javier [1] - http://eavesdrop.openstack.org/meetings/rpm_packaging/2018/rpm_packaging.2018-08-01-13.01.log.html#l-44 > > RefStack, Searchlight, Trove and Winstackers. > > Hello TC, > A few extra details[1]: > > --------------------------------------------------- > Projects[1] : 65 > Projects with candidates : 57 ( 87.69%) > Projects with election : 2 ( 3.08%) > --------------------------------------------------- > Need election : 2 (Senlin Tacker) > Need appointment : 8 (Dragonflow Freezer Loci Packaging_Rpm RefStack > Searchlight Trove Winstackers) > =================================================== > Stats gathered @ 2018-08-01 00:11:59 UTC > > Of the 8 projects that can be considered leaderless, Trove did have a > candidate[2] that doesn't meet the ATC criteria in that they do not > have a merged change. > > I also excluded Security due to the governance review[3] to remove it as > a project and the companion email discussion[4] > > Yours Tony.
> > [1] http://paste.openstack.org/show/727002 > [2] https://review.openstack.org/587333 > [3] https://review.openstack.org/586896 > [4] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132595.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ifatafekn at gmail.com Thu Aug 2 08:43:24 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Thu, 2 Aug 2018 11:43:24 +0300 Subject: [openstack-dev] [vitrage][ptg] Vitrage virtual PTG Message-ID: Hi, As discussed in our IRC meeting yesterday [1], we will hold the Vitrage virtual PTG on the first week of October. If you would like to participate, you are welcome to add your name, time zone and ideas for discussion in the PTG etherpad[2]. [1] http://eavesdrop.openstack.org/meetings/vitrage/2018/vitrage.2018-08-01-08.00.log.html [2] https://etherpad.openstack.org/p/vitrage-stein-ptg Br, Ifat -------------- next part -------------- An HTML attachment was scrubbed... URL: From andr.kurilin at gmail.com Thu Aug 2 08:43:35 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Thu, 2 Aug 2018 11:43:35 +0300 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> Message-ID: Hi Thomas! On Thu, 2 Aug 2018 at 06:13, Thomas Goirand wrote: > On 07/12/2018 10:38 PM, Thomas Goirand wrote: > > Hi everyone! > > > > [...] 
> Here's more examples that shows why we should be gating earlier with > newer Python versions: > > Nova: > https://review.openstack.org/#/c/584365/ > > Glance: > https://review.openstack.org/#/c/586716/ > > Murano: > https://bugs.debian.org/904581 > > Pyghmi: > https://bugs.debian.org/905213 > > There's also some "raise StopIteration" issues in: > - ceilometer > - cinder > - designate > - glance > - glare > - heat > - karbor > - manila > - murano > - networking-ovn > - neutron-vpnaas > - nova > - rally Can you provide any traceback or steps to reproduce the issue for Rally project ? > > - zaqar > > It'd be nice to have these addressed ASAP. > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Aug 2 08:58:53 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 2 Aug 2018 10:58:53 +0200 Subject: [openstack-dev] [all][election] PTL nominations are now closed In-Reply-To: <20180731235512.GB15918@thor.bakeyournoodle.com> References: <20180731235512.GB15918@thor.bakeyournoodle.com> Message-ID: <5e4754db-a601-afa0-2690-459373fdc7c4@openstack.org> Tony Breeds wrote: > [...] > There are 8 projects without candidates, so according to this > resolution[1], the TC will have to decide how the following > projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm, > RefStack, Searchlight, Trove and Winstackers. Here is my take on that... Packaging_Rpm has a late candidate (Dirk Mueller). We always have a few teams per cycle that miss the election call, that would fall under that. 
Trove had a volunteer (Dariusz Krol), but that person did not meet the requirements for candidates. Given that the previous PTL (Zhao Chao) plans to stay around to help onboard the new contributors, I'd support appointing Dariusz. I suspect Freezer falls in the same bucket as Packaging_Rpm and we should get a candidate there. I would reach out to caoyuan to see if they would be interested in stepping up. LOCI is also likely in the same bucket. However, given that it's a deployment project, if we can't get anyone to step up and guarantee some level of currentness, we should consider removing it from the "official" list. Dragonflow is a bit like the LOCI case. It feels like a miss too, but if it's not, given that it's an add-on project that runs within Neutron, I would consider removing it from the "official" list if we can't find anyone to step up. For Winstackers and Searchlight, those are low-activity teams (18 and 13 commits), which brings the question of PTL workload for feature-complete projects. Finally, RefStack: I feel like this should be wrapped into an Interoperability SIG, since that project team is not producing "OpenStack", but helping to foster OpenStack interoperability. Having separate groups (Interop WG, RefStack) sounds overkill anyway, and with the introduction of SIGs we have been recentering project teams on upstream code production. -- Thierry Carrez (ttx) From cdent+os at anticdent.org Thu Aug 2 09:18:56 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 2 Aug 2018 10:18:56 +0100 (BST) Subject: [openstack-dev] [placement] #openstack-placement IRC channel requires registered nicks Message-ID: I thought I should post a message here for visibility that yesterday we made the openstack-placement IRC channel +r so that the recent spate of spammers could be blocked. This means that you must have a registered nick to gain access to the channel.
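For anyone who has not registered a nick before, the flow boils down to two NickServ commands typed into your IRC client (angle-bracket values are placeholders; the freenode knowledge base has the authoritative steps):

```
/msg NickServ REGISTER <password> <email-address>
  (freenode then mails a confirmation code to <email-address>)
/msg NickServ VERIFY REGISTER <your-nick> <confirmation-code>
```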
There's information on how to register at: https://freenode.net/kb/answer/registration Plenty of other channels have been doing the same thing, see: https://etherpad.openstack.org/p/freenode-plus-r-08-2018 -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From oaanson at gmail.com Thu Aug 2 09:56:37 2018 From: oaanson at gmail.com (Omer Anson) Date: Thu, 2 Aug 2018 12:56:37 +0300 Subject: [openstack-dev] [all][election] PTL nominations are now closed In-Reply-To: <5e4754db-a601-afa0-2690-459373fdc7c4@openstack.org> References: <20180731235512.GB15918@thor.bakeyournoodle.com> <5e4754db-a601-afa0-2690-459373fdc7c4@openstack.org> Message-ID: Hi, I'm sorry for the inconvenience. I completely missed the nomination period. Is it possible to send in a late nomination for Dragonflow? Thanks, Omer Anson. On Thu, 2 Aug 2018 at 11:59, Thierry Carrez wrote: > Tony Breeds wrote: > > [...] > > There are 8 projects without candidates, so according to this > > resolution[1], the TC will have to decide how the following > > projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm, > > RefStack, Searchlight, Trove and Winstackers. > > Here is my take on that... > > Packaging_Rpm has a late candidate (Dirk Mueller). We always have a few > teams per cycle that miss the election call, that would fall under that. > > Trove had a volunteer (Dariusz Krol), but that person did not fill the > requirements for candidates. Given that the previous PTL (Zhao Chao) > plans to stay around to help onboarding the new contributors, I'd > support appointing Dariusz. > > I suspect Freezer falls in the same bucket as Packaging_Rpm and we > should get a candidate there. I would reach out to caoyuan see if they > would be interested in steeping up. > > LOCI is also likely in the same bucket. However, given that it's a > deployment project, if we can't get anyone to step up and guarantee some > level of currentness, we should consider removing it from the "official" > list. 
> > Dragonflow is a bit in the LOCI case. It feels like a miss too, but if > it's not, given that it's an add-on project that runs within Neutron, I > would consider removing it from the "official" list if we can't find > anyone to step up. > > For Winstackers and Searchlight, those are low-activity teams (18 and 13 > commits), which brings the question of PTL workload for feature-complete > projects. > > Finally, RefStack: I feel like this should be wrapped into an > Interoperability SIG, since that project team is not producing > "OpenStack", but helping fostering OpenStack interoperability. Having > separate groups (Interop WG, RefStack) sounds overkill anyway, and with > the introduction of SIGs we have been recentering project teams on > upstream code production. > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Thu Aug 2 10:10:40 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 2 Aug 2018 11:10:40 +0100 (BST) Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> Message-ID: Responses to some of Jay's comments below, but first, to keep this on track with the original goal of the thread ("How to debug no valid host failures with placement") before I drag it to the side, some questions. 
When people ask for something like what Chris mentioned: hosts with enough CPU: hosts that also have enough disk: hosts that also have enough memory: hosts that also meet extra spec host aggregate keys: hosts that also meet image properties host aggregate keys: hosts that also have requested PCI devices: What are the operational questions that people are trying to answer with those results? Is the idea to be able to have some insight into the resource usage and reporting on and from the various hosts and discover that things are being used differently than thought? Is placement a resource monitoring tool, or is it simpler and more focused than that? Or is it that we might have flavors or other resource-requesting constraints that have bad logic and we want to see at what stage the failure occurs? I don't know and I haven't really seen it stated explicitly here, and knowing it would help. Do people want info like this for requests as they happen, or to be able to go back later and try the same request again with some flag on that says: "diagnose what happened"? Or to put it another way: before we design something that provides the information above, which is a solution to an undescribed problem, can we describe the problem more completely first to make sure that the solution we get is the right one? The thing above, that set of information, is context free. On Wed, 1 Aug 2018, Jay Pipes wrote: > On 08/01/2018 02:02 PM, Chris Friesen wrote: >> I think the only way to get useful info on a failure would be to break down >> the huge SQL statement into subclauses and store the results of the >> intermediate queries. > > This is a good idea and something that can be done. I can see how it would be a good idea from an explicit debugging standpoint, but is it a good idea on all fronts?
From the very early days when placement was just a thing under your pen on whiteboards, we were trying to achieve something that wasn't the FilterScheduler but achieved efficiencies and some measure of black-boxed-ness by being as near as possible to a single giant SQL statement as we could get it. Do we want to get too far away from that? Another thing to consider is that in a large installation, logging these intermediate results (if done in the listing-hosts way indicated above) would produce very large output without some truncation or "only if < N results" guards. Would another approach be to make it easy to replay a resource request that incrementally retries the request with a less constrained set of requirements (expanding by some heuristic we design)? Something on a different URI where the response is in neither of the forms that /allocation_candidates or /resource_providers returns, but allows the caller to know where the boundary between results and no results is. One could also imagine a non-HTTP interface to placement that outputs something a bit like 'top': a regularly updating scan of resource usage. But it's hard to know if that is even relevant without more info as asked above. It could very well be that explicit debugging of filtering stages is the right way to go, but we should look closely at the costs of doing so. Part of me is all: Please, yes, let's do it, it would make the code _so_ much more comprehensible. But there were reasons we made the complex SQL in the first place.
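To make that "replay with a less constrained set of requirements" idea a bit more concrete, here is a purely illustrative sketch: apply the filters one at a time, in order, and report the first stage at which no hosts remain. Every name in it is invented for this email; none of it is placement's real code or API.

```python
# Hypothetical "replay" diagnostic. None of these names exist in
# placement; this only illustrates the shape of the idea.

def diagnose_no_valid_host(all_hosts, constraints):
    """Apply constraints in order; report where candidates vanish.

    ``constraints`` is an ordered list of (label, predicate) pairs,
    e.g. ("enough CPU", lambda host: host["vcpu_free"] >= 4).
    """
    candidates = list(all_hosts)
    for label, predicate in constraints:
        remaining = [host for host in candidates if predicate(host)]
        if not remaining:
            return "no hosts left after filter: %s (%d before it)" % (
                label, len(candidates))
        candidates = remaining
    return "%d candidate(s) survived all filters" % len(candidates)
```

The cost questions stay the same however the loop is expressed: every intermediate list has to be computed and (potentially) stored, which is exactly the work the single giant SQL statement was designed to collapse.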
We effectively say "no" to features all the time anyway, because we've generated software to which it takes 3 years to add something like placement to anyway, for very little appreciable gain in that time (Yes there are many improvements under the surface and with things like race conditions, but in terms of what can be accomplished with the new tooling, we're still not there). If our labour is indeed valuable we can choose to exercise greater control over its direction. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From glongwave at gmail.com Thu Aug 2 11:30:48 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Thu, 2 Aug 2018 19:30:48 +0800 Subject: [openstack-dev] =?utf-8?q?=5Boslo=5D_proposing_Mois=C3=A9s_Guimar?= =?utf-8?q?=C3=A3es_for_oslo=2Econfig_core?= In-Reply-To: References: <1533129742-sup-2007@lrrr.local> Message-ID: +1 2018-08-01 23:38 GMT+08:00 John Dennis : > On 08/01/2018 09:27 AM, Doug Hellmann wrote: > >> Moisés Guimarães (moguimar) did quite a bit of work on oslo.config >> during the Rocky cycle to add driver support. Based on that work, >> and a discussion we have had since then about general cleanup needed >> in oslo.config, I think he would make a good addition to the >> oslo.config review team. >> >> Please indicate your approval or concerns with +1/-1. >> > > +1 > > > -- > John Dennis > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jim at jimrollenhagen.com Thu Aug 2 12:07:14 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 2 Aug 2018 08:07:14 -0400 Subject: [openstack-dev] [placement] #openstack-placement IRC channel requires registered nicks In-Reply-To: References: Message-ID: On Thu, Aug 2, 2018 at 5:18 AM, Chris Dent wrote: > > > I thought I should post a message here for visibility that yesterday > we made the openstack-placement IRC channel +r so that the recent > spate of spammers could be blocked. > > This means that you must have a registered nick to gain access to > the channel. There's information on how to register at: > > https://freenode.net/kb/answer/registration > > Plenty of other channels have been doing the same thing, see: > > https://etherpad.openstack.org/p/freenode-plus-r-08-2018 In case you (or others) missed it, infra actually went through and made all official OpenStack channels +r. They're also set to redirect to #openstack-unregistered where there's a message about what's going on and people there to help navigate registering a nick. // jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sebastian at tipit.net Thu Aug 2 12:45:47 2018 From: sebastian at tipit.net (Sebastian Marcet) Date: Thu, 2 Aug 2018 09:45:47 -0300 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <83a0d94f-dc74-e1c1-951b-1fcec2fca6f1@gmail.com> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> <5B5F295F.3090608@openstack.org> <1f5afd62cc3a9a8923586a404e707366@arcor.de> <16e69b47c8b71bf6f920ab8f3df61928@arcor.de> <83a0d94f-dc74-e1c1-951b-1fcec2fca6f1@gmail.com> Message-ID: Hello Ian, due to the nature of the pot file format and mechanics, we can't add the translators as msgid entries, because those would only exist in the corresponding po file for each language. That said, I think we could create a solution using both [1] and [2]: * adding a "TRANSLATORS" msgid to the pot file, so I could get that string per language * adding translators' names as po file metadata, as stated on [2], so I could parse and display them per language regards On Wed, Aug 1, 2018 at 11:03 PM, Ian Y. Choi wrote: > Hello Sebastian, > > Korean has also currently 100% translation now. > About two weeks ago, there were a discussion how to include the list of > translators per translated document. > > My proposal is mentioned in [1] - do you think it is a good idea and it is > under implementation, > or parsing the name of translators in header lines on po files (e.g., four > lines on [2]) would be better idea? > > > With many thanks, > > /Ian > > [1] http://eavesdrop.openstack.org/irclogs/%23openstack-i18n/% > 23openstack-i18n.2018-07-19.log.html#t2018-07-19T15:09:46 > [2] http://git.openstack.org/cgit/openstack/i18n/tree/doc/source > /locale/de/LC_MESSAGES/doc.po#n1 > > > Frank Kloeker wrote on 7/31/2018 6:39 PM: >> Hi Sebastian, >> >> okay, it's translated now.
In Edge whitepaper is the problem with >> XML-Parsing of the term AT&T. Don't know how to escape this. Maybe you will >> see the warning during import too. >> >> kind regards >> >> Frank >> >> Am 2018-07-30 20:09, schrieb Sebastian Marcet: >> >>> Hi Frank, >>> i was double checking pot file and realized that original pot missed >>> some parts of the original paper (subsections of the paper) apologizes >>> on that >>> i just re uploaded an updated pot file with missing subsections >>> >>> regards >>> >>> On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker wrote: >>> >>> Hi Jimmy, >>>> >>>> from the GUI I'll get this link: >>>> >>>> https://translate.openstack.org/rest/file/translation/edge- >>> computing/pot-translation/de/po?docId=cloud-edge-computing- >>> beyond-the-data-center >>> >>>> [1] >>>> >>>> paper version are only in container whitepaper: >>>> >>>> >>>> https://translate.openstack.org/rest/file/translation/levera >>> ging-containers-openstack/paper/de/po?docId=leveraging- >>> containers-and-openstack >>> >>>> [2] >>>> >>>> In general there is no group named papers >>>> >>>> kind regards >>>> >>>> Frank >>>> >>>> Am 2018-07-30 17:06, schrieb Jimmy McArthur: >>>> Frank, >>>> >>>> We're getting a 404 when looking for the pot file on the Zanata API: >>>> >>>> https://translate.openstack.org/rest/file/translation/papers >>> /papers/de/po?docId=edge-computing >>> >>>> [3] >>>> >>>> As a result, we can't pull the po files. Any idea what might be >>>> happening? >>>> >>>> Seeing the same thing with both papers... >>>> >>>> Thank you, >>>> Jimmy >>>> >>>> Frank Kloeker wrote: >>>> Hi Jimmy, >>>> >>>> Korean and German version are now done on the new format. Can you >>>> check publishing? 
>>>> >>>> thx >>>> >>>> Frank >>>> >>>> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >>>> Hi all - >>>> >>>> Follow up on the Edge paper specifically: >>>> >>>> https://translate.openstack.org/iteration/view/edge-computin >>> g/pot-translation/documents?dswid=-3192 >>> >>>> [4] This is now available. As I mentioned on IRC this morning, it >>>> should >>>> be VERY close to the PDF. Probably just needs a quick review. >>>> >>>> Let me know if I can assist with anything. >>>> >>>> Thank you to i18n team for all of your help!!! >>>> >>>> Cheers, >>>> Jimmy >>>> >>>> Jimmy McArthur wrote: >>>> Ian raises some great points :) I'll try to address below... >>>> >>>> Ian Y. Choi wrote: >>>> Hello, >>>> >>>> When I saw overall translation source strings on container >>>> whitepaper, I would infer that new edge computing whitepaper >>>> source strings would include HTML markup tags. >>>> One of the things I discussed with Ian and Frank in Vancouver is >>>> the expense of recreating PDFs with new translations. It's >>>> prohibitively expensive for the Foundation as it requires design >>>> resources which we just don't have. As a result, we created the >>>> Containers whitepaper in HTML, so that it could be easily updated >>>> w/o working with outside design contractors. I indicated that we >>>> would also be moving the Edge paper to HTML so that we could prevent >>>> that additional design resource cost. >>>> On the other hand, the source strings of edge computing whitepaper >>>> which I18n team previously translated do not include HTML markup >>>> tags, since the source strings are based on just text format. >>>> The version that Akihiro put together was based on the Edge PDF, >>>> which we unfortunately didn't have the resources to implement in the >>>> same format. >>>> >>>> I really appreciate Akihiro's work on RST-based support on >>>> publishing translated edge computing whitepapers, since >>>> translators do not have to re-translate all the strings. 
>>>> I would like to second this. It took a lot of initiative to work on >>>> the RST-based translation. At the moment, it's just not usable for >>>> the reasons mentioned above. >>>> On the other hand, it seems that I18n team needs to investigate on >>>> translating similar strings of HTML-based edge computing whitepaper >>>> source strings, which would discourage translators. >>>> Can you expand on this? I'm not entirely clear on why the HTML >>>> based translation is more difficult. >>>> >>>> That's my point of view on translating edge computing whitepaper. >>>> >>>> For translating container whitepaper, I want to further ask the >>>> followings since *I18n-based tools* >>>> would mean for translators that translators can test and publish >>>> translated whitepapers locally: >>>> >>>> - How to build translated container whitepaper using original >>>> Silverstripe-based repository? >>>> https://docs.openstack.org/i18n/latest/tools.html [5] describes >>>> well how to build translated artifacts for RST-based OpenStack >>>> repositories >>>> but I could not find the way how to build translated container >>>> whitepaper with translated resources on Zanata. >>>> This is a little tricky. It's possible to set up a local version >>>> of the OpenStack website >>>> >>>> (https://github.com/OpenStackweb/openstack-org/blob/master/ >>> installation.md >>> >>>> [6]). However, we have to manually ingest the po files as they are >>>> completed and then push them out to production, so that wouldn't do >>>> much to help with your local build. I'm open to suggestions on how >>>> we can make this process easier for the i18n team. >>>> >>>> Thank you, >>>> Jimmy >>>> >>>> With many thanks, >>>> >>>> /Ian >>>> >>>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>> Frank, >>>> >>>> I'm sorry to hear about the displeasure around the Edge paper. 
As >>>> mentioned in a prior thread, the RST format that Akihiro worked did >>>> not work with the Zanata process that we have been using with our >>>> CMS. Additionally, the existing EDGE page is a PDF, so we had to >>>> build a new template to work with the new HTML whitepaper layout we >>>> created for the Containers paper. I outlined this in the thread " >>>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>>> Whitepaper Translation" on 6/25/18 and mentioned we would be ready >>>> with the template around 7/13. >>>> >>>> We completed the work on the new whitepaper template and then put >>>> out the pot files on Zanata so we can get the po language files >>>> back. If this process is too cumbersome for the translation team, >>>> I'm open to discussion, but right now our entire translation process >>>> is based on the official OpenStack Docs translation process outlined >>>> by the i18n team: >>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html [7] >>>> >>>> Again, I realize Akihiro put in some work on his own proposing the >>>> new translation type. If the i18n team is moving to this format >>>> instead, we can work on redoing our process. >>>> >>>> Please let me know if I can clarify further. >>>> >>>> Thanks, >>>> Jimmy >>>> >>>> Frank Kloeker wrote: >>>> Hi Jimmy, >>>> >>>> permission was added for you and Sebastian. The Container Whitepaper >>>> is on the Zanata frontpage now. But we removed Edge Computing >>>> whitepaper last week because there is a kind of displeasure in the >>>> team since the results of translation are still not published beside >>>> Chinese version. It would be nice if we have a commitment from the >>>> Foundation that results are published in a specific timeframe. This >>>> includes your requirements until the translation should be >>>> available. >>>> >>>> thx Frank >>>> >>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>> Sorry, I should have also added... 
we additionally need permissions >>>> so >>>> that we can add the a new version of the pot file to this project: >>>> >>>> https://translate.openstack.org/project/view/edge-computing/ >>> versions?dswid=-7835 >>> >>>> [8] Thanks! >>>> Jimmy >>>> >>>> Jimmy McArthur wrote: >>>> Hi all - >>>> >>>> We have both of the current whitepapers up and available for >>>> translation. Can we promote these on the Zanata homepage? >>>> >>>> >>>> https://translate.openstack.org/project/view/leveraging-cont >>> ainers-openstack?dswid=5684 >>> >>>> [9] >>>> >>>> https://translate.openstack.org/iteration/view/edge-computin >>> g/master/documents?dswid=5684 >>> >>>> [10] Thanks all! >>>> Jimmy >>>> >>>> >>>> __________________________________________________________________________ >>> >>> >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> [12] >>>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] >>> >>> >>> >>> 
Links: >>> ------ >>> [1] >>> https://translate.openstack.org/rest/file/translation/edge- >>> computing/pot-translation/de/po?docId=cloud-edge-computing- >>> beyond-the-data-center >>> [2] >>> https://translate.openstack.org/rest/file/translation/levera >>> ging-containers-openstack/paper/de/po?docId=leveraging- >>> containers-and-openstack >>> [3] >>> https://translate.openstack.org/rest/file/translation/papers >>> /papers/de/po?docId=edge-computing >>> [4] >>> https://translate.openstack.org/iteration/view/edge-computin >>> g/pot-translation/documents?dswid=-3192 >>> [5] https://docs.openstack.org/i18n/latest/tools.html >>> [6] https://github.com/OpenStackweb/openstack-org/blob/master/ >>> installation.md >>> [7] https://docs.openstack.org/i18n/latest/en_GB/tools.html >>> [8] >>> https://translate.openstack.org/project/view/edge-computing/ >>> versions?dswid=-7835 >>> [9] >>> https://translate.openstack.org/project/view/leveraging-cont >>> ainers-openstack?dswid=5684 >>> [10] >>> https://translate.openstack.org/iteration/view/edge-computin >>> g/master/documents?dswid=5684 >>> [11] http://OpenStack-dev-request at lists.openstack.org?subject:uns >>> ubscribe >>> [12] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Thu Aug 2 13:13:21 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 2 Aug 2018 13:13:21 +0000 Subject: [openstack-dev] [all][election] PTL nominations are now closed In-Reply-To: <5e4754db-a601-afa0-2690-459373fdc7c4@openstack.org> References: <20180731235512.GB15918@thor.bakeyournoodle.com> <5e4754db-a601-afa0-2690-459373fdc7c4@openstack.org> Message-ID: <20180802131321.3v3uhulwklryzqg7@yuggoth.org> On 2018-08-02 10:58:53 +0200 (+0200), Thierry Carrez wrote: [...] > Finally, RefStack: I feel like this should be wrapped into an > Interoperability SIG, since that project team is not producing > "OpenStack", but helping fostering OpenStack interoperability. > Having separate groups (Interop WG, RefStack) sounds overkill > anyway, and with the introduction of SIGs we have been recentering > project teams on upstream code production. That was one of the possibilities I discussed with them during their meeting a month ago: http://eavesdrop.openstack.org/irclogs/%23refstack/%23refstack.2018-07-03.log.html#t2018-07-03T17:05:43 Election official hat off and TC Refstack liaison hat on, I think if Chris Hoge doesn't volunteer to act as PTL this cycle to oversee shutting down the team and reassigning its deliverables, then we need to help them fast-track that nowish and not appoint a Stein cycle PTL. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Thu Aug 2 13:31:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 02 Aug 2018 09:31:29 -0400 Subject: [openstack-dev] [all][election] PTL nominations are now closed In-Reply-To: <20180802131321.3v3uhulwklryzqg7@yuggoth.org> References: <20180731235512.GB15918@thor.bakeyournoodle.com> <5e4754db-a601-afa0-2690-459373fdc7c4@openstack.org> <20180802131321.3v3uhulwklryzqg7@yuggoth.org> Message-ID: <1533216619-sup-8639@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-08-02 13:13:21 +0000: > On 2018-08-02 10:58:53 +0200 (+0200), Thierry Carrez wrote: > [...] > > Finally, RefStack: I feel like this should be wrapped into an > > Interoperability SIG, since that project team is not producing > > "OpenStack", but helping fostering OpenStack interoperability. > > Having separate groups (Interop WG, RefStack) sounds overkill > > anyway, and with the introduction of SIGs we have been recentering > > project teams on upstream code production. > > That was one of the possibilities I discussed with them during their > meeting a month ago: > > http://eavesdrop.openstack.org/irclogs/%23refstack/%23refstack.2018-07-03.log.html#t2018-07-03T17:05:43 > > Election official hat off and TC Refstack liaison hat on, I think if > Chris Hoge doesn't volunteer to act as PTL this cycle to oversee > shutting down the team and reassigning its deliverables, then we > need to help them fast-track that nowish and not appoint a Stein > cycle PTL. This came up at a joint leadership meeting right after we created SIGs and the Interop WG was reluctant to make any structural changes at the time because they had just gone through a renaming process for the working group. Changing "WG" to "SIG" feels much lighter weight, so maybe we can move ahead with that now. 
Doug From sfinucan at redhat.com Thu Aug 2 14:03:37 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 02 Aug 2018 15:03:37 +0100 Subject: [openstack-dev] =?iso-8859-1?q?=5Boslo=5D_proposing_Mois=E9s_Guim?= =?iso-8859-1?q?ar=E3es_for_oslo=2Econfig_core?= In-Reply-To: <1533129742-sup-2007@lrrr.local> References: <1533129742-sup-2007@lrrr.local> Message-ID: <9356f1138a421dda4078bcf9495239860772d578.camel@redhat.com> On Wed, 2018-08-01 at 09:27 -0400, Doug Hellmann wrote: > Moisés Guimarães (moguimar) did quite a bit of work on oslo.config > during the Rocky cycle to add driver support. Based on that work, > and a discussion we have had since then about general cleanup needed > in oslo.config, I think he would make a good addition to the > oslo.config review team. > > Please indicate your approval or concerns with +1/-1. > > Doug +1. The more the merrier. Stephen From sfinucan at redhat.com Thu Aug 2 14:11:25 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 02 Aug 2018 15:11:25 +0100 Subject: [openstack-dev] Paste unmaintained Message-ID: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> tl;dr: It seems Paste [1] may be entering unmaintained territory and we may need to do something about it. I was cleaning up some warning messages that nova was issuing this morning and noticed a few coming from Paste. I was going to draft a PR to fix this, but a quick browse through the Bitbucket project [2] suggests there has been little to no activity on that for well over a year. One particular open PR - "Python 3.7 support" [3] - is particularly concerning, given the recent mailing list threads on the matter. Given that multiple projects are using this, we may want to think about reaching out to the author and seeing if there's anything we can do to at least keep this maintained going forward. I've talked to cdent about this already but if anyone else has ideas, please let me know.
Stephen [1] https://pypi.org/project/Paste/ [2] https://bitbucket.org/ianb/paste/ [3] https://bitbucket.org/ianb/paste/pull-requests/41 From doug at doughellmann.com Thu Aug 2 14:16:01 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 02 Aug 2018 10:16:01 -0400 Subject: [openstack-dev] [all][election] PTL nominations are now closed In-Reply-To: References: <20180731235512.GB15918@thor.bakeyournoodle.com> <5e4754db-a601-afa0-2690-459373fdc7c4@openstack.org> Message-ID: <1533219327-sup-3532@lrrr.local> Excerpts from Omer Anson's message of 2018-08-02 12:56:37 +0300: > Hi, > > I'm sorry for the inconvenience. I completely missed the nomination period. > Is it possible to send in a late nomination for Dragonflow? At this point the TC is going to be looking for a volunteer, so if there is one please let us know. Doug > > Thanks, > Omer Anson. > > On Thu, 2 Aug 2018 at 11:59, Thierry Carrez wrote: > > > Tony Breeds wrote: > > > [...] > > > There are 8 projects without candidates, so according to this > > > resolution[1], the TC will have to decide how the following > > > projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm, > > > RefStack, Searchlight, Trove and Winstackers. > > > > Here is my take on that... > > > > Packaging_Rpm has a late candidate (Dirk Mueller). We always have a few > > teams per cycle that miss the election call, that would fall under that. > > > > Trove had a volunteer (Dariusz Krol), but that person did not fill the > > requirements for candidates. Given that the previous PTL (Zhao Chao) > > plans to stay around to help onboarding the new contributors, I'd > > support appointing Dariusz. > > > > I suspect Freezer falls in the same bucket as Packaging_Rpm and we > > should get a candidate there. I would reach out to caoyuan see if they > > would be interested in steeping up. > > > > LOCI is also likely in the same bucket. 
However, given that it's a > > deployment project, if we can't get anyone to step up and guarantee some > > level of currentness, we should consider removing it from the "official" > > list. > > > > Dragonflow is a bit in the LOCI case. It feels like a miss too, but if > > it's not, given that it's an add-on project that runs within Neutron, I > > would consider removing it from the "official" list if we can't find > > anyone to step up. > > > > For Winstackers and Searchlight, those are low-activity teams (18 and 13 > > commits), which brings the question of PTL workload for feature-complete > > projects. > > > > Finally, RefStack: I feel like this should be wrapped into an > > Interoperability SIG, since that project team is not producing > > "OpenStack", but helping fostering OpenStack interoperability. Having > > separate groups (Interop WG, RefStack) sounds overkill anyway, and with > > the introduction of SIGs we have been recentering project teams on > > upstream code production. > > > > -- > > Thierry Carrez (ttx) > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From doug at doughellmann.com Thu Aug 2 14:19:02 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 02 Aug 2018 10:19:02 -0400 Subject: [openstack-dev] [all][election] PTL nominations are now closed In-Reply-To: <5e4754db-a601-afa0-2690-459373fdc7c4@openstack.org> References: <20180731235512.GB15918@thor.bakeyournoodle.com> <5e4754db-a601-afa0-2690-459373fdc7c4@openstack.org> Message-ID: <1533219428-sup-5277@lrrr.local> Excerpts from Thierry Carrez's message of 2018-08-02 10:58:53 +0200: > Tony Breeds wrote: > > [...] 
> > There are 8 projects without candidates, so according to this > > resolution[1], the TC will have to decide how the following > > projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm, > > RefStack, Searchlight, Trove and Winstackers. > > Here is my take on that... > > Packaging_Rpm has a late candidate (Dirk Mueller). We always have a few > teams per cycle that miss the election call, that would fall under that. > > Trove had a volunteer (Dariusz Krol), but that person did not fill the > requirements for candidates. Given that the previous PTL (Zhao Chao) > plans to stay around to help onboarding the new contributors, I'd > support appointing Dariusz. > > I suspect Freezer falls in the same bucket as Packaging_Rpm and we > should get a candidate there. I would reach out to caoyuan see if they > would be interested in steeping up. > > LOCI is also likely in the same bucket. However, given that it's a > deployment project, if we can't get anyone to step up and guarantee some > level of currentness, we should consider removing it from the "official" > list. > > Dragonflow is a bit in the LOCI case. It feels like a miss too, but if > it's not, given that it's an add-on project that runs within Neutron, I > would consider removing it from the "official" list if we can't find > anyone to step up. > > For Winstackers and Searchlight, those are low-activity teams (18 and 13 > commits), which brings the question of PTL workload for feature-complete > projects. Even for feature-complete projects we need to know how to reach the maintainers, otherwise I feel like we would consider the project unmaintained, wouldn't we? > > Finally, RefStack: I feel like this should be wrapped into an > Interoperability SIG, since that project team is not producing > "OpenStack", but helping fostering OpenStack interoperability. 
Having > separate groups (Interop WG, RefStack) sounds overkill anyway, and with > the introduction of SIGs we have been recentering project teams on > upstream code production. > From chris.friesen at windriver.com Thu Aug 2 14:27:09 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 2 Aug 2018 08:27:09 -0600 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <5B6297F5.9050507@fastmail.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <5B6297F5.9050507@fastmail.com> Message-ID: <5B6314BD.5030106@windriver.com> On 08/01/2018 11:34 PM, Joshua Harlow wrote: > And I would be able to say request the explanation for a given request id > (historical even) so that analysis could be done post-change and pre-change (say > I update the algorithm for selection) so that the effects of alternations to > said decisions could be determined. This would require storing a snapshot of all resources prior to processing every request...seems like that could add overhead and increase storage consumption. Chris From doug at doughellmann.com Thu Aug 2 14:27:51 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 02 Aug 2018 10:27:51 -0400 Subject: [openstack-dev] Paste unmaintained In-Reply-To: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> Message-ID: <1533219691-sup-5515@lrrr.local> Excerpts from Stephen Finucane's message of 2018-08-02 15:11:25 +0100: > tl;dr: It seems Paste [1] may be entering unmaintained territory and we > may need to do something about it. > > I was cleaning up some warning messages that nova was issuing this > morning and noticed a few coming from Paste. 
I was going to draft a PR > to fix this, but a quick browse through the Bitbucket project [2] > suggests there has been little to no activity on that for well over a > year. One particular open PR - "Python 3.7 support" - is particularly > concerning, given the recent mailing list threads on the matter. > > Given that multiple projects are using this, we may want to think about > reaching out to the author and seeing if there's anything we can do to > at least keep this maintained going forward. I've talked to cdent about > this already but if anyone else has ideas, please let me know. > > Stephen > > [1] https://pypi.org/project/Paste/ > [2] https://bitbucket.org/ianb/paste/ > [3] https://bitbucket.org/ianb/paste/pull-requests/41 > The last I heard, a few years ago Ian moved away from Python to JavaScript as part of his work at Mozilla. The support around paste.deploy has been sporadic since then, and was one of the reasons we discussed a goal of dropping paste.ini as a configuration file. Do we have a real sense of how many of the projects below, which list Paste in requirements.txt, actually use it directly or rely on it for configuration? Doug $ beagle search --ignore-case --file requirements.txt 'paste[><=! 
]' +----------------------------------------+--------------------------------------------------------+------+--------------------+ | Repository | Filename | Line | Text | +----------------------------------------+--------------------------------------------------------+------+--------------------+ | airship-armada | requirements.txt | 8 | Paste>=2.0.3 | | airship-deckhand | requirements.txt | 12 | Paste # MIT | | anchor | requirements.txt | 9 | Paste # MIT | | apmec | requirements.txt | 6 | Paste>=2.0.2 # MIT | | barbican | requirements.txt | 22 | Paste>=2.0.2 # MIT | | cinder | requirements.txt | 37 | Paste>=2.0.2 # MIT | | congress | requirements.txt | 11 | Paste>=2.0.2 # MIT | | designate | requirements.txt | 25 | Paste>=2.0.2 # MIT | | ec2-api | requirements.txt | 20 | Paste # MIT | | freezer-api | requirements.txt | 8 | Paste>=2.0.2 # MIT | | gce-api | requirements.txt | 16 | Paste>=2.0.2 # MIT | | glance | requirements.txt | 31 | Paste>=2.0.2 # MIT | | glare | requirements.txt | 29 | Paste>=2.0.2 # MIT | | karbor | requirements.txt | 28 | Paste>=2.0.2 # MIT | | kingbird | requirements.txt | 7 | Paste>=2.0.2 # MIT | | manila | requirements.txt | 30 | Paste>=2.0.2 # MIT | | meteos | requirements.txt | 29 | Paste # MIT | | monasca-events-api | requirements.txt | 6 | Paste # MIT | | monasca-log-api | requirements.txt | 6 | Paste>=2.0.2 # MIT | | murano | requirements.txt | 28 | Paste>=2.0.2 # MIT | | neutron | requirements.txt | 6 | Paste>=2.0.2 # MIT | | nova | requirements.txt | 19 | Paste>=2.0.2 # MIT | | novajoin | requirements.txt | 6 | Paste>=2.0.2 # MIT | | oslo.service | requirements.txt | 17 | Paste>=2.0.2 # MIT | | requirements | global-requirements.txt | 187 | Paste # MIT | | searchlight | requirements.txt | 27 | Paste>=2.0.2 # MIT | | tacker | requirements.txt | 6 | Paste>=2.0.2 # MIT | | tatu | requirements.txt | 18 | Paste # MIT | | tricircle | requirements.txt | 7 | Paste>=2.0.2 # MIT | | trio2o | requirements.txt | 7 | Paste # MIT | | trove | 
requirements.txt | 11 | Paste>=2.0.2 # MIT | | upstream-institute-virtual-environment | elements/upstream-training/static/tmp/requirements.txt | 147 | Paste==2.0.3 | +----------------------------------------+--------------------------------------------------------+------+--------------------+ From sean.mcginnis at gmx.com Thu Aug 2 14:31:05 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 2 Aug 2018 09:31:05 -0500 Subject: [openstack-dev] [all][election] PTL nominations are now closed In-Reply-To: <1533219428-sup-5277@lrrr.local> References: <20180731235512.GB15918@thor.bakeyournoodle.com> <5e4754db-a601-afa0-2690-459373fdc7c4@openstack.org> <1533219428-sup-5277@lrrr.local> Message-ID: <20180802143104.GA32366@sm-workstation> > > > > Packaging_Rpm has a late candidate (Dirk Mueller). We always have a few > > teams per cycle that miss the election call, that would fall under that. > > +1 for appointing Dirk as PTL. > > Trove had a volunteer (Dariusz Krol), but that person did not fill the > > requirements for candidates. Given that the previous PTL (Zhao Chao) > > plans to stay around to help onboarding the new contributors, I'd > > support appointing Dariusz. > > I would be fine with this. But I also wonder if it might make sense to move Trove out of governance while they go through this transition so they have more leeway to evolve the project how they need to, with the expectation that if things get to a good and healthy point we can quickly re-accept the project as official. > > I suspect Freezer falls in the same bucket as Packaging_Rpm and we > > should get a candidate there. I would reach out to caoyuan see if they > > would be interested in steeping up. > > > > LOCI is also likely in the same bucket. However, given that it's a > > deployment project, if we can't get anyone to step up and guarantee some > > level of currentness, we should consider removing it from the "official" > > list. > > > > Dragonflow is a bit in the LOCI case. 
It feels like a miss too, but if > > it's not, given that it's an add-on project that runs within Neutron, I > > would consider removing it from the "official" list if we can't find > > anyone to step up. > > Omer has responded that the deadline was missed and he would like to continue as PTL. I think that is acceptable. (though unfortunate that it was missed) > > For Winstackers and Searchlight, those are low-activity teams (18 and 13 > > commits), which brings the question of PTL workload for feature-complete > > projects. > > Even for feature-complete projects we need to know how to reach the > maintainers, otherwise I feel like we would consider the project > unmaintained, wouldn't we? > I agree with Doug, I think there needs to be someone designated as the contact point for issues with the project. We've seen other "stable" things suddenly go unstable due to library updates or other external factors. I don't think Thierry was suggesting there not be a PTL for these, but for any potential PTL candidates they can know that the demands on their time to fill that role _should_ be pretty light. > > > > Finally, RefStack: I feel like this should be wrapped into an > > Interoperability SIG, since that project team is not producing > > "OpenStack", but helping fostering OpenStack interoperability. Having > > separate groups (Interop WG, RefStack) sounds overkill anyway, and with > > the introduction of SIGs we have been recentering project teams on > > upstream code production. > > > I agree this has gotten to the point where it probably now makes more sense to be owned by a SIG rather than being a full project team. 
From jeremyfreudberg at gmail.com Thu Aug 2 14:33:48 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Thu, 2 Aug 2018 10:33:48 -0400 Subject: [openstack-dev] Paste unmaintained In-Reply-To: <1533219691-sup-5515@lrrr.local> References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> <1533219691-sup-5515@lrrr.local> Message-ID: On Thu, Aug 2, 2018 at 10:27 AM, Doug Hellmann wrote: > Excerpts from Stephen Finucane's message of 2018-08-02 15:11:25 +0100: >> tl;dr: It seems Paste [1] may be entering unmaintained territory and we >> may need to do something about it. >> >> I was cleaning up some warning messages that nova was issuing this >> morning and noticed a few coming from Paste. I was going to draft a PR >> to fix this, but a quick browse through the Bitbucket project [2] >> suggests there has been little to no activity on that for well over a >> year. One particular open PR - "Python 3.7 support" - is particularly >> concerning, given the recent mailing list threads on the matter. >> >> Given that multiple projects are using this, we may want to think about >> reaching out to the author and seeing if there's anything we can do to >> at least keep this maintained going forward. I've talked to cdent about >> this already but if anyone else has ideas, please let me know. >> >> Stephen >> >> [1] https://pypi.org/project/Paste/ >> [2] https://bitbucket.org/ianb/paste/ >> [3] https://bitbucket.org/ianb/paste/pull-requests/41 >> > > The last I heard, a few years ago Ian moved away from Python to > JavaScript as part of his work at Mozilla. The support around > paste.deploy has been sporadic since then, and was one of the reasons > we discussed a goal of dropping paste.ini as a configuration file. > > Do we have a real sense of how many of the projects below, which > list Paste in requirements.txt, actually use it directly or rely > on it for configuration? > > Doug > > $ beagle search --ignore-case --file requirements.txt 'paste[><=! 
]' > +----------------------------------------+--------------------------------------------------------+------+--------------------+ > | Repository | Filename | Line | Text | > +----------------------------------------+--------------------------------------------------------+------+--------------------+ > | airship-armada | requirements.txt | 8 | Paste>=2.0.3 | > | airship-deckhand | requirements.txt | 12 | Paste # MIT | > | anchor | requirements.txt | 9 | Paste # MIT | > | apmec | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | barbican | requirements.txt | 22 | Paste>=2.0.2 # MIT | > | cinder | requirements.txt | 37 | Paste>=2.0.2 # MIT | > | congress | requirements.txt | 11 | Paste>=2.0.2 # MIT | > | designate | requirements.txt | 25 | Paste>=2.0.2 # MIT | > | ec2-api | requirements.txt | 20 | Paste # MIT | > | freezer-api | requirements.txt | 8 | Paste>=2.0.2 # MIT | > | gce-api | requirements.txt | 16 | Paste>=2.0.2 # MIT | > | glance | requirements.txt | 31 | Paste>=2.0.2 # MIT | > | glare | requirements.txt | 29 | Paste>=2.0.2 # MIT | > | karbor | requirements.txt | 28 | Paste>=2.0.2 # MIT | > | kingbird | requirements.txt | 7 | Paste>=2.0.2 # MIT | > | manila | requirements.txt | 30 | Paste>=2.0.2 # MIT | > | meteos | requirements.txt | 29 | Paste # MIT | > | monasca-events-api | requirements.txt | 6 | Paste # MIT | > | monasca-log-api | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | murano | requirements.txt | 28 | Paste>=2.0.2 # MIT | > | neutron | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | nova | requirements.txt | 19 | Paste>=2.0.2 # MIT | > | novajoin | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | oslo.service | requirements.txt | 17 | Paste>=2.0.2 # MIT | > | requirements | global-requirements.txt | 187 | Paste # MIT | > | searchlight | requirements.txt | 27 | Paste>=2.0.2 # MIT | > | tacker | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | tatu | requirements.txt | 18 | Paste # MIT | > | tricircle | requirements.txt | 7 | Paste>=2.0.2 # MIT 
| > | trio2o | requirements.txt | 7 | Paste # MIT | > | trove | requirements.txt | 11 | Paste>=2.0.2 # MIT | > | upstream-institute-virtual-environment | elements/upstream-training/static/tmp/requirements.txt | 147 | Paste==2.0.3 | > +----------------------------------------+--------------------------------------------------------+------+--------------------+ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev In the case of Sahara, our dependency on it comes through oslo.service. So I suspect there will be other projects in that camp too. And in Sahara's case, we really do rely on Paste, but would be happy to switch if a library with similar features was identified. From cdent+os at anticdent.org Thu Aug 2 14:36:16 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 2 Aug 2018 15:36:16 +0100 (BST) Subject: [openstack-dev] Paste unmaintained In-Reply-To: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> Message-ID: On Thu, 2 Aug 2018, Stephen Finucane wrote: > Given that multiple projects are using this, we may want to think about > reaching out to the author and seeing if there's anything we can do to > at least keep this maintained going forward. I've talked to cdent about > this already but if anyone else has ideas, please let me know. I've sent some exploratory email to Ian, the original author, to get a sense of where things are and whether there's an option for us (or if for some reason us wasn't okay, me) to adopt it. 
If email doesn't land I'll try again with other media I agree with the idea of trying to move away from using it, as mentioned elsewhere in this thread and in IRC, but it's not a simple step as at least in some projects we are using paste files as configuration that people are allowed (and do) change. Moving away from that is the hard part, not figuring out how to load WSGI middleware in a modern way. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From openstack at nemebean.com Thu Aug 2 14:40:50 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 2 Aug 2018 09:40:50 -0500 Subject: [openstack-dev] [oslo] PTL on PTO, no meeting next week Message-ID: I'm out next week and I'm told Monday is a bank holiday in some places, so we're going to skip the Oslo meeting for August 6th. Of course if you have issues you don't have to wait for a meeting to ask. The Oslo team is pretty much always around in #openstack-oslo. I should be back the following week so we'll resume the normal meeting schedule then. -Ben From chris.friesen at windriver.com Thu Aug 2 14:55:03 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 2 Aug 2018 08:55:03 -0600 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> Message-ID: <5B631B47.3080506@windriver.com> On 08/02/2018 04:10 AM, Chris Dent wrote: > When people ask for something like what Chris mentioned: > > hosts with enough CPU: > hosts that also have enough disk: > hosts that also have enough memory: > hosts that also meet extra spec host aggregate keys: > hosts that also meet image properties host aggregate keys: > hosts that also have requested PCI devices: > > What are the operational questions that people are trying to answer > with those results? 
Is the idea to be able to have some insight into > the resource usage and reporting on and from the various hosts and > discover that things are being used differently than thought? Is > placement a resource monitoring tool, or is it more simple and > focused than that? Or is it that we might have flavors or other > resource requesting constraints that have bad logic and we want to > see at what stage the failure is? I don't know and I haven't really > seen it stated explicitly here, and knowing it would help. > > Do people want info like this for requests as they happen, or to be > able to go back later and try the same request again with some flag > on that says: "diagnose what happened"? > > Or to put it another way: Before we design something that provides > the information above, which is a solution to an undescribed > problem, can we describe the problem more completely first to make > sure that what solution we get is the right one. The thing above, > that set of information, is context free. The reason my organization added additional failure-case logging to the pre-placement scheduler was that we were enabling complex features (cpu pinning, hugepages, PCI, SRIOV, CPU model requests, NUMA topology, etc.) and we were running into scheduling failures, and people were asking the question "why did this scheduler request fail to find a valid host?". There are a few reasons we might want to ask this question. Some of them include: 1) double-checking the scheduler is working properly when first using additional features 2) weeding out images/flavors with excessive or mutually-contradictory constraints 3) determining whether the cluster needs to be reconfigured to meet user requirements I suspect that something like "do the same request again with a debug flag" would cover many scenarios. I suspect its main weakness would be dealing with contention between short-lived entities. 
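[Editor's note] The per-stage host accounting discussed in this thread — logging how many hosts survive each resource check so a "no valid host" failure can be pinned to the stage where it occurred — can be illustrated with a small self-contained sketch. The helper names and toy inventory below are invented for illustration; this is not the actual nova or placement code:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s: %(message)s")
LOG = logging.getLogger("scheduler-debug")

def filter_hosts(hosts, filters):
    """Apply each (name, predicate) filter in turn, logging survivors.

    Mirrors the "hosts with enough CPU: N / hosts that also have
    enough disk: N" style of output quoted earlier in the thread.
    """
    for name, predicate in filters:
        hosts = [h for h in hosts if predicate(h)]
        LOG.debug("hosts that also meet %s: %d", name, len(hosts))
        if not hosts:
            LOG.debug("request fails at the %s stage", name)
            break
    return hosts

# Toy inventory: free vCPUs, free disk (GB), and free RAM (MB) per host.
hosts = [
    {"name": "c1", "vcpu": 4, "disk": 100, "ram": 8192},
    {"name": "c2", "vcpu": 1, "disk": 500, "ram": 2048},
    {"name": "c3", "vcpu": 8, "disk": 10, "ram": 16384},
]
request = {"vcpu": 2, "disk": 50, "ram": 4096}

survivors = filter_hosts(hosts, [
    ("CPU", lambda h: h["vcpu"] >= request["vcpu"]),
    ("disk", lambda h: h["disk"] >= request["disk"]),
    ("memory", lambda h: h["ram"] >= request["ram"]),
])
```

Running the same request again with such logging enabled (the "debug flag" idea above) would show, for example, that a request failed only at the disk stage, distinguishing contradictory flavor constraints from genuine capacity exhaustion.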
Chris From jaypipes at gmail.com Thu Aug 2 15:01:39 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 2 Aug 2018 11:01:39 -0400 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> Message-ID: <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> On 08/02/2018 01:12 AM, Alex Xu wrote: > 2018-08-02 4:09 GMT+08:00 Jay Pipes >: > > On 08/01/2018 02:02 PM, Chris Friesen wrote: > > On 08/01/2018 11:32 AM, melanie witt wrote: > > I think it's definitely a significant issue that > troubleshooting "No allocation > candidates returned" from placement is so difficult. > However, it's not > straightforward to log detail in placement when the request > for allocation > candidates is essentially "SELECT * FROM nodes WHERE cpu > usage < needed and disk > usage < needed and memory usage < needed" and the result is > returned from the API. > > > I think the only way to get useful info on a failure would be to > break down the huge SQL statement into subclauses and store the > results of the intermediate queries. > > > This is a good idea and something that can be done. > > > That sounds like you need separate sql query for each resource to get > the intermediate, will that be terrible performance than a single query > to get the final result? No, not necessarily. And what I'm referring to is doing a single query per "related resource/trait placement request group" -- which is pretty much what we're heading towards anyway. If we had a request for: GET /allocation_candidates? 
resources0=VCPU:1& required0=HW_CPU_X86_AVX2,!HW_CPU_X86_VMX& resources1=MEMORY_MB:1024 and logged something like this: DEBUG: [placement request ID XXX] request group 1 of 2 for 1 PCPU, requiring HW_CPU_X86_AVX2, forbidding HW_CPU_X86_VMX, returned 10 matches DEBUG: [placement request ID XXX] request group 2 of 2 for 1024 MEMORY_MB returned 3 matches that would at least go a step towards being more friendly for debugging a particular request's results. -jay From openstack at nemebean.com Thu Aug 2 15:07:05 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 2 Aug 2018 10:07:05 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <39e76be8-f3d2-09b6-54a7-b6c127f0aeb1@gmail.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <90a5944a-085b-4cd2-d1b2-b490fc466bee@gmail.com> <6b96c555-57d9-4fda-a061-10ae9cf49f09@nemebean.com> <39e76be8-f3d2-09b6-54a7-b6c127f0aeb1@gmail.com> Message-ID: <51750e57-9a9a-de88-0ab5-e63d8e511524@nemebean.com> On 08/01/2018 06:05 PM, Matt Riedemann wrote: > On 8/1/2018 3:55 PM, Ben Nemec wrote: >> I changed disk_allocation_ratio to 2.0 in the config file and it had >> no effect on the existing resource provider.  I assume that is because >> I had initially deployed with it unset, so I got 1.0, and when I later >> wanted to change it the provider already existed with the default value. > > Yeah I think so, unless the inventory changes we don't mess with > changing the allocation ratio. That makes sense. It would be nice if it were more explicitly stated in the option help, but I guess Jay's spec below would obsolete that behavior so maybe it's better to just pursue that. 
> >>   So in the past I could do the following: >> >> 1) Change disk_allocation_ratio in nova.conf >> 2) Restart nova-scheduler and/or nova-compute >> >> Now it seems like I need to do: >> >> 1) Change disk_allocation_ratio in nova.conf >> 2) Restart nova-scheduler, nova-compute, and nova-placement (or some >> subset of those?) > > Restarting the placement service wouldn't have any effect here. Wouldn't I need to restart it if I wanted new resource providers to use the new default? > >> 3) Use osc-placement to fix up the ratios on any existing resource >> providers > > Yeah that's what you'd need to do in this case. > > I believe Jay Pipes might have somewhere between 3 and 10 specs for the > allocation ratio / nova conf / placement inventory / aggregates problems > floating around, so he's probably best to weigh in here. Like: > https://review.openstack.org/#/c/552105/ > From sean.mcginnis at gmx.com Thu Aug 2 15:09:48 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 2 Aug 2018 10:09:48 -0500 Subject: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation Message-ID: <20180802150947.GA1359@sm-workstation> I'm wondering if someone on the infra team can give me some pointers on how to approach something, and looking for any general feedback as well. Background ========== We've had things like the DocImpact tag that could be added to commit messages that would tie into some automation to create a launchpad bug when that commit merged. While we had a larger docs team and out-of-tree docs, I think this really helped us make sure we didn't lose track of needed documentation updates. I was able to find part of how that is implemented in jeepyb: http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/notify_impact.py Current Challenge ================= Similar to the need to follow up with documentation, I've seen a lot of cases where projects have added features or made other changes that impact downstream consumers of that project. 
Most often, I've seen cases where something like python-cinderclient adds some functionality, but it is on projects like Horizon or python-openstackclient to proactively go out and discover those changes. Not only just seeking out those changes, but also evaluating whether a given change should have any impact on their project. So we've ended up in a lot of cases where either new functionality isn't made available through these interfaces until a cycle or two later, or probably worse, cases where something is now broken with no one aware of it until an actual end user hits a problem and files a bug. ClientImpact Plan ================= I've run this by a few people and it seems to have some support. Of course I'm open to any other suggestions. What I would like to do is add ClientImpact tag handling that could be added very similarly to DocImpact. The way I see it working is it would work in much the same way where projects can use this to add the tag to a commit message when they know it is something that will require additional work in OSC or Horizon (or others). Then when that commit merges, automation would create a launchpad bug and/or Storyboard story, including a default set of client projects. Perhaps we can find some way to make those impacted clients configurable by source project, but that could be a follow-on optimization. I am concerned that this could create some extra overhead for these projects. But my hope is it would be a quick evaluation by a bug triager in those projects where they can, hopefully, quickly determine if a change does not in fact impact them and just close the ones they don't think require any follow-on work. I do hope that this will save some time and speed things up overall for these projects to be notified that there is something that needs their attention without needing someone to take the time to actively go out and discover that.
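[Editor's note] The plan above — scan a merged commit's message for a ClientImpact line and file a bug/story against a default set of client projects — could look roughly like this minimal sketch. It is loosely modeled on jeepyb's DocImpact handling, but the function names, payload shape, and default target list here are illustrative assumptions, not the actual jeepyb code:

```python
import re

# Matches a "ClientImpact: <description>" line anywhere in a commit message.
CLIENTIMPACT_RE = re.compile(r"^ClientImpact:?\s*(?P<text>.*)$",
                             re.IGNORECASE | re.MULTILINE)

# Hypothetical default targets; a real hook might read these per source project.
DEFAULT_TARGETS = ("python-openstackclient", "horizon")

def extract_client_impact(commit_message):
    """Return the ClientImpact annotation text, or None if absent."""
    match = CLIENTIMPACT_RE.search(commit_message)
    return match.group("text").strip() if match else None

def build_notification(project, change_url, commit_message):
    """Build the bug/story payload a merge-time hook could file."""
    impact = extract_client_impact(commit_message)
    if impact is None:
        return None
    return {
        "title": "ClientImpact follow-up for %s" % project,
        "description": "%s\nSource change: %s" % (impact, change_url),
        "targets": list(DEFAULT_TARGETS),
    }
```

A merge-event listener would call `build_notification` for each merged change and file the returned payload with Launchpad or Storyboard, skipping changes where it returns None.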
Help Needed =========== From the bits I've found for the DocImpact handling, it looks like it should not be too much effort to implement the logic to handle a ClientImpact flag. But I have not been able to find all the moving parts that work together to perform that automation. If anyone has any background knowledge on how DocImpact is implemented and can give me a few pointers, I think I should be able to take it from there to get this implemented. Or if there is someone that knows this well and is interested in working on some of the implementation, that would be very welcome too! Sean From deepak.dt at gmail.com Thu Aug 2 15:11:26 2018 From: deepak.dt at gmail.com (Deepak Tiwari) Date: Thu, 2 Aug 2018 10:11:26 -0500 Subject: [openstack-dev] Add SRIOV mirroring support to Tap as a Service (https://review.openstack.org/#/c/584514/) Message-ID: Hi TaaS Dev team, This mail is regarding the comment to move the changes out of stable/ocata branch. I would like to explain the reasons why we require these changes in Ocata branch. We intend to deploy TaaS-plugin with Openstack-helm (OSH) charts in our labs. However OSH as of now supports only Ocata. So we need to put in the changes to Ocata branch of TaaS to enable us to deploy and test it. Of course in parallel we are working on a commit for master branch as well, however we require this feature in ocata branch also. Due to the fact that we are adding a new SRIOV driver, with no changes to existing OVS driver and there being no impact to TaaS API or DB/Data model, the existing functionality shouldn’t be impacted with this change. Please provide your go-ahead for the same. Br, Deepak -------------- next part -------------- An HTML attachment was scrubbed... URL: From mjturek at linux.vnet.ibm.com Thu Aug 2 15:24:48 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Thu, 2 Aug 2018 11:24:48 -0400 Subject: [openstack-dev] [ironic] Next bug day is Tuesday August 28th! Vote for timeslot! Message-ID: Hey all!
Bug day was pretty productive today and we decided to schedule another one for the end of this month, on Tuesday the 28th. For details see the etherpad for the event [0] Also since we're changing things up, we decided to also put up a vote for the timeslot [1] If you have any questions or suggestions on how to improve bug day, I am all ears! Hope to see you there! Thanks, Mike Turek [0] https://etherpad.openstack.org/p/ironic-bug-day-august-28-2018 [1] https://doodle.com/poll/ef4m9zmacm2ey7ce From openstack at sheep.art.pl Thu Aug 2 15:59:20 2018 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Thu, 2 Aug 2018 17:59:20 +0200 Subject: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation In-Reply-To: <20180802150947.GA1359@sm-workstation> References: <20180802150947.GA1359@sm-workstation> Message-ID: To be honest, I don't see much point in automatically creating bugs that nobody is going to look at. When you implement a new feature, it's up to you to make it available in Horizon and CLI and wherever else, since the people working there simply don't have the time to work on it. Creating a ticket will not magically make someone do that work for you. We are happy to assist with this, but that's it. Anything else is going to get added whenever someone has any free cycles, or it becomes necessary for some reason (like breaking compatibility). That's the current reality, and no automation is going to help with it. On Thu, Aug 2, 2018 at 5:09 PM Sean McGinnis wrote: > I'm wondering if someone on the infra team can give me some pointers on > how to > approach something, and looking for any general feedback as well. > > Background > ========== > We've had things like the DocImpact tag that could be added to commit > messages > that would tie into some automation to create a launchpad bug when that > commit > merged. 
While we had a larger docs team and out-of-tree docs, I think this > really helped us make sure we didn't lose track of needed documentation > updates. > > I was able to find part of how that is implemented in jeepyb: > > > http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/notify_impact.py > > Current Challenge > ================= > Similar to the need to follow up with documentation, I've seen a lot of > cases > where projects have added features or made other changes that impact > downstream > consumers of that project. Most often, I've seen cases where something like > python-cinderclient adds some functionality, but it is on projects like > Horizon > or python-openstackclient to proactively go out and discover those changes. > > Not only just seeking out those changes, but also evaluating whether a > given > change should have any impact on their project. So we've ended up in a lot > of > cases where either new functionality isn't made available through these > interfaces until a cycle or two later, or probably worse, cases where > something > is now broken with no one aware of it until an actual end user hits a > problem > and files a bug. > > ClientImpact Plan > ================= > I've run this by a few people and it seems to have some support. Or course > I'm > open to any other suggestions. > > What I would like to do is add a ClientImpact tag handling that could be > added > very similarly to DocImpact. The way I see it working is it would work in > much > the same way where project's can use this to add the tag to a commit > message > when they know it is something that will require additional work in OSC or > Horizon (or others). Then when that commit merges, automation would create > a > launchpad bug and/or Storyboard story, including a default set of client > projects. Perhaps we can find some way to make those impacted clients > configurable by source project, but that could be a follow-on optimization. 
> > I am concerned that this could create some extra overhead for these > projects. > But my hope is it would be a quick evaluation by a bug triager in those > projects where they can, hopefully, quickly determine if a change does not > in > fact impact them and just close the ones they don't think require any > follow on > work. > > I do hope that this will save some time and speed things up overall for > these > projects to be notified that there is something that needs their attention > without needing someone to take the time to actively go out and discover > that. > > Help Needed > =========== > From the bits I've found for the DocImpact handling, it looks like it > should > not be too much effort to implement the logic to handle a ClientImpact > flag. > But I have not been able to find all the moving parts that work together to > perform that automation. > > If anyone has any background knowledge on how DocImpact is implemented and > can > give me a few pointers, I think I should be able to take it from there to > get > this implemented. Or if there is someone that knows this well and is > interested > in working on some of the implementation, that would be very welcome too! > > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfinucan at redhat.com Thu Aug 2 16:37:33 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 02 Aug 2018 17:37:33 +0100 Subject: [openstack-dev] [Openstack] [nova] [os-vif] [vif_plug_ovs] Support for OVS DB tcp socket communication. 
In-Reply-To: References: Message-ID: <5013190251f53099ba4a3d66feb81db7891d94e5.camel@redhat.com> On Wed, 2018-07-25 at 15:22 +0530, pranab boruah wrote: > Hello folks, > I have filed a bug in os-vif: > https://bugs.launchpad.net/os-vif/+bug/1778724 and working on a > patch. Any feedback/comments from you guys would be extremely > helpful. > Bug details: > OVS DB server has the feature of listening over a TCP socket for > connections rather than just on the unix domain socket. [0] > > If the OVS DB server is listening over a TCP socket, then the > ovs-vsctl commands should include the ovsdb_connection parameter: > # ovs-vsctl --db=tcp:IP:PORT ... > eg: > # ovs-vsctl --db=tcp:169.254.1.1:6640 add-port br-int eth0 > Neutron supports running the ovs-vsctl commands with the > ovsdb_connection parameter. The ovsdb_connection parameter is > configured in the openvswitch_agent.ini file. [1] > While adding a vif to the ovs bridge (br-int), Nova (os-vif) invokes > the ovs-vsctl command. Today, there is no support for passing the > ovsdb_connection parameter while invoking the ovs-vsctl command. This > support should be added. It would enhance the functionality of > os-vif, since it would support the scenario where the OVS DB server is > listening on a TCP socket, and bring os-vif to functional parity with > Neutron. > [0] http://www.openvswitch.org/support/dist-docs/ovsdb-server.1.html > [1] > https://docs.openstack.org/neutron/pike/configuration/openvswitch-agent.html > > > TIA, Pranab Perhaps not the same thing, but would the patches mentioned in the below mail work for this too? http://lists.openstack.org/pipermail/openstack-dev/2018-March/127907.html Cheers, Stephen -------------- next part -------------- An HTML attachment was scrubbed... 
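[Editorial note inline with the thread above: the plumbing Pranab describes amounts to conditionally prepending a --db argument when assembling the ovs-vsctl command line. The following is a minimal sketch of that idea; the function name, the timeout default, and the return shape are illustrative assumptions, not the actual os-vif implementation.]

```python
# Hypothetical sketch: prepend --db=<ovsdb_connection> to ovs-vsctl
# invocations when a TCP endpoint is configured, mirroring what
# Neutron's openvswitch agent already supports. Not the real os-vif code.

def build_vsctl_args(subcommand_args, ovsdb_connection=None, timeout=120):
    """Assemble the argv for an ovs-vsctl call.

    subcommand_args: e.g. ['add-port', 'br-int', 'eth0']
    ovsdb_connection: e.g. 'tcp:169.254.1.1:6640'; None means ovs-vsctl
        falls back to the default unix domain socket.
    """
    cmd = ['ovs-vsctl', '--timeout=%d' % timeout]
    if ovsdb_connection:
        # OVS DB server listening on TCP rather than the unix socket
        cmd.append('--db=%s' % ovsdb_connection)
    cmd.extend(subcommand_args)
    return cmd


print(build_vsctl_args(['add-port', 'br-int', 'eth0'],
                       ovsdb_connection='tcp:169.254.1.1:6640'))
```

Neutron's agent reads the equivalent ovsdb_connection value from openvswitch_agent.ini; os-vif would need a similar config option so Nova can thread it through to every invocation.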
URL: From sean.mcginnis at gmx.com Thu Aug 2 16:42:14 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 2 Aug 2018 11:42:14 -0500 Subject: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation In-Reply-To: References: <20180802150947.GA1359@sm-workstation> Message-ID: <20180802164214.GA8088@sm-workstation> On Thu, Aug 02, 2018 at 05:59:20PM +0200, Radomir Dopieralski wrote: > To be honest, I don't see much point in automatically creating bugs that > nobody is going to look at. When you implement a new feature, it's up to > you to make it available in Horizon and CLI and wherever else, since the > people working there simply don't have the time to work on it. Creating a > ticket will not magically make someone do that work for you. We are happy > to assist with this, but that's it. Anything else is going to get added > whenever someone has any free cycles, or it becomes necessary for some > reason (like breaking compatibility). That's the current reality, and no > automation is going to help with it. > I don't think that's universally true with these projects. There are some on these teams that are interested in implementing support for new features and keeping existing things working right. The reality for most of this then is new features won't be available and users will move away from using something like Horizon for whatever else comes along that will give them access to what they need. I know there are very few developers focused on Cinder that also have the skillset to add functionality to Horizon. I agree ideally someone would work on things wherever they are needed, but I think there is a barrier with skills and priorities to make that happen. And at least in the case of Cinder, neither Horizon nor OpenStackClient are required. 
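[Editorial note inline with the thread above: the heart of the DocImpact hook Sean references is a scan of the merged commit message for the flag, with the rest of the flag line becoming the bug description. A ClientImpact equivalent might look roughly like this; the tag spelling, regex, and return value are assumptions for illustration, not the jeepyb notify_impact code.]

```python
import re

# Hypothetical sketch of the commit-message scanning a ClientImpact
# hook would need, modeled loosely on jeepyb's DocImpact handling.

CLIENT_IMPACT_RE = re.compile(r'^ClientImpact:?[ \t]*(?P<note>.*)$',
                              re.IGNORECASE | re.MULTILINE)

def find_client_impact(commit_message):
    """Return the ClientImpact note from a commit message, or None.

    The note would become the body of the launchpad bug / Storyboard
    story filed against the configured client projects (e.g. Horizon,
    python-openstackclient).
    """
    match = CLIENT_IMPACT_RE.search(commit_message)
    if match is None:
        return None
    return match.group('note').strip()


message = """Add volume revert-to-snapshot API

ClientImpact: new revert action needs exposing in OSC and Horizon

Change-Id: I0000000000000000000000000000000000000000
"""
print(find_client_impact(message))
# -> new revert action needs exposing in OSC and Horizon
```

On a merge event, the automation would file that note as a bug/story against a configurable list of client projects, which is the follow-on optimization Sean mentions.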
From jimmy at openstack.org Thu Aug 2 16:43:11 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 02 Aug 2018 11:43:11 -0500 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <16e69b47c8b71bf6f920ab8f3df61928@arcor.de> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> <5B5F295F.3090608@openstack.org> <1f5afd62cc3a9a8923586a404e707366@arcor.de> <16e69b47c8b71bf6f920ab8f3df61928@arcor.de> Message-ID: <5B63349F.4010204@openstack.org> Frank, We expect to have these papers up this afternoon. I'll update this thread when we do. Thanks! Jimmy Frank Kloeker wrote: > Hi Sebastian, > > okay, it's translated now. In Edge whitepaper is the problem with > XML-Parsing of the term AT&T. Don't know how to escape this. Maybe you > will see the warning during import too. > > kind regards > > Frank > > Am 2018-07-30 20:09, schrieb Sebastian Marcet: >> Hi Frank, >> i was double checking pot file and realized that original pot missed >> some parts of the original paper (subsections of the paper) apologizes >> on that >> i just re uploaded an updated pot file with missing subsections >> >> regards >> >> On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker wrote: >> >>> Hi Jimmy, >>> >>> from the GUI I'll get this link: >>> >> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >> >>> [1] >>> >>> paper version are only in container whitepaper: >>> >>> >> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >> >>> [2] >>> >>> In general there is no group named papers >>> >>> kind regards >>> >>> Frank >>> >>> Am 2018-07-30 17:06, schrieb Jimmy McArthur: >>> Frank, >>> >>> We're getting a 
404 when looking for the pot file on the Zanata API: >>> >> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >> >>> [3] >>> >>> As a result, we can't pull the po files. Any idea what might be >>> happening? >>> >>> Seeing the same thing with both papers... >>> >>> Thank you, >>> Jimmy >>> >>> Frank Kloeker wrote: >>> Hi Jimmy, >>> >>> Korean and German version are now done on the new format. Can you >>> check publishing? >>> >>> thx >>> >>> Frank >>> >>> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >>> Hi all - >>> >>> Follow up on the Edge paper specifically: >>> >> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >> >>> [4] This is now available. As I mentioned on IRC this morning, it >>> should >>> be VERY close to the PDF. Probably just needs a quick review. >>> >>> Let me know if I can assist with anything. >>> >>> Thank you to i18n team for all of your help!!! >>> >>> Cheers, >>> Jimmy >>> >>> Jimmy McArthur wrote: >>> Ian raises some great points :) I'll try to address below... >>> >>> Ian Y. Choi wrote: >>> Hello, >>> >>> When I saw overall translation source strings on container >>> whitepaper, I would infer that new edge computing whitepaper >>> source strings would include HTML markup tags. >>> One of the things I discussed with Ian and Frank in Vancouver is >>> the expense of recreating PDFs with new translations. It's >>> prohibitively expensive for the Foundation as it requires design >>> resources which we just don't have. As a result, we created the >>> Containers whitepaper in HTML, so that it could be easily updated >>> w/o working with outside design contractors. I indicated that we >>> would also be moving the Edge paper to HTML so that we could prevent >>> that additional design resource cost. 
>>> On the other hand, the source strings of edge computing whitepaper >>> which I18n team previously translated do not include HTML markup >>> tags, since the source strings are based on just text format. >>> The version that Akihiro put together was based on the Edge PDF, >>> which we unfortunately didn't have the resources to implement in the >>> same format. >>> >>> I really appreciate Akihiro's work on RST-based support on >>> publishing translated edge computing whitepapers, since >>> translators do not have to re-translate all the strings. >>> I would like to second this. It took a lot of initiative to work on >>> the RST-based translation. At the moment, it's just not usable for >>> the reasons mentioned above. >>> On the other hand, it seems that I18n team needs to investigate on >>> translating similar strings of HTML-based edge computing whitepaper >>> source strings, which would discourage translators. >>> Can you expand on this? I'm not entirely clear on why the HTML >>> based translation is more difficult. >>> >>> That's my point of view on translating edge computing whitepaper. >>> >>> For translating container whitepaper, I want to further ask the >>> followings since *I18n-based tools* >>> would mean for translators that translators can test and publish >>> translated whitepapers locally: >>> >>> - How to build translated container whitepaper using original >>> Silverstripe-based repository? >>> https://docs.openstack.org/i18n/latest/tools.html [5] describes >>> well how to build translated artifacts for RST-based OpenStack >>> repositories >>> but I could not find the way how to build translated container >>> whitepaper with translated resources on Zanata. >>> This is a little tricky. It's possible to set up a local version >>> of the OpenStack website >>> >> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >> >>> [6]). 
However, we have to manually ingest the po files as they are >>> completed and then push them out to production, so that wouldn't do >>> much to help with your local build. I'm open to suggestions on how >>> we can make this process easier for the i18n team. >>> >>> Thank you, >>> Jimmy >>> >>> With many thanks, >>> >>> /Ian >>> >>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>> Frank, >>> >>> I'm sorry to hear about the displeasure around the Edge paper. As >>> mentioned in a prior thread, the RST format that Akihiro worked did >>> not work with the Zanata process that we have been using with our >>> CMS. Additionally, the existing EDGE page is a PDF, so we had to >>> build a new template to work with the new HTML whitepaper layout we >>> created for the Containers paper. I outlined this in the thread " >>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>> Whitepaper Translation" on 6/25/18 and mentioned we would be ready >>> with the template around 7/13. >>> >>> We completed the work on the new whitepaper template and then put >>> out the pot files on Zanata so we can get the po language files >>> back. If this process is too cumbersome for the translation team, >>> I'm open to discussion, but right now our entire translation process >>> is based on the official OpenStack Docs translation process outlined >>> by the i18n team: >>> https://docs.openstack.org/i18n/latest/en_GB/tools.html [7] >>> >>> Again, I realize Akihiro put in some work on his own proposing the >>> new translation type. If the i18n team is moving to this format >>> instead, we can work on redoing our process. >>> >>> Please let me know if I can clarify further. >>> >>> Thanks, >>> Jimmy >>> >>> Frank Kloeker wrote: >>> Hi Jimmy, >>> >>> permission was added for you and Sebastian. The Container Whitepaper >>> is on the Zanata frontpage now. 
But we removed Edge Computing >>> whitepaper last week because there is a kind of displeasure in the >>> team since the results of translation are still not published beside >>> Chinese version. It would be nice if we have a commitment from the >>> Foundation that results are published in a specific timeframe. This >>> includes your requirements until the translation should be >>> available. >>> >>> thx Frank >>> >>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>> Sorry, I should have also added... we additionally need permissions >>> so >>> that we can add the a new version of the pot file to this project: >>> >> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >> >>> [8] Thanks! >>> Jimmy >>> >>> Jimmy McArthur wrote: >>> Hi all - >>> >>> We have both of the current whitepapers up and available for >>> translation. Can we promote these on the Zanata homepage? >>> >>> >> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >> >>> [9] >>> >> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >> >>> [10] Thanks all! 
>>> Jimmy >>> >>> >> __________________________________________________________________________ >> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> [12] >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] >> >> >> >> Links: >> ------ >> [1] >> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >> >> [2] >> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >> >> [3] >> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >> >> [4] >> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >> >> [5] https://docs.openstack.org/i18n/latest/tools.html >> [6] >> https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >> >> [7] 
https://docs.openstack.org/i18n/latest/en_GB/tools.html >> [8] >> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >> >> [9] >> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >> >> [10] >> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >> >> [11] >> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> [12] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From harlowja at fastmail.com Thu Aug 2 16:43:40 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Thu, 02 Aug 2018 09:43:40 -0700 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <5B6314BD.5030106@windriver.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <5B6297F5.9050507@fastmail.com> <5B6314BD.5030106@windriver.com> Message-ID: <5B6334BC.70209@fastmail.com> Storage space is a concern; really? If it really is, then keep X of them for some definition of X (days, number, hours, other)? Offload the snapshot asynchronously if snapshotting during requests is a problem. We have the power! :) Chris Friesen wrote: > On 08/01/2018 11:34 PM, Joshua Harlow wrote: > >> And I would be able to say request the explanation for a given request id >> (historical even) so that analysis could be done post-change and >> pre-change (say >> I update the algorithm for selection) so that the effects of >> alternations to >> said decisions could be determined. > > This would require storing a snapshot of all resources prior to > processing every request...seems like that could add overhead and > increase storage consumption. 
> > Chris > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sfinucan at redhat.com Thu Aug 2 16:44:23 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 02 Aug 2018 17:44:23 +0100 Subject: [openstack-dev] Setting-up NoVNC 1.0.0 with nova In-Reply-To: <9d76a294-82f0-4384-fee2-01043be19789@gmail.com> References: <9d76a294-82f0-4384-fee2-01043be19789@gmail.com> Message-ID: On Sun, 2018-05-20 at 09:33 -0700, Matt Riedemann wrote: > On 5/20/2018 6:37 AM, Thomas Goirand wrote: > > The novnc package in Debian and Ubuntu is getting very old. So I thought > > about upgrading to 1.0.0, which has lots of very nice newer features, > > like the full screen mode, and so on. > > > > All seemed to work, however, when trying to connect to the console of a > > VM, NoVNC attempts to connect tohttps://example.com:6080/websockify and > > then fails (with a 404). > > > > So I was wondering: what's missing in my setup so that there's a > > /websockify URL? Is there some missing code in the nova-novncproxy so > > that it would forward this URL to /usr/bin/websockify? If so, has anyone > > started working on it? > > > > Also, what's the status of NoVNC with Python 3? I saw lots of print > > statements which are easy to fix, though I even wonder if the code in > > the python-novnc package is useful. Who's using it? Nova-novncproxy? > > That's unlikely, since I didn't package a Python 3 version for it. 
> > Stephen Finucane (stephenfin on irc) would know best at this point, but > I know he ran into some issues with configuring nova when using novnc > 1.0.0, so check your novncproxy_base_url config option value: > > https://docs.openstack.org/nova/latest/configuration/config.html#vnc.novncproxy_base_url > > Specifically: > > "If using noVNC >= 1.0.0, you should use vnc_lite.html instead of > vnc_auto.html." We've got a patch up to resolve this in DevStack [1]. As Matt notes, the issue is because a path was renamed in noVNC 1.0.0. You could resolve this by including a symlink to the path in your package but it might be better long-term to simply ensure the deployment tools take care of this. We can eventually change the default in nova once noVNC 1.0.0 gains enough momentum. There's a WIP patch up for this too [2]. Let me know if you need more info, Stephen [1] https://review.openstack.org/#/c/550172/6 [2] https://review.openstack.org/#/c/550173/4 From liam.young at canonical.com Thu Aug 2 16:47:10 2018 From: liam.young at canonical.com (Liam Young) Date: Thu, 2 Aug 2018 17:47:10 +0100 Subject: [openstack-dev] [nova] Guests not getting metadata in a Cellsv2 deploy Message-ID: Hi, I have a fresh pike deployment and the guests are not getting metadata. To investigate it further it would really help me to understand what the metadata flow is supposed to look like. In my deployment the guest receives a 404 when hitting http://169.254.169.254/latest/meta-data. I have added some logging to expose the messages passing via amqp and I see the nova-api-metadata service making a call to the super-conductor asking for an InstanceMapping. The super-conductor sends a reply detailing which cell the instance is in and the urls for both mysql and rabbit. The nova-api-metadata service then sends a second message to the superconductor this time asking for an Instance obj. 
The super-conductor fails to find the instance and returns a failure with an "InstanceNotFound: Instance could not be found" message, and the nova-api-metadata service then sends a 404 to the original requester. I think the super-conductor is looking in the wrong database for the instance information. I believe it is looking in cell0 when it should actually be connecting to an entirely different instance of mysql which is associated with the cell that the instance is in. Should the super-conductor even be trying to retrieve the instance information, or should the nova-api-metadata service actually be messaging the conductor in the compute cell? Any pointers gratefully received! Thanks Liam -------------- next part -------------- An HTML attachment was scrubbed... URL: From msm at redhat.com Thu Aug 2 16:49:39 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 2 Aug 2018 12:49:39 -0400 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Today's meeting was primarily focused around two topics: the IETF[7] draft proposal for Best Practices when building HTTP protocols[8], and the upcoming OpenStack Project Teams Gathering (PTG)[9]. The group had taken a collective action to read the aforementioned draft[8], and as such we were well prepared to discuss its nuances. For the most part, we agreed that the draft is a good preparatory text when approaching HTTP APIs and that we should provide a link to it from the guidelines. Although there are a few areas that we identified as points of discussion regarding the text of the draft, on balance it was seen as helpful to the OpenStack community and consistent with our established guidelines. On the topic of the PTG, the group has started planning for the event and is in the early stages of gathering content. 
We will soon have an etherpad available for topic collection and as an added bonus mordred himself made a pronouncement about the API-SIG meeting being a priority in his schedule for this PTG. We hope to see you all there! The OpenStack infra team will be doing the final rename from API-WG to API-SIG this Friday. Although there are not expected to be any issues from this rename, we will be updating documentation references, and appreciate any help in chasing down bugs. There were no new guidelines to discuss, nor bugs that have arisen since last week. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * None # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community. * None # Guidelines Currently Under Review [3] * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. 
In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://ietf.org/ [8] https://tools.ietf.org/html/draft-ietf-httpbis-bcp56bis-06 [9] https://www.openstack.org/ptg/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From jungleboyj at gmail.com Thu Aug 2 16:51:04 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 2 Aug 2018 11:51:04 -0500 Subject: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation In-Reply-To: References: <20180802150947.GA1359@sm-workstation> Message-ID: On 8/2/2018 10:59 AM, Radomir Dopieralski wrote: > To be honest, I don't see much point in automatically creating bugs > that nobody is going to look at. When you implement a new feature, > it's up to you to make it available in Horizon and CLI and wherever > else, since the people working there simply don't have the time to > work on it. Creating a ticket will not magically make someone do that > work for you. We are happy to assist with this, but that's it. > Anything else is going to get added whenever someone has any free > cycles, or it becomes necessary for some reason (like breaking > compatibility). That's the current reality, and no automation is going > to help with it. > I disagree with this view.  
In the past there have been companies that have had people working on Horizon to keep it implemented for their purposes.  Having these bugs available would have made their work easier.  I also know that there are people on the OSC team that just work on keeping functions implemented and up to date. At a minimum, having these bugs automatically opened would help when someone is trying to figure out why the new function they are looking for is not available in OSC or Horizon.  A search would turn up the fact that it hasn't been implemented yet.  Currently, we frequently have the discussion 'Has that been implemented in Horizon yet?'  This would reduce the confusion around that subject. So, I support trying to make this happen as I feel it moves us towards a better UX for OpenStack. > On Thu, Aug 2, 2018 at 5:09 PM Sean McGinnis > wrote: > > I'm wondering if someone on the infra team can give me some > pointers on how to > approach something, and looking for any general feedback as well. > > Background > ========== > We've had things like the DocImpact tag that could be added to > commit messages > that would tie into some automation to create a launchpad bug when > that commit > merged. While we had a larger docs team and out-of-tree docs, I > think this > really helped us make sure we didn't lose track of needed > documentation > updates. > > I was able to find part of how that is implemented in jeepyb: > > http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/notify_impact.py > > Current Challenge > ================= > Similar to the need to follow up with documentation, I've seen a > lot of cases > where projects have added features or made other changes that > impact downstream > consumers of that project. Most often, I've seen cases where > something like > python-cinderclient adds some functionality, but it is on projects > like Horizon > or python-openstackclient to proactively go out and discover those > changes. 
> > Not only just seeking out those changes, but also evaluating > whether a given > change should have any impact on their project. So we've ended up > in a lot of > cases where either new functionality isn't made available through > these > interfaces until a cycle or two later, or probably worse, cases > where something > is now broken with no one aware of it until an actual end user > hits a problem > and files a bug. > > ClientImpact Plan > ================= > I've run this by a few people and it seems to have some support. > Or course I'm > open to any other suggestions. > > What I would like to do is add a ClientImpact tag handling that > could be added > very similarly to DocImpact. The way I see it working is it would > work in much > the same way where project's can use this to add the tag to a > commit message > when they know it is something that will require additional work > in OSC or > Horizon (or others). Then when that commit merges, automation > would create a > launchpad bug and/or Storyboard story, including a default set of > client > projects. Perhaps we can find some way to make those impacted clients > configurable by source project, but that could be a follow-on > optimization. > > I am concerned that this could create some extra overhead for > these projects. > But my hope is it would be a quick evaluation by a bug triager in > those > projects where they can, hopefully, quickly determine if a change > does not in > fact impact them and just close the ones they don't think require > any follow on > work. > > I do hope that this will save some time and speed things up > overall for these > projects to be notified that there is something that needs their > attention > without needing someone to take the time to actively go out and > discover that. > > Help Needed > =========== > From the bits I've found for the DocImpact handling, it looks like > it should > not be too much effort to implement the logic to handle a > ClientImpact flag. 
> But I have not been able to find all the moving parts that work > together to > perform that automation. > > If anyone has any background knowledge on how DocImpact is > implemented and can > give me a few pointers, I think I should be able to take it from > there to get > this implemented. Or if there is someone that knows this well and > is interested > in working on some of the implementation, that would be very > welcome too! > > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From prad at redhat.com Thu Aug 2 17:30:37 2018 From: prad at redhat.com (Pradeep Kilambi) Date: Thu, 2 Aug 2018 13:30:37 -0400 Subject: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition In-Reply-To: <1533161102.7169.12.camel@redhat.com> References: <1532974544.5688.10.camel@redhat.com> <1533161102.7169.12.camel@redhat.com> Message-ID: On Wed, Aug 1, 2018 at 6:06 PM Jill Rouleau wrote: > On Tue, 2018-07-31 at 07:38 -0400, Pradeep Kilambi wrote: > > > > > > On Mon, Jul 30, 2018 at 2:17 PM Jill Rouleau wrote: > > > On Mon, 2018-07-30 at 11:35 -0400, Pradeep Kilambi wrote: > > > > > > > > > > > > On Mon, Jul 30, 2018 at 10:42 AM Alex Schultz > > > > > > > wrote: > > > > > On Mon, Jul 30, 2018 at 8:32 AM, Martin Magr > > > > > wrote: > > > > > > > > > > > > > > > > > > On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi > > t.co > > > > > m> wrote: > > > > > >> > > > > > >> Your fellow reporter took a break from writing, but is now > > > back > > > > > on his > > > > > >> pen. > > > > > >> > > > > > >> Welcome to the twenty-fifth edition of a weekly update in > > > TripleO > > > > > world! > > > > > >> The goal is to provide a short reading (less than 5 minutes) > > > to > > > > > learn > > > > > >> what's new this week. > > > > > >> Any contributions and feedback are welcome. > > > > > >> Link to the previous version: > > > > > >> http://lists.openstack.org/pipermail/openstack-dev/2018-June/ > > > 1314 > > > > > 26.html > > > > > >> > > > > > >> +---------------------------------+ > > > > > >> | General announcements | > > > > > >> +---------------------------------+ > > > > > >> > > > > > >> +--> Rocky Milestone 3 is next week. After, any feature code > > > will > > > > > require > > > > > >> Feature Freeze Exception (FFE), asked on the mailing-list. > > > We'll > > > > > enter a > > > > > >> bug-fix only and stabilization period, until we can push the > > > > > first stable > > > > > >> version of Rocky. 
> > > > > > > > > > > > Hey guys, > > > > > > > > > > > > I would like to ask for FFE for backup and restore, where we > > > > > ended up > > > > > > deciding where the best place is for the code base for this > > > > > project (please > > > > > > see [1] for details). We believe that B&R support for > > > overcloud > > > > > control > > > > > > plane will be a good addition to the Rocky release, but we started > > > > > with this > > > > > > initiative quite late indeed. The final result should be the > > > support > > > > > in > > > > > > the openstack client, where "openstack overcloud (backup|restore)" > > > > > would work like > > > > > > a charm. Thanks in advance for considering this feature. > > > > > > > > > > > > > > > > Was there a blueprint/spec for this effort? Additionally do we > > > have > > > > > a > > > > > list of the outstanding work required for this? If it's just > > > these > > > > > two > > > > > playbooks, it might be ok for an FFE. But if there are additional > > > > > tripleoclient-related changes, I wouldn't necessarily feel > > > > > comfortable > > > > > with these unless we have a complete list of work. Just as a > > > side > > > > > note, I'm not sure putting these in tripleo-common is going to > > > be > > > > > the > > > > > ideal place for this. > > > > > > Was it this review? https://review.openstack.org/#/c/582453/ > > > > > > For Stein we'll have an ansible role[0] and playbook repo[1] where > > > these > > > types of tasks should live. 
This would be far > preferable to adding them to tripleo-common so late in the rocky cycle > and then having to break them back out right away in Stein. > Understood. To extend this further, we will need to integrate these into tripleoclient. That way a user can just run $ openstack overcloud backup - and get all the data backed up instead of running the playbooks manually. Would this be possible while keeping these in a separate tripleo ansible repo? How do we currently handle undercloud backup? Where do we currently keep those playbooks? > > > > > > > > > > > > > > > > > > > > Thanks Alex. For Rocky, if we can ship the playbooks with relevant > > > > docs we should be good. We will integrate with the client in the Stein > > > > release with restore logic included. Regarding putting tripleo-common, > > > > we're open to suggestions. I think Dan just submitted the review > > > so we > > > > can get some eyes on the playbooks. Where do you suggest is a better > > > > place for these instead? > > > > > > > > > > > > > > Thanks, > > > > > -Alex > > > > > > > > > > > Regards, > > > > > > Martin > > > > > > > > > > > > [1] https://review.openstack.org/#/c/582453/ > > > > > > > > > > > >> > > > > > >> +--> Next PTG will be in Denver, please propose topics: > > > > > >> https://etherpad.openstack.org/p/tripleoci-ptg-stein > > > > > >> +--> Multiple squads are currently brainstorming a framework > > > to > > > > > provide > > > > > >> validations pre/post upgrades - stay in touch! > > > > > >> > > > > > >> +------------------------------+ > > > > > >> | Continuous Integration | > > > > > >> +------------------------------+ > > > > > >> > > > > > >> +--> Sprint theme: migration to Zuul v3 (More on > > > > > >> https://trello.com/c/vyWXcKOB/841-sprint-16-goals) > > > > > >> +--> Sagi is the rover and Chandan is the ruck. Please tell > > > them > > > > > any CI > > > > > >> issue. 
> > > > > >> +--> Promotion on master is 4 days, 0 days on Queens and Pike > > > and > > > > > 1 day on > > > > > >> Ocata. > > > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad- > > > meet > > > > > ing > > > > > >> > > > > > >> +-------------+ > > > > > >> | Upgrades | > > > > > >> +-------------+ > > > > > >> > > > > > >> +--> Good progress on major upgrades workflow, need reviews! > > > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-s > > > quad > > > > > -status > > > > > >> > > > > > >> +---------------+ > > > > > >> | Containers | > > > > > >> +---------------+ > > > > > >> > > > > > >> +--> We switched python-tripleoclient to deploy containerized > > > > > undercloud > > > > > >> by default! > > > > > >> +--> Image prepare via workflow is still work in progress. > > > > > >> +--> More: > > > > > >> https://etherpad.openstack.org/p/tripleo-containers-squad-sta > > > tus > > > > > >> > > > > > >> +----------------------+ > > > > > >> | config-download | > > > > > >> +----------------------+ > > > > > >> > > > > > >> +--> UI integration is almost done (need review) > > > > > >> +--> Bug with failure listing is being fixed: > > > > > >> https://bugs.launchpad.net/tripleo/+bug/1779093 > > > > > >> +--> More: > > > > > >> https://etherpad.openstack.org/p/tripleo-config-download-squa > > > d-st > > > > > atus > > > > > >> > > > > > >> +--------------+ > > > > > >> | Integration | > > > > > >> +--------------+ > > > > > >> > > > > > >> +--> We're enabling decoupled deployment plans e.g for > > > OpenShift, > > > > > DPDK > > > > > >> etc: > > > > > >> https://review.openstack.org/#/q/topic:alternate_plans+(statu > > > s:op > > > > > en+OR+status:merged) > > > > > >> (need reviews). 
> > > > > >> +--> More: > > > > > >> https://etherpad.openstack.org/p/tripleo-integration-squad-status > > > > > >> > > > > > >> +---------+ > > > > > >> | UI/CLI | > > > > > >> +---------+ > > > > > >> > > > > > >> +--> Good progress on network configuration via UI > > > > > >> +--> Config-download patches are being reviewed and a lot of > > > > > testing is > > > > > >> going on. > > > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status > > > > > >> > > > > > >> +---------------+ > > > > > >> | Validations | > > > > > >> +---------------+ > > > > > >> > > > > > >> +--> Working on OpenShift validations, need reviews. > > > > > >> +--> More: > > > > > >> https://etherpad.openstack.org/p/tripleo-validations-squad-status > > > > > >> > > > > > >> +---------------+ > > > > > >> | Networking | > > > > > >> +---------------+ > > > > > >> > > > > > >> +--> No updates this week. > > > > > >> +--> More: > > > > > >> https://etherpad.openstack.org/p/tripleo-networking-squad-status > > > > > >> > > > > > >> +--------------+ > > > > > >> | Workflows | > > > > > >> +--------------+ > > > > > >> > > > > > >> +--> No updates this week. > > > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status > > > > > >> > > > > > >> +-----------+ > > > > > >> | Security | > > > > > >> +-----------+ > > > > > >> > > > > > >> +--> Working on Secrets management and Limit TripleO users > > > > > efforts > > > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-security-squad > > > > > >> > > > > > >> +------------+ > > > > > >> | Owl fact | > > > > > >> +------------+ > > > > > >> Elf owls live in cacti. They are the smallest owls, and > live in > > > > > the > > > > > >> southwestern United States and Mexico. It will sometimes make > its > > > > > home in > > > > > >> the giant saguaro cactus, nesting in holes made by other > animals. 
> > > > > However, > > > > > >> the elf owl isn’t picky and will also live in trees or on > > > > > telephone poles. > > > > > >> > > > > > >> Source: > > > > > >> http://mentalfloss.com/article/68473/15-mysterious-facts-about-owls > > > > > >> > > > > > >> Thank you all for reading and stay tuned! > > > > > >> -- > > > > > >> Your fellow reporter, Emilien Macchi > > > > -- > > > > Cheers, > > > > ~ Prad 
-- Cheers, ~ Prad From openstack at fried.cc Thu Aug 2 17:40:59 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 2 Aug 2018 12:40:59 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> Message-ID: <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> Jay et al- > And what I'm referring to is doing a single query per "related > resource/trait placement request group" -- which is pretty much what > we're heading towards anyway. > > If we had a request for: > > GET /allocation_candidates? 
>  resources0=VCPU:1& >  required0=HW_CPU_X86_AVX2,!HW_CPU_X86_VMX& >  resources1=MEMORY_MB:1024 > > and logged something like this: > > DEBUG: [placement request ID XXX] request group 1 of 2 for 1 PCPU, > requiring HW_CPU_X86_AVX2, forbidding HW_CPU_X86_VMX, returned 10 matches > > DEBUG: [placement request ID XXX] request group 2 of 2 for 1024 > MEMORY_MB returned 3 matches > > that would at least go a step towards being more friendly for debugging > a particular request's results. Well, that's easy [1] (but I'm sure you knew that when you suggested it). Produces logs like [2]. This won't be backportable, I'm afraid. [1] https://review.openstack.org/#/c/588350/ [2] http://paste.openstack.org/raw/727165/ From openstack at fried.cc Thu Aug 2 17:47:15 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 2 Aug 2018 12:47:15 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> Message-ID: I should have made it clear that this is a tiny incremental improvement, to a code path that almost nobody is even going to see until Stein. In no way was it intended to close this topic. Thanks, efried On 08/02/2018 12:40 PM, Eric Fried wrote: > Jay et al- > >> And what I'm referring to is doing a single query per "related >> resource/trait placement request group" -- which is pretty much what >> we're heading towards anyway. >> >> If we had a request for: >> >> GET /allocation_candidates? 
>>  resources0=VCPU:1& >>  required0=HW_CPU_X86_AVX2,!HW_CPU_X86_VMX& >>  resources1=MEMORY_MB:1024 >> >> and logged something like this: >> >> DEBUG: [placement request ID XXX] request group 1 of 2 for 1 PCPU, >> requiring HW_CPU_X86_AVX2, forbidding HW_CPU_X86_VMX, returned 10 matches >> >> DEBUG: [placement request ID XXX] request group 2 of 2 for 1024 >> MEMORY_MB returned 3 matches >> >> that would at least go a step towards being more friendly for debugging >> a particular request's results. > > Well, that's easy [1] (but I'm sure you knew that when you suggested > it). Produces logs like [2]. > > This won't be backportable, I'm afraid. > > [1] https://review.openstack.org/#/c/588350/ > [2] http://paste.openstack.org/raw/727165/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jaypipes at gmail.com Thu Aug 2 17:51:16 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 2 Aug 2018 13:51:16 -0400 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> Message-ID: <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> On 08/02/2018 01:40 PM, Eric Fried wrote: > Jay et al- > >> And what I'm referring to is doing a single query per "related >> resource/trait placement request group" -- which is pretty much what >> we're heading towards anyway. >> >> If we had a request for: >> >> GET /allocation_candidates? 
>>  resources0=VCPU:1& >>  required0=HW_CPU_X86_AVX2,!HW_CPU_X86_VMX& >>  resources1=MEMORY_MB:1024 >> >> and logged something like this: >> >> DEBUG: [placement request ID XXX] request group 1 of 2 for 1 PCPU, >> requiring HW_CPU_X86_AVX2, forbidding HW_CPU_X86_VMX, returned 10 matches >> >> DEBUG: [placement request ID XXX] request group 2 of 2 for 1024 >> MEMORY_MB returned 3 matches >> >> that would at least go a step towards being more friendly for debugging >> a particular request's results. > > Well, that's easy [1] (but I'm sure you knew that when you suggested > it). Produces logs like [2]. > > This won't be backportable, I'm afraid. > > [1] https://review.openstack.org/#/c/588350/ > [2] http://paste.openstack.org/raw/727165/ Yes. And we could do the same kind of approach with the non-granular request groups by reducing the single large SQL statement that is used for all resources and all traits (and all agg associations) into separate SELECT statements. It could be slightly less performance-optimized but more readable and easier to output debug logs like those above. -jay From fungi at yuggoth.org Thu Aug 2 17:56:23 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 2 Aug 2018 17:56:23 +0000 Subject: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation In-Reply-To: <20180802150947.GA1359@sm-workstation> References: <20180802150947.GA1359@sm-workstation> Message-ID: <20180802175622.p775m644j4ehm7gd@yuggoth.org> On 2018-08-02 10:09:48 -0500 (-0500), Sean McGinnis wrote: [...] > I was able to find part of how that is implemented in jeepyb: > > http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/notify_impact.py [...] 
As for the nuts and bolts here, the script you found is executed from a Gerrit hook every time a change merges: https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/files/gerrit/change-merged Gerrit hooks are a bit fragile but also terribly opaque (the only way to troubleshoot a failure is a Gerrit admin poring over a noisy log file on the server looking for a Java backtrace). If you decide to do something automated to open bugs/stories when changes merge, I recommend a Zuul job. We don't currently have a pipeline definition which generates a distinct build set for every merged change (the post and promote pipelines do supercedent queuing rather than independent queuing these days) but it would be easy to add one that does. It _could_ also be a candidate for a Gerrit ITS plug-in (there's one for SB but not for LP as far as I know), but implementing this would mean spending more time in Java than most of us care to experience. -- Jeremy Stanley
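[On the hook mechanics described above: Gerrit invokes the change-merged hook with flag-style arguments. A minimal, testable sketch of the argument handling and dispatch follows. The option names follow Gerrit's documented hook convention, but treat the exact set as an assumption since it varies by Gerrit version; the handlers are injected stubs, not jeepyb's real logic.]

```python
import argparse

def parse_hook_args(argv):
    """Parse the flag-style arguments Gerrit passes to a change-merged hook.

    Unknown options are tolerated so newer Gerrit versions that add flags
    don't break the hook.
    """
    parser = argparse.ArgumentParser()
    for opt in ('--change', '--change-url', '--project',
                '--branch', '--topic', '--submitter', '--commit'):
        parser.add_argument(opt, default='')
    args, _unknown = parser.parse_known_args(argv)
    return vars(args)

def handle_merge(args, get_commit_message, file_bug):
    # Stub dispatch: fetch the merged commit's message and file a bug if
    # an impact tag is present.  get_commit_message and file_bug are
    # injected so the sketch stays testable without a Gerrit server.
    message = get_commit_message(args['commit'])
    if 'ClientImpact' in message:
        file_bug(args['project'], message)
```

[A Zuul job doing the same work would get this information from the job's inventory variables instead of argv, but the dispatch step would look much the same.]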
From jillr at redhat.com Thu Aug 2 18:14:39 2018 From: jillr at redhat.com (Jill Rouleau) Date: Thu, 02 Aug 2018 11:14:39 -0700 Subject: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition In-Reply-To: References: <1532974544.5688.10.camel@redhat.com> <1533161102.7169.12.camel@redhat.com> Message-ID: <1533233679.4788.5.camel@redhat.com> On Thu, 2018-08-02 at 13:30 -0400, Pradeep Kilambi wrote: 
> > > > [0] https://github.com/openstack/ansible-role-openstack-operations > > > > [1] https://review.openstack.org/#/c/583415/ > > > Thanks Jill! The issue is, we want to be able to backport this to > > > Queens once merged. With the new repos you're mentioning would > > this be > > > possible? If not, then this won't work for us unfortunately. > > > > We wouldn't backport the new packages to Queens, however the repos > > will > > be on github and available to clone and use. This would be far > > preferable to adding them to tripleo-common so late in the rocky > > cycle > > and then having to break them back out right away in Stein. > > Understood. To extend this further, we will need to integrate these > into tripleoclient. That way a user can just run $ openstack overcloud > backup - and get all the data backed up instead of running the > playbooks manually. Would this be possible while keeping these in a > separate tripleo ansible repo? How do we currently handle undercloud > backup? Where do we currently keep those playbooks? We're not currently providing backup playbooks; this is a new feature. So it would be great if there were a spec we could organize around. Cedric is working on a patch for running ansible playbooks via tripleoclient that should help: https://review.openstack.org/#/c/586538/
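[To make the client integration being discussed concrete, here is a rough sketch of what such a wrapper might do: build the ansible-playbook invocation that a hypothetical `openstack overcloud backup` command would hand to a subprocess. The playbook path, inventory handling, and variable names are all illustrative assumptions, not the design under review.]

```python
def build_backup_command(playbook_dir, inventory, extra_vars=None):
    """Assemble the ansible-playbook argument list a hypothetical
    `openstack overcloud backup` command could pass to subprocess.run().

    All paths and names here are illustrative; the real layout is still
    being decided in the reviews mentioned in the thread.
    """
    cmd = ['ansible-playbook', '-i', inventory,
           '%s/backup.yml' % playbook_dir]
    # Sort for a deterministic command line, handy for logging and tests.
    for key, value in sorted((extra_vars or {}).items()):
        cmd.extend(['-e', '%s=%s' % (key, value)])
    return cmd
```

[Keeping the command construction separate from the subprocess call means the client can log the exact invocation, and users retain the option of running the playbook by hand.]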
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From openstack at fried.cc Thu Aug 2 18:20:43 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 2 Aug 2018 13:20:43 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> Message-ID: > And we could do the same kind of approach with the non-granular request > groups by reducing the single large SQL statement that is used for all > resources and all traits (and all agg associations) into separate SELECT > statements. > > It could be slightly less performance-optimized but more readable and > easier to output debug logs like those above. Okay, but first we should define the actual problem(s) we're trying to solve, as Chris says, so we can assert that it's worth the (possible) perf hit and (definite) dev resources, not to mention the potential for injecting bugs. That said, it might be worth doing what you suggest purely for the sake of being able to read and understand the code... 
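[For what it's worth, the "separate SELECT statements" idea can be sketched roughly as below. This is a toy schema and toy SQL invented purely for illustration -- placement's real inventories model (reserved, allocation_ratio, existing allocations, aggregates, traits) is much richer -- but it shows the debuggability win: each step's surviving-candidate count becomes a loggable number instead of being buried inside one opaque mega-query.]

```python
import sqlite3

# Toy schema, invented for illustration only -- not placement's real tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE inventories (provider TEXT, resource TEXT, free INTEGER);
    INSERT INTO inventories VALUES
      ('cn1', 'VCPU', 8), ('cn1', 'MEMORY_MB', 16384), ('cn1', 'DISK_GB', 0),
      ('cn2', 'VCPU', 4), ('cn2', 'MEMORY_MB', 8192),  ('cn2', 'DISK_GB', 0);
""")

request = {"VCPU": 2, "MEMORY_MB": 4096, "DISK_GB": 20}
candidates = None
for resource, amount in request.items():
    rows = conn.execute(
        "SELECT provider FROM inventories WHERE resource = ? AND free >= ?",
        (resource, amount))
    providers = {provider for (provider,) in rows}
    candidates = providers if candidates is None else candidates & providers
    # One DEBUG line per resource class instead of one all-or-nothing query:
    print(f"{resource}: {len(candidates)} candidate(s) remain")
# prints VCPU: 2, MEMORY_MB: 2, DISK_GB: 0 -> disk is the culprit
```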
efried From melwittt at gmail.com Thu Aug 2 19:04:41 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 2 Aug 2018 12:04:41 -0700 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> Message-ID: <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> On Thu, 2 Aug 2018 13:20:43 -0500, Eric Fried wrote: >> And we could do the same kind of approach with the non-granular request >> groups by reducing the single large SQL statement that is used for all >> resources and all traits (and all agg associations) into separate SELECT >> statements. >> >> It could be slightly less performance-optimized but more readable and >> easier to output debug logs like those above. > > Okay, but first we should define the actual problem(s) we're trying to > solve, as Chris says, so we can assert that it's worth the (possible) > perf hit and (definite) dev resources, not to mention the potential for > injecting bugs. The problem is an infamous one, which is, your users are trying to boot instances and they get "No Valid Host" and an instance in ERROR state. They contact support, and now support is trying to determine why NoValidHost happened. In the past, they would turn on DEBUG log level on the nova-scheduler, try another request, and take a look at the scheduler logs. They'd see a message, for example, "DiskFilter [start: 2, end: 0]" (there were 2 candidates before DiskFilter ran and there were 0 after it ran) when the scheduling fails, indicating that scheduling failed because no computes were reporting enough disk to fulfill the request. 
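[The per-filter accounting melanie describes can be sketched in a few lines of illustrative Python -- this is not nova's FilterScheduler, and the Host/Filter shapes here are invented -- but it shows why the old-style "[start: N, end: M]" messages pinpointed the scarce resource: the first filter to drop the candidate count to zero names it.]

```python
# Illustrative sketch only; Host, Filter and their fields are made up.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Host:
    name: str
    free_disk_gb: int

@dataclass
class Filter:
    name: str
    passes: Callable[[Host, dict], bool]

def run_filters(filters: List[Filter], hosts: List[Host], request: dict):
    candidates = list(hosts)
    for f in filters:
        start = len(candidates)
        candidates = [h for h in candidates if f.passes(h, request)]
        # e.g. "DiskFilter [start: 2, end: 0]" -- the filter that empties
        # the list identifies the resource nobody could supply.
        print(f"{f.name} [start: {start}, end: {len(candidates)}]")
    return candidates

disk = Filter("DiskFilter", lambda h, r: h.free_disk_gb >= r["disk_gb"])
hosts = [Host("cn1", 10), Host("cn2", 5)]
result = run_filters([disk], hosts, {"disk_gb": 20})
# prints: DiskFilter [start: 2, end: 0]
```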
The key thing here is they could see which resource was not available in their cluster. Now, with placement, all the resources are checked in one go and support can't tell which resource or trait was rejected, assuming it wasn't all of them. They want to know what resource or trait was rejected in order to help them find the problematic compute host or configuration or other and fix it. At present, I think the only approach support could take is to query a view of resource providers with their resource and trait availability and compare against the request flavor that failed, to figure out which resources or traits don't pass what's reported as available. Hope that helps. -melanie From jimmy at openstack.org Thu Aug 2 19:07:11 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 02 Aug 2018 14:07:11 -0500 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <5B63349F.4010204@openstack.org> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> <5B5F295F.3090608@openstack.org> <1f5afd62cc3a9a8923586a404e707366@arcor.de> <16e69b47c8b71bf6f920ab8f3df61928@arcor.de> <5B63349F.4010204@openstack.org> Message-ID: <5B63565F.1010109@openstack.org> The Edge and Containers translations are now live. As new translations become available, we will add them to the page. https://www.openstack.org/containers/ https://www.openstack.org/edge-computing/ Note that the Chinese translation has not been added to Zanata at this time, so I've left the PDF download up on that page. Thanks everyone and please let me know if you have questions or concerns! Cheers! Jimmy Jimmy McArthur wrote: > Frank, > > We expect to have these papers up this afternoon. I'll update this > thread when we do. > > Thanks! 
> Jimmy > > Frank Kloeker wrote: >> Hi Sebastian, >> >> okay, it's translated now. In Edge whitepaper is the problem with >> XML-Parsing of the term AT&T. Don't know how to escape this. Maybe >> you will see the warning during import too. >> >> kind regards >> >> Frank >> >> Am 2018-07-30 20:09, schrieb Sebastian Marcet: >>> Hi Frank, >>> i was double checking pot file and realized that original pot missed >>> some parts of the original paper (subsections of the paper) apologizes >>> on that >>> i just re uploaded an updated pot file with missing subsections >>> >>> regards >>> >>> On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker wrote: >>> >>>> Hi Jimmy, >>>> >>>> from the GUI I'll get this link: >>>> >>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>> >>>> [1] >>>> >>>> paper version are only in container whitepaper: >>>> >>>> >>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>> >>>> [2] >>>> >>>> In general there is no group named papers >>>> >>>> kind regards >>>> >>>> Frank >>>> >>>> Am 2018-07-30 17:06, schrieb Jimmy McArthur: >>>> Frank, >>>> >>>> We're getting a 404 when looking for the pot file on the Zanata API: >>>> >>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>> >>>> [3] >>>> >>>> As a result, we can't pull the po files. Any idea what might be >>>> happening? >>>> >>>> Seeing the same thing with both papers... >>>> >>>> Thank you, >>>> Jimmy >>>> >>>> Frank Kloeker wrote: >>>> Hi Jimmy, >>>> >>>> Korean and German version are now done on the new format. Can you >>>> check publishing? 
>>>> >>>> thx >>>> >>>> Frank >>>> >>>> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >>>> Hi all - >>>> >>>> Follow up on the Edge paper specifically: >>>> >>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>> >>>> [4] This is now available. As I mentioned on IRC this morning, it >>>> should >>>> be VERY close to the PDF. Probably just needs a quick review. >>>> >>>> Let me know if I can assist with anything. >>>> >>>> Thank you to i18n team for all of your help!!! >>>> >>>> Cheers, >>>> Jimmy >>>> >>>> Jimmy McArthur wrote: >>>> Ian raises some great points :) I'll try to address below... >>>> >>>> Ian Y. Choi wrote: >>>> Hello, >>>> >>>> When I saw overall translation source strings on container >>>> whitepaper, I would infer that new edge computing whitepaper >>>> source strings would include HTML markup tags. >>>> One of the things I discussed with Ian and Frank in Vancouver is >>>> the expense of recreating PDFs with new translations. It's >>>> prohibitively expensive for the Foundation as it requires design >>>> resources which we just don't have. As a result, we created the >>>> Containers whitepaper in HTML, so that it could be easily updated >>>> w/o working with outside design contractors. I indicated that we >>>> would also be moving the Edge paper to HTML so that we could prevent >>>> that additional design resource cost. >>>> On the other hand, the source strings of edge computing whitepaper >>>> which I18n team previously translated do not include HTML markup >>>> tags, since the source strings are based on just text format. >>>> The version that Akihiro put together was based on the Edge PDF, >>>> which we unfortunately didn't have the resources to implement in the >>>> same format. >>>> >>>> I really appreciate Akihiro's work on RST-based support on >>>> publishing translated edge computing whitepapers, since >>>> translators do not have to re-translate all the strings. 
>>>> I would like to second this. It took a lot of initiative to work on >>>> the RST-based translation. At the moment, it's just not usable for >>>> the reasons mentioned above. >>>> On the other hand, it seems that I18n team needs to investigate on >>>> translating similar strings of HTML-based edge computing whitepaper >>>> source strings, which would discourage translators. >>>> Can you expand on this? I'm not entirely clear on why the HTML >>>> based translation is more difficult. >>>> >>>> That's my point of view on translating edge computing whitepaper. >>>> >>>> For translating container whitepaper, I want to further ask the >>>> followings since *I18n-based tools* >>>> would mean for translators that translators can test and publish >>>> translated whitepapers locally: >>>> >>>> - How to build translated container whitepaper using original >>>> Silverstripe-based repository? >>>> https://docs.openstack.org/i18n/latest/tools.html [5] describes >>>> well how to build translated artifacts for RST-based OpenStack >>>> repositories >>>> but I could not find the way how to build translated container >>>> whitepaper with translated resources on Zanata. >>>> This is a little tricky. It's possible to set up a local version >>>> of the OpenStack website >>>> >>> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>> >>>> [6]). However, we have to manually ingest the po files as they are >>>> completed and then push them out to production, so that wouldn't do >>>> much to help with your local build. I'm open to suggestions on how >>>> we can make this process easier for the i18n team. >>>> >>>> Thank you, >>>> Jimmy >>>> >>>> With many thanks, >>>> >>>> /Ian >>>> >>>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>> Frank, >>>> >>>> I'm sorry to hear about the displeasure around the Edge paper. 
As >>>> mentioned in a prior thread, the RST format that Akihiro worked did >>>> not work with the Zanata process that we have been using with our >>>> CMS. Additionally, the existing EDGE page is a PDF, so we had to >>>> build a new template to work with the new HTML whitepaper layout we >>>> created for the Containers paper. I outlined this in the thread " >>>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>>> Whitepaper Translation" on 6/25/18 and mentioned we would be ready >>>> with the template around 7/13. >>>> >>>> We completed the work on the new whitepaper template and then put >>>> out the pot files on Zanata so we can get the po language files >>>> back. If this process is too cumbersome for the translation team, >>>> I'm open to discussion, but right now our entire translation process >>>> is based on the official OpenStack Docs translation process outlined >>>> by the i18n team: >>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html [7] >>>> >>>> Again, I realize Akihiro put in some work on his own proposing the >>>> new translation type. If the i18n team is moving to this format >>>> instead, we can work on redoing our process. >>>> >>>> Please let me know if I can clarify further. >>>> >>>> Thanks, >>>> Jimmy >>>> >>>> Frank Kloeker wrote: >>>> Hi Jimmy, >>>> >>>> permission was added for you and Sebastian. The Container Whitepaper >>>> is on the Zanata frontpage now. But we removed Edge Computing >>>> whitepaper last week because there is a kind of displeasure in the >>>> team since the results of translation are still not published beside >>>> Chinese version. It would be nice if we have a commitment from the >>>> Foundation that results are published in a specific timeframe. This >>>> includes your requirements until the translation should be >>>> available. >>>> >>>> thx Frank >>>> >>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>> Sorry, I should have also added... 
we additionally need permissions >>>> so >>>> that we can add the a new version of the pot file to this project: >>>> >>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>> >>>> [8] Thanks! >>>> Jimmy >>>> >>>> Jimmy McArthur wrote: >>>> Hi all - >>>> >>>> We have both of the current whitepapers up and available for >>>> translation. Can we promote these on the Zanata homepage? >>>> >>>> >>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>> >>>> [9] >>>> >>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>> >>>> [10] Thanks all! >>>> Jimmy >>>> >>>> >>> __________________________________________________________________________ >>> >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> [12] >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] >>> >>> >>> >>> Links: >>> ------ >>> [1] >>> 
https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>> >>> [2] >>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>> >>> [3] >>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>> >>> [4] >>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>> >>> [5] https://docs.openstack.org/i18n/latest/tools.html >>> [6] >>> https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>> >>> [7] https://docs.openstack.org/i18n/latest/en_GB/tools.html >>> [8] >>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>> >>> [9] >>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>> >>> [10] >>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>> >>> [11] >>> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> [12] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Thu Aug 2 19:16:10 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 2 Aug 2018 14:16:10 -0500 Subject: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation In-Reply-To: <20180802175622.p775m644j4ehm7gd@yuggoth.org> References: <20180802150947.GA1359@sm-workstation> <20180802175622.p775m644j4ehm7gd@yuggoth.org> Message-ID: <20180802191610.GA11956@sm-workstation> On Thu, Aug 02, 2018 at 05:56:23PM +0000, Jeremy Stanley wrote: > On 
2018-08-02 10:09:48 -0500 (-0500), Sean McGinnis wrote: > [...] > > I was able to find part of how that is implemented in jeepyb: > > > > http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/notify_impact.py > [...] > > As for the nuts and bolts here, the script you found is executed > from a Gerrit hook every time a change merges: > > https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/files/gerrit/change-merged > Thanks, that's at least a place I can start looking! > Gerrit hooks are a bit fragile but also terribly opaque (the only > way to troubleshoot a failure is a Gerrit admin pouring over a noisy > log file on the server looking for a Java backtrace). If you decide > to do something automated to open bugs/stories when changes merge, I > recommend a Zuul job. We don't currently have a pipeline definition > which generates a distinct build set for every merged change (the > post and promote pipelines do supercedent queuing rather than > independent queuing these days) but it would be easy to add one that > does. > > It _could_ also be a candidate for a Gerrit ITS plug-in (there's one > for SB but not for LP as far as I know), but implementing this would > mean spending more time in Java than most of us care to experience. Interesting... I hadn't looked into Gerrit functionality enough to know about these. Looks like this is probably what you are referring to? https://gerrit.googlesource.com/plugins/its-storyboard/ It's been awhile since I did anything significant with Java, but that might be an option. Maybe a fun weekend project at least to see what it would take to create an its-launchpad plugin. Thanks for the pointers! 
From chris.friesen at windriver.com Thu Aug 2 20:04:10 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 2 Aug 2018 14:04:10 -0600 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> Message-ID: <5B6363BA.9000900@windriver.com> On 08/02/2018 01:04 PM, melanie witt wrote: > The problem is an infamous one, which is, your users are trying to boot > instances and they get "No Valid Host" and an instance in ERROR state. They > contact support, and now support is trying to determine why NoValidHost > happened. In the past, they would turn on DEBUG log level on the nova-scheduler, > try another request, and take a look at the scheduler logs. At a previous Summit[1] there were some operators that said they just always ran nova-scheduler with debug logging enabled in order to deal with this issue, but that it was a pain to isolate the useful logs from the not-useful ones. 
Chris [1] in a discussion related to https://blueprints.launchpad.net/nova/+spec/improve-sched-logging From fungi at yuggoth.org Thu Aug 2 20:31:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 2 Aug 2018 20:31:22 +0000 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <5B6363BA.9000900@windriver.com> References: <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> <5B6363BA.9000900@windriver.com> Message-ID: <20180802203121.2gxk2dllthqyykay@yuggoth.org> On 2018-08-02 14:04:10 -0600 (-0600), Chris Friesen wrote: [...] > At a previous Summit[1] there were some operators that said they just always > ran nova-scheduler with debug logging enabled in order to deal with this > issue, but that it was a pain to isolate the useful logs from the not-useful > ones. [...] Also, the OpenStack VMT doesn't prioritize information leaks which are limited to debug-level logging[*], so leaving debug logging enabled is perhaps more risky if you don't safeguard those logs. [*] https://security.openstack.org/vmt-process.html#incident-report-taxonomy -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Aug 2 20:38:26 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 2 Aug 2018 20:38:26 +0000 Subject: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation In-Reply-To: <20180802191610.GA11956@sm-workstation> References: <20180802150947.GA1359@sm-workstation> <20180802175622.p775m644j4ehm7gd@yuggoth.org> <20180802191610.GA11956@sm-workstation> Message-ID: <20180802203826.3lo2j6u7jlcdoyrk@yuggoth.org> On 2018-08-02 14:16:10 -0500 (-0500), Sean McGinnis wrote: [...] 
> Interesting... I hadn't looked into Gerrit functionality enough to know about > these. Looks like this is probably what you are referring to? > > https://gerrit.googlesource.com/plugins/its-storyboard/ Yes, that. Khai Do (zaro) did the bulk of the work implementing it for us but isn't around as much these days (we miss you!). > It's been awhile since I did anything significant with Java, but that might be > an option. Maybe a fun weekend project at least to see what it would take to > create an its-launchpad plugin. [...] Careful; if you let anyone know you've touched a Gerrit plug-in the requests for more help will never end. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sbaker at redhat.com Thu Aug 2 21:41:11 2018 From: sbaker at redhat.com (Steve Baker) Date: Fri, 3 Aug 2018 09:41:11 +1200 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> <927f5ff4ec528bdcc5877c7a1a5635c62f5f1cb5.camel@redhat.com> <5c220d66-d4e5-2b19-048c-af3a37c846a3@nemebean.com> <88d7f66c-4215-b032-0b98-2671f14dab21@redhat.com> Message-ID: On 02/08/18 13:03, Alex Schultz wrote: > On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya wrote: >> On 7/6/18 7:02 PM, Ben Nemec wrote: >>> >>> >>> On 07/05/2018 01:23 PM, Dan Prince wrote: >>>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote: >>>>> >>>>> I would almost rather see us organize the directories by service >>>>> name/project instead of implementation. 
>>>>> Instead of:
>>>>>
>>>>> puppet/services/nova-api.yaml
>>>>> puppet/services/nova-conductor.yaml
>>>>> docker/services/nova-api.yaml
>>>>> docker/services/nova-conductor.yaml
>>>>>
>>>>> We'd have:
>>>>>
>>>>> services/nova/nova-api-puppet.yaml
>>>>> services/nova/nova-conductor-puppet.yaml
>>>>> services/nova/nova-api-docker.yaml
>>>>> services/nova/nova-conductor-docker.yaml
>>>>>
>>>>> (or perhaps even another level of directories to indicate
>>>>> puppet/docker/ansible?)
>>>>
>>>> I'd be open to this but doing changes on this scale is a much larger
>>>> developer and user impact than what I was thinking we would be willing
>>>> to entertain for the issue that caused me to bring this up (i.e. how to
>>>> identify services which get configured by Ansible).
>>>>
>>>> It's also worth noting that many projects keep these sorts of things in
>>>> different repos too. Like Kolla fully separates kolla-ansible and
>>>> kolla-kubernetes as they are quite divergent. We have been able to
>>>> preserve some of our common service architectures but as things move
>>>> towards kubernetes we may wish to change things structurally a bit
>>>> too.
>>> >>> That being said, because of the fact that the service yamls are >>> essentially an API for TripleO because they're referenced in user >> >> this ^^ >> >>> resource registries, I'm not sure it's worth the churn to move everything >>> either. I think that's going to be an issue either way though, it's just a >>> question of the scope. _Something_ is going to move around no matter how we >>> reorganize so it's a problem that needs to be addressed anyway. >> >> [tl;dr] I can foresee reorganizing that API becomes a nightmare for >> maintainers doing backports for queens (and the LTS downstream release based >> on it). Now imagine kubernetes support comes within those next a few years, >> before we can let the old API just go... >> >> I have an example [0] to share all that pain brought by a simple move of >> 'API defaults' from environments/services-docker to environments/services >> plus environments/services-baremetal. Each time a file changes contents by >> its old location, like here [1], I had to run a lot of sanity checks to >> rebase it properly. Like checking for the updated paths in resource >> registries are still valid or had to/been moved as well, then picking the >> source of truth for diverged old vs changes locations - all that to loose >> nothing important in progress. >> >> So I'd say please let's do *not* change services' paths/namespaces in t-h-t >> "API" w/o real need to do that, when there is no more alternatives left to >> that. >> > Ok so it's time to dig this thread back up. I'm currently looking at > the chrony support which will require a new service[0][1]. Rather than > add it under puppet, we'll likely want to leverage ansible. So I guess > the question is where do we put services going forward? Additionally > as we look to truly removing the baremetal deployment options and > puppet service deployment, it seems like we need to consolidate under > a single structure. 
> Given that we don't want to force too much churn,
> does this mean that we should align to the docker/services/*.yaml
> structure or should we be proposing a new structure that we can try to
> align on?
>
> There is outstanding tech-debt around the nested stacks and references
> within these services when we added the container deployments so it's
> something that would be beneficial to start tackling sooner rather
> than later. Personally I think we're always going to have the issue
> when we rename files that could have been referenced by custom
> templates, but I don't think we can continue to carry the outstanding
> tech debt around these static locations. Should we be investing in
> coming up with some sort of mappings that we can use/warn a user on
> when we move files?

When Stein development starts, the puppet services will have been deprecated for an entire cycle. Can I suggest we use this reorganization as the time we delete the puppet services files? This would relieve us of the burden of maintaining a deployment method that we no longer use. Also we'll gain a deployment speedup by removing a nested stack for each docker-based service. Then I'd suggest doing an "mv docker/services services" and moving any remaining files in the puppet directory into that. This is basically the naming that James suggested, except we wouldn't have to suffix the files with -puppet.yaml, -docker.yaml unless we still had more than one deployment method for that service. Finally, we could consider symlinking docker/services to services for a cycle. 
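[The mechanics of that reorganization can be sketched as below. This models the move on a scratch directory only -- the real change would be "git mv" inside tripleo-heat-templates plus updates to every resource_registry reference, and the file names here are invented examples.]

```python
# Illustrative sketch on a throwaway directory; not a migration script.
import os
import pathlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
(root / "docker/services").mkdir(parents=True)
(root / "puppet/services").mkdir(parents=True)
(root / "docker/services/nova-api.yaml").touch()
(root / "puppet/services/ntp.yaml").touch()

# 1. Promote the docker implementations to the unsuffixed path:
(root / "docker/services").rename(root / "services")
# 2. Fold any remaining puppet-only services into the same tree:
for f in (root / "puppet/services").glob("*.yaml"):
    f.rename(root / "services" / f.name)
(root / "puppet/services").rmdir()
# 3. Compatibility symlink so old docker/services/... references keep
#    resolving for one more cycle:
os.symlink("../services", root / "docker/services")

print(sorted(p.name for p in (root / "services").iterdir()))
# -> ['nova-api.yaml', 'ntp.yaml']
```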
I'm not sure how a swift-stored plan would handle this, but this would be a great reason to land Ian's plan speedup patch[1] which stores tripleo-heat-templates in a tarball :) [1] http://lists.openstack.org/pipermail/openstack-dev/2018-August/132768.html > Thanks, > -Alex > > [0] https://review.openstack.org/#/c/586679/ > [1] https://review.openstack.org/#/c/588111/ From lbragstad at gmail.com Thu Aug 2 22:06:21 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 2 Aug 2018 17:06:21 -0500 Subject: [openstack-dev] [keystone] Prospective RC1 Bugs Message-ID: <0710541d-3039-d544-d8e8-88d81a633995@gmail.com> Hey all, I went through all bugs opened during the Rocky release and came up with a list of ones that might be good to fix before next week [0]. The good news is that more than half are in progress and none of them are release blockers, just ones that would be good to get in. Let me know if you see anything reported this week that needs to get fixed. [0] https://bit.ly/2MeXN0L -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From michael.glasgow at oracle.com Thu Aug 2 22:18:33 2018 From: michael.glasgow at oracle.com (Michael Glasgow) Date: Thu, 2 Aug 2018 17:18:33 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <5B6363BA.9000900@windriver.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> <5B6363BA.9000900@windriver.com> Message-ID: On 08/02/18 15:04, Chris Friesen wrote: > On 08/02/2018 01:04 PM, melanie witt wrote: > >> The problem is an infamous one, which is, your users are trying to boot >> instances and they get "No Valid Host" and an instance in ERROR state. >> They contact support, and now support is trying to determine why >> NoValidHost happened. In the past, they would turn on DEBUG log level >> on the nova-scheduler, try another request, and take a look at the >> scheduler logs. > > At a previous Summit[1] there were some operators that said they just > always ran nova-scheduler with debug logging enabled in order to deal > with this issue, but that it was a pain [...] I would go a bit further and say it's likely to be unacceptable on a large cluster. It's expensive to deal with all those logs and to manually comb through them for troubleshooting this issue type, which can happen frequently with some setups. Secondarily there are performance and security concerns with leaving debug on all the time. As to "defining the problem", I think it's what Melanie said. It's about asking for X and the system saying, "sorry, can't give you X" with no further detail or even means of discovering it. 
More generally, any time a service fails to deliver a resource which it is primarily designed to deliver, it seems to me at this stage that should probably be taken a bit more seriously than just "check the log file, maybe there's something in there?" From the user's perspective, if nova fails to produce an instance, or cinder fails to produce a volume, or neutron fails to build a subnet, that's kind of a big deal, right? In such cases, would it be possible to generate a detailed exception object which contains all the necessary info to ascertain why that specific failure occurred? Ideally the operator should be able to correlate those exceptions with associated objects, e.g. the instance in ERROR state in this case, so that given that failed instance ID they can quickly remedy the user's problem without reading megabytes of log files. If there's a way to make this error handling generic across services to some extent, that seems like it would be great for operators. Such a framework might eventually hook into internal ticketing systems, maintenance reporting, or provide a starting point for self healing mechanisms, but initially the aim would just be to provide the operator with the bare minimum info necessary for more efficient break-fix. It could be a big investment, but it also doesn't seem like "optional" functionality from a large operator's perspective. "Enable debug and try again" is just not good enough IMHO. 
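To make the proposal concrete, a failure record of this kind might look something like the sketch below. Everything here is a hypothetical illustration; none of these class or field names exist in Nova or any other OpenStack service, and a real implementation would need to decide where such records are stored and how operators query them.

```python
import datetime
import uuid
from dataclasses import dataclass, field


@dataclass
class ResourceFailureRecord:
    """Illustrative structured record for a failed resource request."""
    service: str         # e.g. "nova"
    resource_type: str   # e.g. "instance"
    resource_id: str     # ID of the object left in ERROR state
    reason: str          # machine-readable cause, e.g. "no_valid_host"
    details: dict = field(default_factory=dict)  # per-filter candidate counts, etc.
    timestamp: str = field(
        default_factory=lambda: datetime.datetime.utcnow().isoformat())
    correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def summary(self) -> str:
        return ("%s failed to deliver %s %s: %s"
                % (self.service, self.resource_type, self.resource_id,
                   self.reason))


# An operator looking at an instance in ERROR state could then pull the
# record by instance ID instead of grepping debug logs:
record = ResourceFailureRecord(
    service="nova",
    resource_type="instance",
    resource_id="4ce19048-...",
    reason="no_valid_host",
    details={"hosts_before_ram_filter": 12, "hosts_after_ram_filter": 0},
)
print(record.summary())
```

The point of the "details" dict is exactly the correlation described above: given the failed instance ID, support can see which step eliminated all candidates without enabling DEBUG and re-running the request.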
-- Michael Glasgow From tony at bakeyournoodle.com Thu Aug 2 23:09:29 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 3 Aug 2018 09:09:29 +1000 Subject: [openstack-dev] [designate][stable] Stable Core Team Updates In-Reply-To: <1fd50f5e-9aa4-8c6d-729f-eecac4d7d5e6@ham.ie> References: <1fd50f5e-9aa4-8c6d-729f-eecac4d7d5e6@ham.ie> Message-ID: <20180802230928.GI15918@thor.bakeyournoodle.com> On Tue, Jul 31, 2018 at 06:39:36PM +0100, Graham Hayes wrote: > Hi Stable Team, > > I would like to nominate 2 new stable core reviewers for Designate. > > * Erik Olof Gunnar Andersson > * Jens Harbott (frickler) > > Erik has been doing a lot of stable reviews recently, and Jens has shown > that he understands the policy in other reviews (and has stable rights > on other repositories (like DevStack) already). Done. Jens doesn't seem to be doing active stable reviews but I've added them anyway. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From pabelanger at redhat.com Fri Aug 3 00:01:46 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 2 Aug 2018 20:01:46 -0400 Subject: [openstack-dev] [barbican][ara][helm][tempest] Removal of fedora-27 nodes Message-ID: <20180803000146.GA23278@localhost.localdomain> Greetings, We've had fedora-28 nodes online for some time in openstack-infra, I'd like to finish the migration process and remove fedora-27 images. Please take a moment to review and approve the following patches[1]. We'll be using the fedora-latest nodeset now, which makes it a little easier for openstack-infra to migrate to newer versions of fedora. Next time around, we'll send out an email to the ML once fedora-29 is online to give projects some time to test before we make the change. 
Thanks - Paul [1] https://review.openstack.org/#/q/topic:fedora-latest From jaypipes at gmail.com Fri Aug 3 00:27:22 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 2 Aug 2018 20:27:22 -0400 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> <5B6363BA.9000900@windriver.com> Message-ID: <97bfe7dc-eb25-bf30-7a84-6ef29105324e@gmail.com> On 08/02/2018 06:18 PM, Michael Glasgow wrote: > On 08/02/18 15:04, Chris Friesen wrote: >> On 08/02/2018 01:04 PM, melanie witt wrote: >> >>> The problem is an infamous one, which is, your users are trying to boot >>> instances and they get "No Valid Host" and an instance in ERROR >>> state. They contact support, and now support is trying to determine >>> why NoValidHost happened. In the past, they would turn on DEBUG log >>> level on the nova-scheduler, try another request, and take a look at >>> the scheduler logs. >> >> At a previous Summit[1] there were some operators that said they just >> always ran nova-scheduler with debug logging enabled in order to deal >> with this issue, but that it was a pain [...] > > I would go a bit further and say it's likely to be unacceptable on a > large cluster.  It's expensive to deal with all those logs and to > manually comb through them for troubleshooting this issue type, which > can happen frequently with some setups.  Secondarily there are > performance and security concerns with leaving debug on all the time. > > As to "defining the problem", I think it's what Melanie said.  
It's > about asking for X and the system saying, "sorry, can't give you X" with > no further detail or even means of discovering it. > > More generally, any time a service fails to deliver a resource which it > is primarily designed to deliver, it seems to me at this stage that > should probably be taken a bit more seriously than just "check the log > file, maybe there's something in there?"  From the user's perspective, > if nova fails to produce an instance, or cinder fails to produce a > volume, or neutron fails to build a subnet, that's kind of a big deal, > right? > > In such cases, would it be possible to generate a detailed exception > object which contains all the necessary info to ascertain why that > specific failure occurred? It's not an exception. It's normal course of events. NoValidHosts means there were no compute nodes that met the requested resource amounts. There's plenty of ways the operator can get usage and trait information and determine if there are providers that meet the requested amounts and required/forbidden traits. What we're talking about here is debugging information, plain and simple. If a SELECT statement against an Oracle DB returns 0 rows, is that an exception? No. Would an operator need to re-send the SELECT statement with an EXPLAIN SELECT in order to get information about what indexes were used to winnow the result set (to zero)? Yes. Either that, or the operator would need to gradually re-execute smaller SELECT statements containing fewer filters in order to determine which join or predicate caused a result set to contain zero rows. That's exactly what we're talking about here. It's not an exception. It's debugging information. 
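The kind of per-filter debugging information under discussion can be sketched in a few lines: record how many candidates survive each predicate, so that when the result set hits zero the culprit is visible without re-running anything. This is a toy illustration, not actual nova-scheduler or placement code.

```python
def run_filters_with_trace(hosts, filters):
    """Apply each named filter in turn, recording surviving-candidate
    counts so the culprit is obvious when the result hits zero."""
    trace = []
    candidates = list(hosts)
    for name, predicate in filters:
        candidates = [h for h in candidates if predicate(h)]
        trace.append((name, len(candidates)))
        if not candidates:
            break
    return candidates, trace


# Toy inventory: (hostname, free_ram_mb, free_disk_gb)
hosts = [("cn1", 4096, 80), ("cn2", 2048, 10), ("cn3", 8192, 5)]
filters = [
    ("ram_filter",  lambda h: h[1] >= 2048),
    ("disk_filter", lambda h: h[2] >= 100),   # nobody has 100G free
]

survivors, trace = run_filters_with_trace(hosts, filters)
print(survivors)  # []
print(trace)      # [('ram_filter', 3), ('disk_filter', 0)]
```

This is the programmatic equivalent of the "gradually re-execute smaller SELECT statements" approach: the trace shows disk_filter took the candidate list from 3 to 0, so that is where the operator looks first.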
Best, -jay From jiapei2 at lenovo.com Fri Aug 3 02:36:40 2018 From: jiapei2 at lenovo.com (Pei Pei2 Jia) Date: Fri, 3 Aug 2018 02:36:40 +0000 Subject: [openstack-dev] [openstack-infra][openstack-third-party-ci][nodepool][ironic] nodepool can't ssh to the VM it created Message-ID: <7155A01359422A448E2E280E0E57B1429CDFF544@CNMAILEX02.lenovo.com> Hi all, I’m now encountering a strange problem when using nodepool 0.5.0 to manage an openstack cloud. It can create a VM successfully in the openstack cloud, but can’t ssh to it. The nodepool.log is:

>nodepool.log << END
2018-08-02 22:25:36,152 ERROR nodepool.utils: Failed to negotiate SSH: Signature verification (ssh-rsa) failed.
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/nodepool/nodeutils.py", line 55, in ssh_connect
    client = SSHClient(ip, username, **connect_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/nodepool/sshclient.py", line 33, in __init__
    allow_agent=allow_agent)
  File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 353, in connect
    t.start_client(timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/paramiko/transport.py", line 494, in start_client
    raise e
SSHException: Signature verification (ssh-rsa) failed.
END

And when I check the VM start log, I find it shows:

A start job is running for unbound.service (3min 37s / 8min 28s)

And my nodepool.yml is:

providers:
  - name: cloud_183
    region-name: 'RegionOne'
    cloud: cloud_183
    max-servers: 2
    boot-timeout: 240
    launch-timeout: 600
    networks:
      - name: tenant
    clean-floating-ips: True
    images:
      - name: ubuntu-xenial
        min-ram: 2048
        diskimage: ubuntu-xenial
        username: jenkins
        key-name: nodepool
        private-key: '/home/nodepool/.ssh/id_rsa'

Does anyone happen to know about this? Thank you in advance. Jeremy Jia (贾培) Software Developer, Lenovo Cloud Technology Center 5F, Zhangjiang Mansion, 560 SongTao Rd. 
Pudong, Shanghai jiapei2 at lenovo.com Ph: 8621- Mobile: 8618116119081 www.lenovo.com / www.lenovo.com Forums | Blogs | Twitter | Facebook | Flickr Print only when necessary -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Fri Aug 3 04:20:40 2018 From: iwienand at redhat.com (Ian Wienand) Date: Fri, 3 Aug 2018 14:20:40 +1000 Subject: [openstack-dev] [all][docs] ACTION REQUIRED for projects using readthedocs Message-ID: <625a06b2-0a64-6018-8a3b-d2d8df419190@redhat.com> Hello,

tl;dr : any projects using the "docs-on-readthedocs" job template to trigger a build of their documentation in readthedocs need to:

 1) add the "openstackci" user as a maintainer of the RTD project
 2) generate a webhook integration URL for the project via RTD
 3) provide the unique webhook ID value in the "rtd_webhook_id" project variable

See https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-docs-on-readthedocs -- readthedocs has recently updated their API for triggering a documentation build. In the old API, anyone could POST to a known URL for the project and it would trigger a build. This end-point has stopped responding and we now need to use an authenticated webhook to trigger documentation builds. Since this is only done in the post and release pipelines, projects probably haven't had great feedback that current methods are failing and this may be a surprise. To check your publishing, you can go to the zuul builds page [1] and filter by your project and the "post" pipeline to find recent runs. There is now some setup required which can only be undertaken by a current maintainer of the RTD project. In short: add the "openstackci" user as a maintainer, add a "generic webhook" integration to the project, find the last bit of the URL from that and put it in the project variable "rtd_webhook_id". 
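For the curious, the authenticated trigger described above amounts to a POST against the project's generic-webhook endpoint. The sketch below only builds the request and does not send it; the endpoint URL form and the "token" field are assumptions based on the readthedocs generic webhook integration, so check the readthedocs documentation rather than copying this verbatim.

```python
import urllib.parse
import urllib.request

RTD_WEBHOOK_BASE = "https://readthedocs.org/api/v2/webhook"  # assumed endpoint


def build_rtd_trigger(project_slug, webhook_id, token):
    """Build (but do not send) the POST request that would trigger a
    readthedocs build via a generic webhook integration."""
    url = "%s/%s/%s/" % (RTD_WEBHOOK_BASE, project_slug, webhook_id)
    data = urllib.parse.urlencode({"token": token}).encode("utf-8")
    return urllib.request.Request(url, data=data, method="POST")


req = build_rtd_trigger("gerrit-dash-creator", "12345", "s3cr3t")
print(req.full_url)
```

The "rtd_webhook_id" project variable is just the last path component of that URL, which is why the setup steps ask you to copy it out of the integration page.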
Luckily OpenStack infra keeps a team of highly skilled digital artists on retainer and they have produced a handy visual guide available at https://imgur.com/a/Pp4LH31 Once the RTD project is set up, you must provide the webhook ID value in your project variables. This will look something like:

- project:
    templates:
      - docs-on-readthedocs
      - publish-to-pypi
    vars:
      rtd_webhook_id: '12345'
    check:
      jobs:
        ...

For actual examples, see pbrx [2] which keeps its config in tree, or gerrit-dash-creator which has its configuration in project-config [3]. Happy to help if anyone is having issues, via mail or #openstack-infra Thanks! -i p.s. You don't *have* to use the jobs from the docs-on-readthedocs templates and hence add infra as a maintainer; you can set up your own credentials with zuul secrets in tree and write your playbooks and jobs to use the generic role [4]. We're always happy to discuss any concerns. [1] https://zuul.openstack.org/builds.html [2] https://git.openstack.org/cgit/openstack/pbrx/tree/.zuul.yaml#n17 [3] https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml [4] https://zuul-ci.org/docs/zuul-jobs/roles.html#role-trigger-readthedocs From cjeanner at redhat.com Fri Aug 3 05:32:35 2018 From: cjeanner at redhat.com (Cédric Jeanneret) Date: Fri, 3 Aug 2018 07:32:35 +0200 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> <927f5ff4ec528bdcc5877c7a1a5635c62f5f1cb5.camel@redhat.com> <5c220d66-d4e5-2b19-048c-af3a37c846a3@nemebean.com> <88d7f66c-4215-b032-0b98-2671f14dab21@redhat.com> Message-ID: <38e79c59-a0f8-4d76-4005-db4637dffa5d@redhat.com> On 08/02/2018 11:41 PM, Steve Baker wrote: > > > On 02/08/18 13:03, Alex Schultz wrote: >> On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya >> wrote: >>> On 7/6/18 7:02 PM, Ben Nemec wrote: >>>> >>>> >>>> On 07/05/2018 01:23 PM, Dan Prince wrote: >>>>> On Thu, 
2018-07-05 at 14:13 -0400, James Slagle wrote: >>>>>> >>>>>> I would almost rather see us organize the directories by service >>>>>> name/project instead of implementation. >>>>>> >>>>>> Instead of: >>>>>> >>>>>> puppet/services/nova-api.yaml >>>>>> puppet/services/nova-conductor.yaml >>>>>> docker/services/nova-api.yaml >>>>>> docker/services/nova-conductor.yaml >>>>>> >>>>>> We'd have: >>>>>> >>>>>> services/nova/nova-api-puppet.yaml >>>>>> services/nova/nova-conductor-puppet.yaml >>>>>> services/nova/nova-api-docker.yaml >>>>>> services/nova/nova-conductor-docker.yaml >>>>>> >>>>>> (or perhaps even another level of directories to indicate >>>>>> puppet/docker/ansible?) >>>>> >>>>> I'd be open to this but doing changes on this scale is a much larger >>>>> developer and user impact than what I was thinking we would be willing >>>>> to entertain for the issue that caused me to bring this up (i.e. >>>>> how to >>>>> identify services which get configured by Ansible). >>>>> >>>>> Its also worth noting that many projects keep these sorts of things in >>>>> different repos too. Like Kolla fully separates kolla-ansible and >>>>> kolla-kubernetes as they are quite divergent. We have been able to >>>>> preserve some of our common service architectures but as things move >>>>> towards kubernetes we may which to change things structurally a bit >>>>> too. >>>> >>>> True, but the current directory layout was from back when we >>>> intended to >>>> support multiple deployment tools in parallel (originally >>>> tripleo-image-elements and puppet).  Since I think it has become >>>> clear that >>>> it's impractical to maintain two different technologies to do >>>> essentially >>>> the same thing I'm not sure there's a need for it now.  It's also worth >>>> noting that kolla-kubernetes basically died because there wasn't enough >>>> people to maintain both deployment methods, so we're not the only >>>> ones who >>>> have found that to be true.  
If/when we move to kubernetes I would >>>> anticipate it going like the initial containers work did - >>>> development for a >>>> couple of cycles, then a switch to the new thing and deprecation of >>>> the old >>>> thing, then removal of support for the old thing. >>>> >>>> That being said, because of the fact that the service yamls are >>>> essentially an API for TripleO because they're referenced in user >>> >>> this ^^ >>> >>>> resource registries, I'm not sure it's worth the churn to move >>>> everything >>>> either.  I think that's going to be an issue either way though, it's >>>> just a >>>> question of the scope.  _Something_ is going to move around no >>>> matter how we >>>> reorganize so it's a problem that needs to be addressed anyway. >>> >>> [tl;dr] I can foresee reorganizing that API becomes a nightmare for >>> maintainers doing backports for queens (and the LTS downstream >>> release based >>> on it). Now imagine kubernetes support comes within those next a few >>> years, >>> before we can let the old API just go... >>> >>> I have an example [0] to share all that pain brought by a simple move of >>> 'API defaults' from environments/services-docker to >>> environments/services >>> plus environments/services-baremetal. Each time a file changes >>> contents by >>> its old location, like here [1], I had to run a lot of sanity checks to >>> rebase it properly. Like checking for the updated paths in resource >>> registries are still valid or had to/been moved as well, then picking >>> the >>> source of truth for diverged old vs changes locations - all that to >>> loose >>> nothing important in progress. >>> >>> So I'd say please let's do *not* change services' paths/namespaces in >>> t-h-t >>> "API" w/o real need to do that, when there is no more alternatives >>> left to >>> that. >>> >> Ok so it's time to dig this thread back up. I'm currently looking at >> the chrony support which will require a new service[0][1]. 
Rather than >> add it under puppet, we'll likely want to leverage ansible. So I guess >> the question is where do we put services going forward?  Additionally >> as we look to truly removing the baremetal deployment options and >> puppet service deployment, it seems like we need to consolidate under >> a single structure.  Given that we don't want force too much churn, >> does this mean that we should align to the docker/services/*.yaml >> structure or should we be proposing a new structure that we can try to >> align on. >> >> There is outstanding tech-debt around the nested stacks and references >> within these services when we added the container deployments so it's >> something that would be beneficial to start tackling sooner rather >> than later.  Personally I think we're always going to have the issue >> when we rename files that could have been referenced by custom >> templates, but I don't think we can continue to carry the outstanding >> tech debt around these static locations.  Should we be investing in >> coming up with some sort of mappings that we can use/warn a user on >> when we move files? > > When Stein development starts, the puppet services will have been > deprecated for an entire cycle. Can I suggest we use this reorganization > as the time we delete the puppet services files? This would release us > of the burden of maintaining a deployment method that we no longer use. > Also we'll gain a deployment speedup by removing a nested stack for each > docker based service. > > Then I'd suggest doing an "mv docker/services services" and moving any > remaining files in the puppet directory into that. This is basically the > naming that James suggested, except we wouldn't have to suffix the files > with -puppet.yaml, -docker.yaml unless we still had more than one > deployment method for that service. We must be cautious, as a tree change might prevent backporting things when we need them in older releases. 
That was also discussed during the latter thread regarding reorganization - although I'm also all for a "simplify that repository" thing, it might become tricky in some cases :/. > > Finally, we could consider symlinking docker/services to services for a > cycle. I'm not sure how a swift-stored plan would handle this, but this > would be a great reason to land Ian's plan speedup patch[1] which stores > tripleo-heat-templates in a tarball :) Might be worth a try. Might also make it possible to backport things, as the "original files" would stay in the "old" location, making the new tree compatible with older releases like Newton (hey, yes, LTS for Red Hat). I think the templates are aggregated and generated prior to the upload, meaning it should not create new issues. Hopefully. Maybe shardy can jump in and provide some more info? > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-August/132768.html > >> Thanks, >> -Alex >> >> [0] https://review.openstack.org/#/c/586679/ >> [1] https://review.openstack.org/#/c/588111/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From duonghq at vn.fujitsu.com Fri Aug 3 07:40:51 2018 From: duonghq at vn.fujitsu.com (Ha Quang, Duong) Date: Fri, 3 Aug 2018 07:40:51 +0000 Subject: [openstack-dev] [kolla] ptl non candidacy In-Reply-To: References: Message-ID: <99da66e078b64ee584bfbb136ae59056@G07SGEXCMSGPS05.g07.fujitsu.local> Hi Jeffrey, Thank you for your work as PTL in the Rocky cycle and as release liaison for many cycles (you were already release liaison when I joined the Kolla community). Hope that we still see you around. Regards, Duong > From: Jeffrey Zhang [mailto:zhang.lei.fly at gmail.com] > Sent: Wednesday, July 25, 2018 10:48 AM > To: OpenStack Development Mailing List > Subject: [openstack-dev] [kolla] ptl non candidacy > > Hi all, > > I just wanna to say I am not running PTL for Stein cycle. I have been involved in Kolla project for almost 3 years. And recently my work changes a little, too. So > I may not have much time in the community in the future. Kolla is a great project and the community is also awesome. I would encourage everyone in the > community to consider for running.  > Thanks for your support :D. > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me From pratapagoutham at gmail.com Fri Aug 3 08:27:39 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Fri, 3 Aug 2018 13:57:39 +0530 Subject: [openstack-dev] [kolla] ptl non candidacy In-Reply-To: <99da66e078b64ee584bfbb136ae59056@G07SGEXCMSGPS05.g07.fujitsu.local> References: <99da66e078b64ee584bfbb136ae59056@G07SGEXCMSGPS05.g07.fujitsu.local> Message-ID: Hi Jeffrey, Thank you for your work as PTL of OpenStack-kolla. You were always friendly, helpful, and easy to approach. Thank you for all the help and support. Thanks Goutham Pratapa. 
On Fri, Aug 3, 2018 at 1:10 PM, Ha Quang, Duong wrote: > Hi Jeffrey, > > Thank you for your works as PTL in Rocky cycle and release liaison from > many cycle ago (at least I joined Kolla community, you are already release > liaison). > > Hope that we still see you around then. > > Regards, > Duong > > > > From: Jeffrey Zhang [mailto:zhang.lei.fly at gmail.com] > > Sent: Wednesday, July 25, 2018 10:48 AM > > To: OpenStack Development Mailing List openstack.org> > > Subject: [openstack-dev] [kolla] ptl non candidacy > > > > Hi all, > > > > I just wanna to say I am not running PTL for Stein cycle. I have been > involved in Kolla project for almost 3 years. And recently my work changes > a little, too. So > I may not have much time in the community in the > future. Kolla is a great project and the community is also awesome. I would > encourage everyone in the > community to consider for running. > > > Thanks for your support :D. > > -- > > Regards, > > Jeffrey Zhang > > Blog: http://xcodest.me > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eumel at arcor.de Fri Aug 3 09:19:17 2018 From: eumel at arcor.de (Frank Kloeker) Date: Fri, 03 Aug 2018 11:19:17 +0200 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <5B63565F.1010109@openstack.org> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> <5B5F295F.3090608@openstack.org> <1f5afd62cc3a9a8923586a404e707366@arcor.de> <16e69b47c8b71bf6f920ab8f3df61928@arcor.de> <5B63349F.4010204@openstack.org> <5B63565F.1010109@openstack.org> Message-ID: Hi Jimmy, thanks for the announcement. Great stuff! It looks really great and it's easy to navigate. I think a special thanks goes to Sebastian for designing the pages. One small remark: have you tried text-align: justify? I think it would be a little bit more readable, like a science paper (German word is: Ordnung). I put the projects again on the frontpage of the translation platform, so we'll get more translations shortly. kind regards Frank Am 2018-08-02 21:07, schrieb Jimmy McArthur: > The Edge and Containers translations are now live. As new > translations become available, we will add them to the page. > > https://www.openstack.org/containers/ > https://www.openstack.org/edge-computing/ > > Note that the Chinese translation has not been added to Zanata at this > time, so I've left the PDF download up on that page. > > Thanks everyone and please let me know if you have questions or > concerns! > > Cheers! > Jimmy > > Jimmy McArthur wrote: >> Frank, >> >> We expect to have these papers up this afternoon. I'll update this >> thread when we do. >> >> Thanks! >> Jimmy >> >> Frank Kloeker wrote: >>> Hi Sebastian, >>> >>> okay, it's translated now. In Edge whitepaper is the problem with >>> XML-Parsing of the term AT&T. Don't know how to escape this. 
Maybe >>> you will see the warning during import too. >>> >>> kind regards >>> >>> Frank >>> >>> Am 2018-07-30 20:09, schrieb Sebastian Marcet: >>>> Hi Frank, >>>> i was double checking pot file and realized that original pot missed >>>> some parts of the original paper (subsections of the paper) >>>> apologizes >>>> on that >>>> i just re uploaded an updated pot file with missing subsections >>>> >>>> regards >>>> >>>> On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker >>>> wrote: >>>> >>>>> Hi Jimmy, >>>>> >>>>> from the GUI I'll get this link: >>>>> >>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>>> [1] >>>>> >>>>> paper version are only in container whitepaper: >>>>> >>>>> >>>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>>> [2] >>>>> >>>>> In general there is no group named papers >>>>> >>>>> kind regards >>>>> >>>>> Frank >>>>> >>>>> Am 2018-07-30 17:06, schrieb Jimmy McArthur: >>>>> Frank, >>>>> >>>>> We're getting a 404 when looking for the pot file on the Zanata >>>>> API: >>>>> >>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>>> [3] >>>>> >>>>> As a result, we can't pull the po files. Any idea what might be >>>>> happening? >>>>> >>>>> Seeing the same thing with both papers... >>>>> >>>>> Thank you, >>>>> Jimmy >>>>> >>>>> Frank Kloeker wrote: >>>>> Hi Jimmy, >>>>> >>>>> Korean and German version are now done on the new format. Can you >>>>> check publishing? >>>>> >>>>> thx >>>>> >>>>> Frank >>>>> >>>>> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >>>>> Hi all - >>>>> >>>>> Follow up on the Edge paper specifically: >>>>> >>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>>> [4] This is now available. 
As I mentioned on IRC this morning, it >>>>> should >>>>> be VERY close to the PDF. Probably just needs a quick review. >>>>> >>>>> Let me know if I can assist with anything. >>>>> >>>>> Thank you to i18n team for all of your help!!! >>>>> >>>>> Cheers, >>>>> Jimmy >>>>> >>>>> Jimmy McArthur wrote: >>>>> Ian raises some great points :) I'll try to address below... >>>>> >>>>> Ian Y. Choi wrote: >>>>> Hello, >>>>> >>>>> When I saw overall translation source strings on container >>>>> whitepaper, I would infer that new edge computing whitepaper >>>>> source strings would include HTML markup tags. >>>>> One of the things I discussed with Ian and Frank in Vancouver is >>>>> the expense of recreating PDFs with new translations. It's >>>>> prohibitively expensive for the Foundation as it requires design >>>>> resources which we just don't have. As a result, we created the >>>>> Containers whitepaper in HTML, so that it could be easily updated >>>>> w/o working with outside design contractors. I indicated that we >>>>> would also be moving the Edge paper to HTML so that we could >>>>> prevent >>>>> that additional design resource cost. >>>>> On the other hand, the source strings of edge computing whitepaper >>>>> which I18n team previously translated do not include HTML markup >>>>> tags, since the source strings are based on just text format. >>>>> The version that Akihiro put together was based on the Edge PDF, >>>>> which we unfortunately didn't have the resources to implement in >>>>> the >>>>> same format. >>>>> >>>>> I really appreciate Akihiro's work on RST-based support on >>>>> publishing translated edge computing whitepapers, since >>>>> translators do not have to re-translate all the strings. >>>>> I would like to second this. It took a lot of initiative to work on >>>>> the RST-based translation. At the moment, it's just not usable for >>>>> the reasons mentioned above. 
>>>>> On the other hand, it seems that I18n team needs to investigate on >>>>> translating similar strings of HTML-based edge computing whitepaper >>>>> source strings, which would discourage translators. >>>>> Can you expand on this? I'm not entirely clear on why the HTML >>>>> based translation is more difficult. >>>>> >>>>> That's my point of view on translating edge computing whitepaper. >>>>> >>>>> For translating container whitepaper, I want to further ask the >>>>> followings since *I18n-based tools* >>>>> would mean for translators that translators can test and publish >>>>> translated whitepapers locally: >>>>> >>>>> - How to build translated container whitepaper using original >>>>> Silverstripe-based repository? >>>>> https://docs.openstack.org/i18n/latest/tools.html [5] describes >>>>> well how to build translated artifacts for RST-based OpenStack >>>>> repositories >>>>> but I could not find the way how to build translated container >>>>> whitepaper with translated resources on Zanata. >>>>> This is a little tricky. It's possible to set up a local version >>>>> of the OpenStack website >>>>> >>>> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>>> [6]). However, we have to manually ingest the po files as they are >>>>> completed and then push them out to production, so that wouldn't do >>>>> much to help with your local build. I'm open to suggestions on how >>>>> we can make this process easier for the i18n team. >>>>> >>>>> Thank you, >>>>> Jimmy >>>>> >>>>> With many thanks, >>>>> >>>>> /Ian >>>>> >>>>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>>> Frank, >>>>> >>>>> I'm sorry to hear about the displeasure around the Edge paper. As >>>>> mentioned in a prior thread, the RST format that Akihiro worked did >>>>> not work with the Zanata process that we have been using with our >>>>> CMS. 
Additionally, the existing EDGE page is a PDF, so we had to >>>>> build a new template to work with the new HTML whitepaper layout we >>>>> created for the Containers paper. I outlined this in the thread " >>>>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>>>> Whitepaper Translation" on 6/25/18 and mentioned we would be ready >>>>> with the template around 7/13. >>>>> >>>>> We completed the work on the new whitepaper template and then put >>>>> out the pot files on Zanata so we can get the po language files >>>>> back. If this process is too cumbersome for the translation team, >>>>> I'm open to discussion, but right now our entire translation >>>>> process >>>>> is based on the official OpenStack Docs translation process >>>>> outlined >>>>> by the i18n team: >>>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html [7] >>>>> >>>>> Again, I realize Akihiro put in some work on his own proposing the >>>>> new translation type. If the i18n team is moving to this format >>>>> instead, we can work on redoing our process. >>>>> >>>>> Please let me know if I can clarify further. >>>>> >>>>> Thanks, >>>>> Jimmy >>>>> >>>>> Frank Kloeker wrote: >>>>> Hi Jimmy, >>>>> >>>>> permission was added for you and Sebastian. The Container >>>>> Whitepaper >>>>> is on the Zanata frontpage now. But we removed Edge Computing >>>>> whitepaper last week because there is a kind of displeasure in the >>>>> team since the results of translation are still not published >>>>> beside >>>>> Chinese version. It would be nice if we have a commitment from the >>>>> Foundation that results are published in a specific timeframe. This >>>>> includes your requirements until the translation should be >>>>> available. >>>>> >>>>> thx Frank >>>>> >>>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>>> Sorry, I should have also added... 
we additionally need permissions >>>>> so >>>>> that we can add a new version of the pot file to this project: >>>>> >>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>> [8] Thanks! >>>>> Jimmy >>>>> >>>>> Jimmy McArthur wrote: >>>>> Hi all - >>>>> >>>>> We have both of the current whitepapers up and available for >>>>> translation. Can we promote these on the Zanata homepage? >>>>> >>>>> >>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>> [9] >>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>> [10] Thanks all! >>>>> Jimmy >>>>> >>>>> >>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> [12] >>>> 
>>>> >>>> Links: >>>> ------ >>>> [1] >>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>> [2] >>>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>> [3] >>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>> [4] >>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>> [5] https://docs.openstack.org/i18n/latest/tools.html >>>> [6] >>>> https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>> [7] https://docs.openstack.org/i18n/latest/en_GB/tools.html >>>> [8] >>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>> [9] >>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>> [10] >>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>> [11] >>>> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> [12] >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dougal at redhat.com Fri Aug 3 09:45:20 2018 From: dougal at redhat.com (Dougal Matthews) Date: Fri, 3 Aug 2018 10:45:20 +0100 Subject: [openstack-dev] [mistral] Clearing out old gerrit reviews In-Reply-To: References: Message-ID: On 9 July 2018 at 16:13, Dougal Matthews wrote: > Hey folks, > > I'd like to propose that we start abandoning old Gerrit reviews. 
> > This report shows how stale and out of date some of the reviews are: > http://stackalytics.com/report/reviews/mistral-group/open > > I would like to initially abandon anything without any activity for a > year, but we might want to consider a shorter limit - maybe 6 months. > Reviews can be restored, so the risk is low. > > What do you think? Any objections or counter suggestions? > > If I don't hear any complaints, I'll go ahead with this next week (or > maybe the following week). > That timeline was ambitious. I didn't get started :-) However, I did decide it would be best to formalise this plan somewhere. So I quickly wrote up the plan in a Mistral policy spec. If we can agree there and merge it, then I'll go ahead and start the cleanup. https://review.openstack.org/#/c/588492/ > > Cheers, > Dougal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougal at redhat.com Fri Aug 3 09:57:31 2018 From: dougal at redhat.com (Dougal Matthews) Date: Fri, 3 Aug 2018 10:57:31 +0100 Subject: [openstack-dev] [mistral] Removing Inactive Cores Message-ID: Hey, As we are approaching the end of Rocky I am doing some housekeeping. The people below have been removed from the Mistral core team due to reviewing inactivity in the last 180 days[1]. I would like to thank them for their contributions and they are welcome to re-join the Mistral core team if they become active in the future. - Lingxian Kong - Winson Chan [1] http://stackalytics.com/report/contribution/mistral-group/180 Thanks, Dougal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anlin.kong at gmail.com Fri Aug 3 10:13:36 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Fri, 3 Aug 2018 22:13:36 +1200 Subject: [openstack-dev] [mistral] Removing Inactive Cores In-Reply-To: References: Message-ID: +1 for me, I am still watching mistral :-) Cheers, Lingxian Kong On Fri, Aug 3, 2018 at 9:58 PM Dougal Matthews wrote: > Hey, > > As we are approaching the end of Rocky I am doing some housekeeping. > > The people below have been removed from the Mistral core team due to > reviewing inactivity in the last 180 days[1]. I would like to thank them > for their contributions and they are welcome to re-join the Mistral core > team if they become active in the future. > > - Lingxian Kong > - Winson Chan > > [1] http://stackalytics.com/report/contribution/mistral-group/180 > > Thanks, > Dougal > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgolovat at redhat.com Fri Aug 3 10:46:49 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Fri, 3 Aug 2018 12:46:49 +0200 Subject: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO In-Reply-To: References: Message-ID: +1 On Thu, Aug 2, 2018 at 7:45 AM, Marios Andreou wrote: > +1 ! > > > > On Wed, Aug 1, 2018 at 2:31 PM, Giulio Fidente wrote: >> >> Hi, >> >> I would like to propose Lukas Bezdicka core on TripleO. >> >> Lukas did a lot of work in our tripleoclient, tripleo-common and >> tripleo-heat-templates repos to make FFU possible. 
>> >> FFU, which is meant to permit upgrades from Newton to Queens, requires >> in-depth understanding of many TripleO components (for example Heat, >> Mistral and the TripleO client) but also of specific TripleO features >> which were added during the course of the three releases (for example >> config-download and upgrade tasks). I believe his FFU work to have been >> very challenging. >> >> Given his broad understanding, more recently Lukas started helping with >> reviews in other areas. >> >> I am so sure he'll be a great addition to our group that I am not even >> looking for comments, just votes :D >> -- >> Giulio Fidente >> GPG KEY: 08D733BA >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best Regards, Sergii Golovatiuk From pierre at stackhpc.com Fri Aug 3 11:48:38 2018 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 3 Aug 2018 12:48:38 +0100 Subject: [openstack-dev] [Blazar] PTL non candidacy In-Reply-To: References: Message-ID: Hi Masahito, Thank you very much for leading the Blazar project successfully! We wouldn't have accomplished so much without your dedication. Pierre On 31 July 2018 at 11:58, Masahito MUROI wrote: > Hi Blazar folks, > > I just want to announce that I'm not running for PTL for the Stein cycle. I > have been running this position from the Ocata cycle when we revived the > project. We've done a lot of successful work in the last 4 > cycles. 
> > I think it's time to change the position to someone else to move the Blazar > project further forward. I'll still be around the project and try to make > the Blazar project great. > > Thanks for all of your support. > > best regards, > Masahito > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From renat.akhmerov at gmail.com Fri Aug 3 12:14:44 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Fri, 3 Aug 2018 19:14:44 +0700 Subject: [openstack-dev] [mistral] Removing Inactive Cores In-Reply-To: References: Message-ID: Lingxian, you're welcome back any time as an active contributor if you wish! :) I want to thank you for all the contributions and achievements you made for our project! Renat Akhmerov @Nokia On 3 Aug 2018, 17:14 +0700, Lingxian Kong , wrote: > +1 for me, I am still watching mistral :-) > > Cheers, > Lingxian Kong > > > > On Fri, Aug 3, 2018 at 9:58 PM Dougal Matthews wrote: > > > Hey, > > > > > > As we are approaching the end of Rocky I am doing some housekeeping. > > > > > > The people below have been removed from the Mistral core team due to reviewing inactivity in the last 180 days[1]. I would like to thank them for their contributions and they are welcome to re-join the Mistral core team if they become active in the future. 
> > > > > > - Lingxian Kong > > > - Winson Chan > > > > > > [1] http://stackalytics.com/report/contribution/mistral-group/180 > > > > > > Thanks, > > > Dougal > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Fri Aug 3 12:15:51 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Fri, 3 Aug 2018 19:15:51 +0700 Subject: [openstack-dev] [mistral] Clearing out old gerrit reviews In-Reply-To: References: Message-ID: <38b41b4c-5e4a-46ab-ad93-718e44596be2@Spark> Dougal, the policy looks good for me. I gave it the second +2 but didn’t approve yet so that others could also review (e.g. Adriano and Vitalii). Thanks Renat Akhmerov @Nokia On 3 Aug 2018, 16:46 +0700, Dougal Matthews , wrote: > > On 9 July 2018 at 16:13, Dougal Matthews wrote: > > > Hey folks, > > > > > > I'd like to propose that we start abandoning old Gerrit reviews. > > > > > > This report shows how stale and out of date some of the reviews are: > > > http://stackalytics.com/report/reviews/mistral-group/open > > > > > > I would like to initially abandon anything without any activity for a year, but we might want to consider a shorter limit - maybe 6 months. Reviews can be restored, so the risk is low. > > > > > > What do you think? Any objections or counter suggestions? 
> > > > > > If I don't hear any complaints, I'll go ahead with this next week (or maybe the following week). > > > > That time line was ambitious. I didn't get started :-) > > > > However, I did decide it would be best to formalise this plan somewhere. So I quickly wrote up the plan in a Mistral policy spec. If we can agree there and merge it, then I'll go ahead and start the cleanup. > > > > https://review.openstack.org/#/c/588492/ > > > > > > > > > > Cheers, > > > Dougal > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From apetrich at redhat.com Fri Aug 3 12:36:02 2018 From: apetrich at redhat.com (Adriano Petrich) Date: Fri, 3 Aug 2018 13:36:02 +0100 Subject: [openstack-dev] [mistral] Clearing out old gerrit reviews In-Reply-To: <38b41b4c-5e4a-46ab-ad93-718e44596be2@Spark> References: <38b41b4c-5e4a-46ab-ad93-718e44596be2@Spark> Message-ID: Same. On 3 August 2018 at 13:15, Renat Akhmerov wrote: > Dougal, the policy looks good for me. I gave it the second +2 but didn’t > approve yet so that others could also review (e.g. Adriano and Vitalii). > > Thanks > > Renat Akhmerov > @Nokia > On 3 Aug 2018, 16:46 +0700, Dougal Matthews , wrote: > > On 9 July 2018 at 16:13, Dougal Matthews wrote: > >> Hey folks, >> >> I'd like to propose that we start abandoning old Gerrit reviews. >> >> This report shows how stale and out of date some of the reviews are: >> http://stackalytics.com/report/reviews/mistral-group/open >> >> I would like to initially abandon anything without any activity for a >> year, but we might want to consider a shorter limit - maybe 6 months. >> Reviews can be restored, so the risk is low. >> >> What do you think? 
Any objections or counter suggestions? >> >> If I don't hear any complaints, I'll go ahead with this next week (or >> maybe the following week). >> > > That time line was ambitious. I didn't get started :-) > > However, I did decide it would be best to formalise this plan somewhere. > So I quickly wrote up the plan in a Mistral policy spec. If we can agree > there and merge it, then I'll go ahead and start the cleanup. > > https://review.openstack.org/#/c/588492/ > > > >> >> Cheers, >> Dougal >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From liam.young at canonical.com Fri Aug 3 12:42:23 2018 From: liam.young at canonical.com (Liam Young) Date: Fri, 3 Aug 2018 13:42:23 +0100 Subject: [openstack-dev] [nova] Guests not getting metadata in a Cellsv2 deploy In-Reply-To: References: Message-ID: fwiw this appears to be due to a bug in nova. I've raised https://bugs.launchpad.net/nova/+bug/1785235 and proposed a fix https://review.openstack.org/588520 On Thu, Aug 2, 2018 at 5:47 PM Liam Young wrote: > Hi, > > I have a fresh pike deployment and the guests are not getting metadata. To > investigate it further it would really help me to understand what the > metadata flow is supposed to look like. > > In my deployment the guest receives a 404 when hitting > http://169.254.169.254/latest/meta-data. 
I have added some logging to > expose the messages passing via amqp and I see the nova-api-metadata > service making a call to the super-conductor asking for an InstanceMapping. > The super-conductor sends a reply detailing which cell the instance is in > and the urls for both mysql and rabbit. The nova-api-metadata service then > sends a second message to the superconductor this time asking for > an Instance obj. The super-conductor fails to find the instance and returns > a failure with a "InstanceNotFound: Instance could not be found" > message, the nova-api-metadata service then sends a 404 to the original > requester. > > I think the super-conductor is looking in the wrong database for the > instance information. I believe it is looking in cell0 when it should > actually be connecting to an entirely different instance of mysql which is > associated with the cell that the instance is in. > > Should the super-conductor even be trying to retrieve the instance > information or should the nova-api-metadata service actually be messaging > the conductor in the compute cell? > > Any pointers gratefully received! > Thanks > Liam > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Fri Aug 3 12:46:02 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Fri, 3 Aug 2018 14:46:02 +0200 Subject: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes Message-ID: <282a7bf1-ae3e-335a-e1a1-69996276f731@binero.se> Hello, I'm testing around with Magnum and have so far only had issues. I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora Atomic 28) and Kubernetes (on Fedora Atomic 27) and haven't been able to get it working. Running Queens, is there any information about supported images? Is Magnum maintained to support Fedora Atomic still? 
What is in charge of populating the certificates inside the instances? This seems to be the root of all the issues. I'm not using Barbican but the x509keypair driver; is that the reason? Perhaps I missed some documentation saying that x509keypair does not support what I'm trying to do? I've seen the following issues: Docker: * Master does not start and listen on TCP because of certificate issues dockerd-current[1909]: Could not load X509 key pair (cert: "/etc/docker/server.crt", key: "/etc/docker/server.key") * Node does not start with: Dependency failed for Docker Application Container Engine. docker.service: Job docker.service/start failed with result 'dependency'. Kubernetes: * Master etcd does not start because /run/etcd does not exist ** When that is created it fails to start because of certificate 2018-08-03 12:41:16.554257 C | etcdmain: open /etc/etcd/certs/server.crt: no such file or directory * Master kube-apiserver does not start because of certificate unable to load server certificate: open /etc/kubernetes/certs/server.crt: no such file or directory * Master heat script just sleeps forever waiting for port 8080 to become available (kube-apiserver) so it can never kubectl apply the final steps. * Node does not even start and times out when Heat deploys it, probably because master never finishes Any help is appreciated; perhaps I've missed something crucial. I've not tested Kubernetes on CoreOS yet. Best regards Tobias From lbragstad at gmail.com Fri Aug 3 13:21:22 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 3 Aug 2018 08:21:22 -0500 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 30 July 2018 Message-ID: # Keystone Team Update - Week of 30 July 2018 ## News This week was relatively quiet, but we're working towards RC1 as our next deadline. ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 20 changes this week. 
Mainly changes to continue moving APIs to flask and we landed a huge token provider API refactor. ## Changes that need Attention Search query: https://bit.ly/2wv7QLK There are 43 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. Reminder that we're in soft string freeze and past the 3rd milestone so prioritizing bug fixes is beneficial. ## Bugs This week we opened 4 new bugs, closed 1, and fixed 3. The main concern with fixing https://bugs.launchpad.net/keystone/+bug/1778945 was that it will impact downstream providers, hence the release note. Otherwise it's cleaned up a ton of technical debt (I appreciate the reviews here). ## Milestone Outlook This upcoming week is going to be RC1, which we will plan to cut by Friday unless critical bugs emerge. We do have a list of bugs to target to RC, but none of them are blockers. If it comes down to it, they can likely be pushed to Stein. If you notice anything that comes up as a release blocker, please let me know. https://bit.ly/2MeXN0L https://releases.openstack.org/rocky/schedule.html ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From cdent+os at anticdent.org Fri Aug 3 13:45:13 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 3 Aug 2018 14:45:13 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-31 Message-ID: HTML: https://anticdent.org/placement-update-18-31.html This is placement update 18-31, a weekly update of ongoing development related to the [OpenStack](https://www.openstack.org/) [placement service](https://developer.openstack.org/api-ref/placement/). # Most Important We are a week past feature freeze for the Rocky cycle, so finding and fixing bugs through testing and watching launchpad remains the big deal. Progress is also being made on making sure the Reshaper stack (see below) and using consumer generations in the report client are ready as soon as Stein opens. # What's Changed A fair few bug fixes and refactorings have merged in the past week, thanks to everyone chipping in. The functional differences you might see include: * Writing allocations is retried server side up to ten times. * Placement functional tests are using some of their own fixtures for output, log, and warning capture. This may lead to different output when tests fail. We should fix issues as they come up. * Stats handling in the resource tracker is now per-node, meaning it is both more correct and more efficient. * Resource provider generation conflict handling in the report client is much improved. * When using force_hosts or force_nodes, limit is not used when doing GET /allocation_candidates. * You can no longer use unexpected fields when writing allocations. * The install guide has been updated to include instructions about the placement database. # Bugs * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 16, +2 from last week. * [In progress placement bugs](https://goo.gl/vzGGDQ) 12, -1 on last week. 
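[Editor's note: the server-side retry on allocation writes mentioned under "What's Changed" above is the usual optimistic-concurrency pattern: read the provider generation, attempt the write, and retry from a fresh read if a concurrent writer bumped the generation first. A minimal sketch of the pattern with invented names — a toy in-memory model, not placement's actual code:

```python
class GenerationConflict(Exception):
    pass

# Toy in-memory provider; the real placement service keeps this in its DB.
provider = {"generation": 0, "allocations": {}}

def commit(expected_generation, new_allocations):
    # Simulated compare-and-swap: the write only lands if the generation is
    # still the one we read; otherwise a concurrent writer won the race.
    if provider["generation"] != expected_generation:
        raise GenerationConflict()
    provider["allocations"] = dict(new_allocations)
    provider["generation"] += 1

def write_allocations(new_allocations, conflicts=0, max_attempts=10):
    # Retry loop, as the placement server now does (up to ten times).
    # `conflicts` injects that many concurrent generation bumps for the demo.
    for attempt in range(1, max_attempts + 1):
        gen = provider["generation"]
        if conflicts > 0:              # another writer bumps the generation
            provider["generation"] += 1
            conflicts -= 1
        try:
            commit(gen, new_allocations)
            return attempt             # number of attempts it took
        except GenerationConflict:
            continue
    raise GenerationConflict("gave up after %d attempts" % max_attempts)

print(write_allocations({"VCPU": 4}, conflicts=2))  # 3: two retries needed
```

]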
# Main Themes ## Documentation Now that we are feature frozen we better document all the stuff. And more than likely we'll find some bugs while doing that documenting. Matt pointed out in response to last week's pupdate that the two bullets that had been listed here are no longer valid because we punted on most of the functionality (fully working shared and nested providers) that needed the docs. However, that doesn't mean we're in the clear. A good review of existing docs is warranted. ## Consumer Generations These are in place on the placement side. There's pending work on the client side, and a semantic fix on the server side, but neither are going to merge this cycle. * return 404 when no consumer found in allocs * Use placement 1.28 in scheduler report client (1.28 is consumer gens, which we hope to have ready for immediate Stein merge) ## Reshape Provider Trees Work has restarted on framing in the use of the reshaper from the compute manager. It won't merge for Rocky but we want it ready as soon as Stein opens. It's all at: ## Extraction A lot of test changes were made to prepare for the extraction of placement. Most of the remaining "uses of nova" in placement are things that will need to wait to post-extraction, but it is useful and informative to look at imports as there are some things remaining. On the [PTG etherpad](https://etherpad.openstack.org/p/nova-ptg-stein) I've proposed that we consider stopping forward feature progress on Placement in Stein so that: * We can give nova some time to catch up and find bugs in existing placement features. * We can do the extraction and large backlog of refactoring work that we'd like to do. That is at a list item of 'What does it take to declare placement "done"?' # Other Going to start this list with the 5 that remain from the 11 (nice work!) that were listed last week. After that will be anything else I can find. 
* Add unit test for non-placement resize * Use placement.inventory.inuse in report client * Delete allocations when it is re-allocated (This is addressing a TODO in the report client) * Remove Ocata comments which expires now * Ignore some updates from virt driver * Neutron work related to minimum bandwidth handling with placement * Resource provider examples (in osc-placement) * Get resource provider by uuid or name (in osc-placement) * Provide a useful message in the case of 500-error (in osc-placement) * Add image link in README.rst (in osc-placement) * Random names for [osc-placement] functional tests * Fix nits in resource_provider.py * [placement] Debug log per granular request group * Consider forbidden traits in early exit of _get_by_one_request * Enable nested allocation candidates in scheduler * Placement fixture refactorings and cleanups * PCPU: Define numa dedicated CPU resource class * Imposing restrictions on resource providers create uuid # End This is the last one of these I'm going to do for a while. It's less useful at the end and beginning of the cycle when there are often plenty of other resources shaping our attention. Also, I pretty badly need a break and an opportunity to more narrowly focus on fewer things for a while (you can translate that as "get things done rather than tracking things"). Unless someone else would like to pick up the mantle, I expect to pick it back up sometime in September. Ideally someone else would do it. It's been a very useful tool for me, and I hope for others, so it's not my wish that it go away. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From chris.friesen at windriver.com Fri Aug 3 14:14:14 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Fri, 3 Aug 2018 08:14:14 -0600 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <97bfe7dc-eb25-bf30-7a84-6ef29105324e@gmail.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> <5B6363BA.9000900@windriver.com> <97bfe7dc-eb25-bf30-7a84-6ef29105324e@gmail.com> Message-ID: <5B646336.8070001@windriver.com> On 08/02/2018 06:27 PM, Jay Pipes wrote: > On 08/02/2018 06:18 PM, Michael Glasgow wrote: >> More generally, any time a service fails to deliver a resource which it is >> primarily designed to deliver, it seems to me at this stage that should >> probably be taken a bit more seriously than just "check the log file, maybe >> there's something in there?" From the user's perspective, if nova fails to >> produce an instance, or cinder fails to produce a volume, or neutron fails to >> build a subnet, that's kind of a big deal, right? >> >> In such cases, would it be possible to generate a detailed exception object >> which contains all the necessary info to ascertain why that specific failure >> occurred? > > It's not an exception. It's normal course of events. NoValidHosts means there > were no compute nodes that met the requested resource amounts. I'm of two minds here. On the one hand, you have the case where the end user has accidentally requested some combination of things that isn't normally available, and they need to be able to ask the provider what they did wrong. 
I agree that this case is not really an exception, those resources were never available in the first place. On the other hand, suppose the customer issues a valid request and it works, and then issues the same request again and it fails, leading to a violation of that customers SLA. In this case I would suggest that it could be considered an exception since the system is not delivering the service that it was intended to deliver. Chris From mtreinish at kortar.org Fri Aug 3 14:34:43 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Fri, 3 Aug 2018 10:34:43 -0400 Subject: [openstack-dev] [stestr?][tox?][infra?] Unexpected success isn't a failure In-Reply-To: <20180710191614.GC19605@sinanju.localdomain> References: <20180710030347.GA11011@sinanju.localdomain> <20180710191614.GC19605@sinanju.localdomain> Message-ID: <20180803143443.GA9706@zeong> On Tue, Jul 10, 2018 at 03:16:14PM -0400, Matthew Treinish wrote: > On Tue, Jul 10, 2018 at 10:16:37AM +0100, Chris Dent wrote: > > On Mon, 9 Jul 2018, Matthew Treinish wrote: > > > > > It's definitely a bug, and likely a bug in stestr (or one of the lower level > > > packages like testtools or python-subunit), because that's what's generating > > > the return code. Tox just looks at the return code from the commands to figure > > > out if things were successful or not. I'm a bit surprised by this though I > > > thought we covered the unxsuccess and xfail cases because I would have expected > > > cdent to file a bug if it didn't. Looking at the stestr tests we don't have > > > coverage for the unxsuccess case so I can see how this slipped through. > > > > This was reported on testrepository some years ago and a bit of > > analysis was done: https://bugs.launchpad.net/testrepository/+bug/1429196 > > > > This actually helps a lot, because I was seeing the same issue when I tried > writing a quick patch to address this. When I manually poked the TestResult > object it didn't have anything in the unxsuccess list. 
So instead of relying > on that I wrote this patch: > > https://github.com/mtreinish/stestr/pull/188 > > which uses the output filter's internal function for counting results to > find unxsuccess tests. It's still not perfect though because if someone > runs with the --no-subunit-trace flag it still doesn't work (because that > call path never gets run) but it's at least a starting point. I've > marked it as WIP for now, but I'm thinking we could merge it as is and > leave the --no-subunit-trace and unxsuccess as known issues for now, > since xfail and unxsuccess are pretty uncommon in practice. (gabbi is the > only thing I've seen really use it) > > > > So yeah, I did file a bug but it fell off the radar during those > > dark times. > > > Just following up here, after digging some more and getting a detailed bug filed by electrofelix [1] I was able to throw together a different patch that should solve this in a better way: https://github.com/mtreinish/stestr/pull/190 Once that lands I can push a bugfix release to get it out there so people can actually use the fix. -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From aschultz at redhat.com Fri Aug 3 14:50:29 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 3 Aug 2018 08:50:29 -0600 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: <38e79c59-a0f8-4d76-4005-db4637dffa5d@redhat.com> References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> <927f5ff4ec528bdcc5877c7a1a5635c62f5f1cb5.camel@redhat.com> <5c220d66-d4e5-2b19-048c-af3a37c846a3@nemebean.com> <88d7f66c-4215-b032-0b98-2671f14dab21@redhat.com> <38e79c59-a0f8-4d76-4005-db4637dffa5d@redhat.com> Message-ID: On Thu, Aug 2, 2018 at 11:32 PM, Cédric Jeanneret wrote: > > > On 08/02/2018 11:41 PM, Steve Baker wrote: >> >> >> On 02/08/18 13:03, Alex Schultz wrote: >>> On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya >>> wrote: >>>> On 7/6/18 7:02 PM, Ben Nemec wrote: >>>>> >>>>> >>>>> On 07/05/2018 01:23 PM, Dan Prince wrote: >>>>>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote: >>>>>>> >>>>>>> I would almost rather see us organize the directories by service >>>>>>> name/project instead of implementation. >>>>>>> >>>>>>> Instead of: >>>>>>> >>>>>>> puppet/services/nova-api.yaml >>>>>>> puppet/services/nova-conductor.yaml >>>>>>> docker/services/nova-api.yaml >>>>>>> docker/services/nova-conductor.yaml >>>>>>> >>>>>>> We'd have: >>>>>>> >>>>>>> services/nova/nova-api-puppet.yaml >>>>>>> services/nova/nova-conductor-puppet.yaml >>>>>>> services/nova/nova-api-docker.yaml >>>>>>> services/nova/nova-conductor-docker.yaml >>>>>>> >>>>>>> (or perhaps even another level of directories to indicate >>>>>>> puppet/docker/ansible?) >>>>>> >>>>>> I'd be open to this but doing changes on this scale is a much larger >>>>>> developer and user impact than what I was thinking we would be willing >>>>>> to entertain for the issue that caused me to bring this up (i.e. 
>>>>>> how to >>>>>> identify services which get configured by Ansible). >>>>>> >>>>>> It's also worth noting that many projects keep these sorts of things in >>>>>> different repos too. Like Kolla fully separates kolla-ansible and >>>>>> kolla-kubernetes as they are quite divergent. We have been able to >>>>>> preserve some of our common service architectures but as things move >>>>>> towards kubernetes we may wish to change things structurally a bit >>>>>> too. >>>>> >>>>> True, but the current directory layout was from back when we >>>>> intended to >>>>> support multiple deployment tools in parallel (originally >>>>> tripleo-image-elements and puppet). Since I think it has become >>>>> clear that >>>>> it's impractical to maintain two different technologies to do >>>>> essentially >>>>> the same thing I'm not sure there's a need for it now. It's also worth >>>>> noting that kolla-kubernetes basically died because there weren't enough >>>>> people to maintain both deployment methods, so we're not the only >>>>> ones who >>>>> have found that to be true. If/when we move to kubernetes I would >>>>> anticipate it going like the initial containers work did - >>>>> development for a >>>>> couple of cycles, then a switch to the new thing and deprecation of >>>>> the old >>>>> thing, then removal of support for the old thing. >>>>> >>>>> That being said, because of the fact that the service yamls are >>>>> essentially an API for TripleO because they're referenced in user >>>> >>>> this ^^ >>>> >>>>> resource registries, I'm not sure it's worth the churn to move >>>>> everything >>>>> either. I think that's going to be an issue either way though, it's >>>>> just a >>>>> question of the scope. _Something_ is going to move around no >>>>> matter how we >>>>> reorganize so it's a problem that needs to be addressed anyway. 
>>>> [tl;dr] I can foresee reorganizing that API becomes a nightmare for >>>> maintainers doing backports for queens (and the LTS downstream >>>> release based >>>> on it). Now imagine kubernetes support comes within those next a few >>>> years, >>>> before we can let the old API just go... >>>> >>>> I have an example [0] to share all that pain brought by a simple move of >>>> 'API defaults' from environments/services-docker to >>>> environments/services >>>> plus environments/services-baremetal. Each time a file changes >>>> contents by >>>> its old location, like here [1], I had to run a lot of sanity checks to >>>> rebase it properly. Like checking for the updated paths in resource >>>> registries are still valid or had to/been moved as well, then picking >>>> the >>>> source of truth for diverged old vs changed locations - all that to >>>> lose >>>> nothing important in progress. >>>> >>>> So I'd say please let's do *not* change services' paths/namespaces in >>>> t-h-t >>>> "API" w/o real need to do that, when there is no more alternatives >>>> left to >>>> that. >>>> >>> Ok so it's time to dig this thread back up. I'm currently looking at >>> the chrony support which will require a new service[0][1]. Rather than >>> add it under puppet, we'll likely want to leverage ansible. So I guess >>> the question is where do we put services going forward? Additionally >>> as we look to truly removing the baremetal deployment options and >>> puppet service deployment, it seems like we need to consolidate under >>> a single structure. Given that we don't want to force too much churn, >>> does this mean that we should align to the docker/services/*.yaml >>> structure or should we be proposing a new structure that we can try to >>> align on. 
>>> >>> There is outstanding tech-debt around the nested stacks and references >>> within these services when we added the container deployments so it's >>> something that would be beneficial to start tackling sooner rather >>> than later. Personally I think we're always going to have the issue >>> when we rename files that could have been referenced by custom >>> templates, but I don't think we can continue to carry the outstanding >>> tech debt around these static locations. Should we be investing in >>> coming up with some sort of mappings that we can use/warn a user on >>> when we move files? >> >> When Stein development starts, the puppet services will have been >> deprecated for an entire cycle. Can I suggest we use this reorganization >> as the time we delete the puppet services files? This would release us >> of the burden of maintaining a deployment method that we no longer use. >> Also we'll gain a deployment speedup by removing a nested stack for each >> docker based service. >> >> Then I'd suggest doing an "mv docker/services services" and moving any >> remaining files in the puppet directory into that. This is basically the >> naming that James suggested, except we wouldn't have to suffix the files >> with -puppet.yaml, -docker.yaml unless we still had more than one >> deployment method for that service. > > We must be cautious, as a tree change might prevent backporting things > when we need them in older releases. That was also discussed during the > latter thread regarding reorganization - although I'm also all for a > "simplify that repository" thing, it might become tricky in some cases :/. > Yes there will be pain in back porting issues, but that shouldn't stop us from addressing this long-standing tech debt. Right now I think we have so many performance related problems due to the structure that it's getting to a point where we have to address it. 
Over the course of the last 3-4 cycles, our deployment jobs have started to hit the 3 hour mark where they used to be 2 hours. As I mentioned earlier some of this is related to the nested stacks from this structure. I'm less concerned about the back ports and more concerned about the user impact on upgrades as we move files around. It's hard to know what users were actually referencing and we keep bumping into the same problem whenever we move files around in THT. If we completed the reorg in one cycle, then back ports to Rocky would be harder, but any backports for < Rocky would just be the Rocky version. As folks backport changes, they only have to figure out this transition once. We already had several of these back port related complexities with puppet/docker (Ocata/Pike) and the structure changes between Mitaka/Newton so I'm not completely sure it's that big of an issue. >> >> Finally, we could consider symlinking docker/services to services for a >> cycle. I'm not sure how a swift-stored plan would handle this, but this >> would be a great reason to land Ian's plan speedup patch[1] which stores >> tripleo-heat-templates in a tarball :) > > Might be worth a try. Might also allow to backport things, as the > "original files" would stay in the "old" location, making the new tree > compatible with older release like Newton (hey, yes, LTS for Red Hat). I > think the templates are aggregated and generated prior the upload, > meaning it should not create new issues. Hopefully. > > Maybe shardy can jump in and provide some more info? 
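The "mappings that we can use/warn a user on" idea raised earlier in the thread could start as nothing more than a lookup table consulted wherever template paths are resolved. A rough sketch follows; the paths, table, and function name here are purely illustrative assumptions, not the actual tripleo-heat-templates layout or API:

```python
import warnings

# Hypothetical old-path -> new-path table, maintained alongside a reorg.
LEGACY_TEMPLATE_PATHS = {
    "docker/services/nova-api.yaml": "services/nova-api.yaml",
    "puppet/services/nova-api.yaml": "services/nova-api.yaml",
}


def resolve_template(path):
    """Map a possibly-deprecated template path to its current location,
    warning the user so custom resource registries can be updated."""
    if path in LEGACY_TEMPLATE_PATHS:
        new_path = LEGACY_TEMPLATE_PATHS[path]
        warnings.warn(
            "%s has moved to %s; please update your resource registry"
            % (path, new_path),
            DeprecationWarning,
        )
        return new_path
    return path


# Deprecated paths resolve to the new location (with a warning);
# current paths pass through unchanged.
print(resolve_template("docker/services/nova-api.yaml"))
print(resolve_template("services/keystone.yaml"))
```

A table like this would let old custom environments keep working for a cycle while telling operators exactly what to change, without keeping symlinks in the tree.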
> >> [1] >> http://lists.openstack.org/pipermail/openstack-dev/2018-August/132768.html >> >>> Thanks, >>> -Alex >>> >>> [0] https://review.openstack.org/#/c/586679/ >>> [1] https://review.openstack.org/#/c/588111/ >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Cédric Jeanneret > Software Engineer > DFG:DF > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From andrea.franceschini.rm at gmail.com Fri Aug 3 14:57:06 2018 From: andrea.franceschini.rm at gmail.com (Andrea Franceschini) Date: Fri, 3 Aug 2018 16:57:06 +0200 Subject: [openstack-dev] [tricircle] Tricircle or Trio2o In-Reply-To: <7ed0df37.65a5.164f8954cf9.Coremail.linghucongsong@163.com> References: <7ed0df37.65a5.164f8954cf9.Coremail.linghucongsong@163.com> Message-ID: Hello Ling, thank you for answering, I'm glad to see that the Trio2o project will be revived in the near future. Meanwhile it would be nice to know what approach people use to deploy multi-site openstack. I mean, I've read somewhere about solutions using something like a multi-site heat, but I failed to dig into this as I couldn't find any resource. Thanks, Andrea Il giorno gio 2 ago 2018 alle ore 05:01 linghucongsong ha scritto: > > HI Andrea ! > Yes, just as you said. Tricircle currently only covers networking. Because trio2o is not an official openstack project, nobody has contributed to it for a long time. > But recently, for the next openstack stein cycle,
we have a plan to make tricircle and > trio2o work together in the tricircle stein plan. see below link: > https://etherpad.openstack.org/p/tricircle-stein-plan > After this finishes we can use tricircle and trio2o together and make multisite openstack > solutions more effective. > > > > > > At 2018-08-02 00:55:30, "Andrea Franceschini" wrote: > >Hello All, > > > >While I was looking for multisite openstack solutions I stumbled on > >the Tricircle project which seemed fairly perfect for the job except that > >it was split in two parts, tricircle itself for the network part and > >Trio2o for all the rest. > > > >Now it seems that the Trio2o project is no longer maintained and I'm > >wondering what other options exist for multisite openstack, stated > >that tricircle seems more NFV oriented. > > > >Actually a heat multisite solution would work too, but I cannot find > >any reference to this kind of solutions. > > > >Do you have any idea/advice? > > > >Thanks, > > > >Andrea > > > >__________________________________________________________________________ > >OpenStack Development Mailing List (not for usage questions) > >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > From MM9745 at att.com Fri Aug 3 15:05:52 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Fri, 3 Aug 2018 15:05:52 +0000 Subject: [openstack-dev] [openstack-helm] [vote] Core Reviewer nomination for Chris Wedgwood Message-ID: <7C64A75C21BB8D43BD75BB18635E4D896C958551@MOSTLS1MSGUSRFF.ITServices.sbc.com> OpenStack-Helm core reviewer team, I would like to nominate Chris Wedgwood as a core reviewer for OpenStack-Helm. Chris is one of the most prolific reviewers in the OSH community, but more importantly is a very thorough and helpful reviewer. Many of my most insightful reviews are thanks to him, and I know the same is true for many other team members. 
In addition, he is an accomplished OSH engineer and has contributed features that run the gamut, including Ceph integration, Calico support, Neutron configuration, Gating, and core Helm-Toolkit functionality. Please consider this email my +1 vote. A +1 vote indicates that you are in favor of his core reviewer candidacy, and a -1 is a veto. Voting will be open for the next seven days (closing 8/10) or until all OpenStack-Helm core reviewers cast their vote. Thank you, Matt McEuen From openstack at fried.cc Fri Aug 3 15:12:16 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 3 Aug 2018 10:12:16 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <5B646336.8070001@windriver.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> <5B6363BA.9000900@windriver.com> <97bfe7dc-eb25-bf30-7a84-6ef29105324e@gmail.com> <5B646336.8070001@windriver.com> Message-ID: > I'm of two minds here. > > On the one hand, you have the case where the end user has accidentally > requested some combination of things that isn't normally available, and > they need to be able to ask the provider what they did wrong.  I agree > that this case is not really an exception, those resources were never > available in the first place. > > On the other hand, suppose the customer issues a valid request and it > works, and then issues the same request again and it fails, leading to a > violation of that customers SLA.  In this case I would suggest that it > could be considered an exception since the system is not delivering the > service that it was intended to deliver. 
While the case can be made for this being an exception from *nova* (I'm not getting into that), it is not an exception from the point of view of *placement*. You asked a service "list the ways I can do X". The first time, there were three ways. The second time, zero. It would be like saying:

    # This is the "placement" part
    results = [x for x in l if <x satisfies the request>]

    # It is up to the placement *consumer* (e.g. nova) to do this, or not
    if len(results) == 0:
        raise Something()

The hard point, which I'm not disputing, is that the end user needs a way to understand *why* len(results) == 0. efried . From wilkers.steve at gmail.com Fri Aug 3 15:38:51 2018 From: wilkers.steve at gmail.com (Steve Wilkerson) Date: Fri, 3 Aug 2018 10:38:51 -0500 Subject: [openstack-dev] [openstack-helm] [vote] Core Reviewer nomination for Chris Wedgwood In-Reply-To: <7C64A75C21BB8D43BD75BB18635E4D896C958551@MOSTLS1MSGUSRFF.ITServices.sbc.com> References: <7C64A75C21BB8D43BD75BB18635E4D896C958551@MOSTLS1MSGUSRFF.ITServices.sbc.com> Message-ID: +1 On Fri, Aug 3, 2018 at 10:05 AM, MCEUEN, MATT wrote: > OpenStack-Helm core reviewer team, > > I would like to nominate Chris Wedgwood as core review for the > OpenStack-Helm. > > Chris is one of the most prolific reviewers in the OSH community, but more > importantly is a very thorough and helpful reviewer. Many of my most > insightful reviews are thanks to him, and I know the same is true for many > other team members. 
> > Thank you, > Matt McEuen > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Fri Aug 3 15:39:29 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 3 Aug 2018 10:39:29 -0500 Subject: [openstack-dev] [neutron] Stein PTG etherpad Message-ID: Dear Stackers, I have started an etherpad to collect topic proposals to be discussed during the PTG in Denver, September 10th - 14th: https://etherpad.openstack.org/p/neutron-stein-ptg . Please feel free to add your proposals under the "Proposed topics to be scheduled" section. Please also sign in under the "Attendance at the PTG" if you plan to be in Denver, indicating the days you will be there. I am looking forward to see many of you in Denver and have a very productive PTG! Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From miguel at mlavalle.com Fri Aug 3 15:40:30 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 3 Aug 2018 10:40:30 -0500 Subject: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring In-Reply-To: References: <6345119E91D5C843A93D64F498ACFA136999ECF2@SHSMSX101.ccr.corp.intel.com> Message-ID: Forrest, Manjeet, Here you go: https://etherpad.openstack.org/p/neutron-stein-ptg Best regards On Wed, Aug 1, 2018 at 11:49 AM, Bhatia, Manjeet S < manjeet.s.bhatia at intel.com> wrote: > Hi, > > > > Yes, we need to refine spec for sure, once a consensus is reached focus > will be on implementation, > > Here’s implementation patch (WIP) https://review.openstack.org/#/c/584892/ > , we can’t really > > review api part until spec is finalized but, other stuff like config and > common issues can > > still be pointed out and progress can be made until consensus on api is > reached. Miguel, I think > > this will be added to etherpad for PTG discussions as well ? > > > > Thanks and Regards ! > > Manjeet > > > > > > > > > > *From:* Miguel Lavalle [mailto:miguel at mlavalle.com] > *Sent:* Tuesday, July 31, 2018 10:26 AM > *To:* Zhao, Forrest > *Cc:* OpenStack Development Mailing List openstack.org> > *Subject:* Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to > VF mirroring > > > > Hi Forrest, > > > > Yes, in my email, I was precisely referring to the work around > https://review.openstack.org/#/c/574477. Now that we are wrapping up > Rocky, I wanted to raise the visibility of this spec. I am glad you > noticed. This week we are going to cut our RC-1 and I don't anticipate that > we will have an RC-2 for Rocky. So starting next week, let's go back to > the spec and refine it, so we can start implementing in Stein as soon as > possible. Depending on how much progress we make in the spec, we may need > to schedule a discussion during the PTG in Denver, September 10 - 14, in > case face to face time is needed to reach an agreement. 
I know that Manjeet > is going to attend the PTG and he has already talked to me about this spec > in the recent past. So maybe Manjeet could be the conduit to represent this > spec in Denver, in case we need to talk about it there > > > > Best regards > > > > Miguel > > > > On Tue, Jul 31, 2018 at 4:12 AM, Zhao, Forrest > wrote: > > Hi Miguel, > > > > In your mail “PTL candidacy for the Stein cycle”, it mentioned that “port > mirroring for SR-IOV VF to VF mirroring” is within Stein goal. > > > > Could you tell where is the place to discuss the design for this feature? > Mailing list, IRC channel, weekly meeting or others? > > > > I was involved in its spec review at https://review.openstack.org/# > /c/574477/; but it has not been updated for a while. > > > > Thanks, > > Forrest > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Fri Aug 3 15:41:57 2018 From: aspiers at suse.com (Adam Spiers) Date: Fri, 3 Aug 2018 16:41:57 +0100 Subject: [openstack-dev] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff? In-Reply-To: References: Message-ID: <20180803154157.7h33v5pxdbbcmdtx@pacific.linksys.moosehall> [Adding openstack-sigs list too; apologies for the extreme cross-posting, but I think in this case the discussion deserves wide visibility. Happy to be corrected if there's a better way to handle this.] Hi James, James Page wrote: >Hi All > >tl;dr we (the original founders) have not managed to invest the time to get >the Upgrades SIG booted - time to hit reboot or time to poweroff? TL;DR response: reboot, absolutely no question! My full response is below. 
>Since Vancouver, two of the original SIG chairs have stepped down leaving >me in the hot seat with minimal participation from either deployment >projects or operators in the IRC meetings. In addition I've only been able >to make every 3rd IRC meeting, so they have generally not been happening. > >I think the current timing is not good for a lot of folk so finding a >better slot is probably a must-have if the SIG is going to continue - and >maybe moving to a monthly or bi-weekly schedule rather than the weekly slot >we have now. > >In addition I need some willing folk to help with leadership in the SIG. >If you have an interest and would like to help please let me know! > >I'd also like to better engage with all deployment projects - upgrades is >something that deployment tools should be looking to encapsulate as >features, so it would be good to get deployment projects engaged in the SIG >with nominated representatives. > >Based on the attendance in upgrades sessions in Vancouver and >developer/operator appetite to discuss all things upgrade at said sessions >I'm assuming that there is still interest in having a SIG for Upgrades but >I may be wrong! > >Thoughts? As a SIG leader in a similar position (albeit with one other very helpful person on board), let me throw my £0.02 in ... With both upgrades and self-healing I think there is a big disparity between supply (developers with time to work on the functionality) and demand (operators who need the functionality). And perhaps also the high demand leads to a lot of developers being interested in the topic whilst not having much spare time to help out. That is probably why we both see high attendance at the summit / PTG events but relatively little activity in between. 
I also freely admit that the inevitable conflicts with downstream requirements mean that I have struggled to find time to be as proactive with driving momentum as I had wanted, although I'm hoping to pick this up again over the next weeks leading up to the PTG. It sounds like maybe you have encountered similar challenges. That said, I strongly believe that both of these SIGs offer a *lot* of value, and even if we aren't yet seeing the level of online activity that we would like, I think it's really important that they both continue. If for no other reasons, the offline sessions at the summits and PTGs are hugely useful for helping converge the community on common approaches, and the associated repositories / wikis serve as a great focal point too. Regarding online collaboration, yes, building momentum for IRC meetings is tough, especially with the timezone challenges. Maybe a monthly cadence is a reasonable starting point, or twice a month in alternating timezones - but maybe with both meetings within ~24 hours of each other, to reduce accidental creation of geographic silos. Another possibility would be to offer "open clinic" office hours, like the TC and other projects have done. If the TC or anyone else has established best practices in this space, it'd be great to hear them. Either way, I sincerely hope that you decide to continue with the SIG, and that other people step up to help out. These things don't develop overnight but it is a tremendously worthwhile initiative; after all, everyone needs to upgrade OpenStack. Keep the faith! 
;-) Cheers, Adam From richwellum at gmail.com Fri Aug 3 16:04:01 2018 From: richwellum at gmail.com (Richard Wellum) Date: Fri, 3 Aug 2018 12:04:01 -0400 Subject: [openstack-dev] [openstack-helm] [vote] Core Reviewer nomination for Chris Wedgwood In-Reply-To: References: <7C64A75C21BB8D43BD75BB18635E4D896C958551@MOSTLS1MSGUSRFF.ITServices.sbc.com> Message-ID: +1 On Fri, Aug 3, 2018 at 11:39 AM Steve Wilkerson wrote: > +1 > > On Fri, Aug 3, 2018 at 10:05 AM, MCEUEN, MATT wrote: > >> OpenStack-Helm core reviewer team, >> >> I would like to nominate Chris Wedgwood as core review for the >> OpenStack-Helm. >> >> Chris is one of the most prolific reviewers in the OSH community, but >> more importantly is a very thorough and helpful reviewer. Many of my most >> insightful reviews are thanks to him, and I know the same is true for many >> other team members. In addition, he is an accomplished OSH engineer and >> has contributed features that run the gamut, including Ceph integration, >> Calico support, Neutron configuration, Gating, and core Helm-Toolkit >> functionality. >> >> Please consider this email my +1 vote. >> >> A +1 vote indicates that you are in favor of his core reviewer candidacy, >> and a -1 is a veto. Voting will be open for the next seven days (closing >> 8/10) or until all OpenStack-Helm core reviewers cast their vote. 
>> >> Thank you, >> Matt McEuen >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Aug 3 16:23:56 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 3 Aug 2018 11:23:56 -0500 Subject: [openstack-dev] [release] Release countdown for week R-3, August 6-10 Message-ID: <20180803162355.GA29171@sm-workstation>

Development Focus
-----------------

The Release Candidate (RC) deadline is this Thursday, the 9th. Work should be focused on any release-critical bugs and wrapping up any remaining feature work.

General Information
-------------------

All cycle-with-milestones and cycle-with-intermediary projects should cut their stable/rocky branch by the end of the week. This branch will track the Rocky release. Once stable/rocky has been created, master will be ready to switch to Stein development. While master will no longer be frozen, please prioritize any work necessary for completing Rocky plans. Please also keep in mind there will be rocky patches competing with any new Stein work to make it through the gate. Changes can be merged into stable/rocky as needed if deemed necessary for an RC2. Once Rocky is released, stable/rocky will also be ready for any stable point releases. 
Whether fixing something for another RC, or in preparation of a future stable release, fixes must be merged to master first, then backported to stable/rocky.

Actions
-------

cycle-with-milestones deliverables should post an RC1 to openstack/releases using the version format X.Y.Z.0rc1 along with branch creation from this point. The deliverable changes should look something like:

releases:
  - projects:
      - hash: 90f3ed251084952b43b89a172895a005182e6970
        repo: openstack/example
    version: 1.0.0.0rc1
branches:
  - name: stable/rocky
    location: 1.0.0.0rc1

Other cycle deliverables (not cycle-with-milestones) will look the same, but with your normal versioning. And another reminder, please add what highlights you want for your project team in the cycle highlights: http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html

Upcoming Deadlines & Dates
--------------------------

RC1 deadline: August 9
Stein PTG: September 10-14

-- Sean McGinnis (smcginnis) From sean.mcginnis at gmx.com Fri Aug 3 16:40:38 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 3 Aug 2018 11:40:38 -0500 Subject: [openstack-dev] [release] Release countdown for week R-3, August 6-10 In-Reply-To: <20180803162355.GA29171@sm-workstation> References: <20180803162355.GA29171@sm-workstation> Message-ID: <20180803164037.GB29171@sm-workstation> On Fri, Aug 03, 2018 at 11:23:56AM -0500, Sean McGinnis wrote: > ----------------- > More information on deadlines since we appear to have some conflicting information documented. According to the published release schedule: https://releases.openstack.org/rocky/schedule.html#r-finalrc we stated intermediary releases had to be done by the final RC date. So based on that, cycle-with-intermediary projects have until August 20 to do their final release. Of course, doing so before that deadline is highly encouraged to make sure there are not any last minute problems to work through, if at all possible. 
> > Upcoming Deadlines & Dates > -------------------------- > > RC1 deadline: August 9 cycle-with-intermediary deadline: August 20 > From sean.mcginnis at gmx.com Fri Aug 3 16:52:06 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 3 Aug 2018 11:52:06 -0500 Subject: [openstack-dev] [release][ptl] Missing and forced releases Message-ID: <20180803165205.GC29171@sm-workstation> Today the release team reviewed the rocky deliverables and their releases done so far this cycle. There are a few areas of concern right now.

Unreleased cycle-with-intermediary
==================================

There is a much longer list than we would like to see of cycle-with-intermediary deliverables that have not done any releases so far in Rocky. These deliverables should not wait until the very end of the cycle to release so that pending changes can be made available earlier and there are no last minute surprises. For owners of cycle-with-intermediary deliverables, please take a look at what you have merged that has not been released and consider doing a release ASAP. We are not far from the final deadline for these projects, but it would still be good to do a release ahead of that to be safe. Deliverables that miss the final deadline will be at risk of being dropped from the Rocky coordinated release.

Unreleased client libraries
===========================

The following client libraries have not done a release:

python-cloudkittyclient
python-designateclient
python-karborclient
python-magnumclient
python-searchlightclient*
python-senlinclient
python-tricircleclient

The deadline for client library releases was last Thursday, July 26. This coming Monday the release team will force a release on HEAD for these clients. * python-searchlightclient is currently planned on being dropped due to searchlight itself not having met the minimum of two milestone releases during the rocky cycle. 
Missing milestone 3
===================

The following projects missed tagging a milestone 3 release:

cinder
designate
freezer
mistral
searchlight

Following policy, a milestone 3 tag will be forced on HEAD for these
deliverables on Monday.

Freezer and searchlight missed previous milestone deadlines and will be
dropped from the Rocky coordinated release.

If there are any questions or concerns, please respond here or get ahold
of someone from the release management team in the #openstack-release
channel.

--
Sean McGinnis (smcginnis)

From openstack at nemebean.com  Fri Aug  3 16:58:29 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Fri, 3 Aug 2018 11:58:29 -0500
Subject: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core
Message-ID: 

Hi,

Zane has been doing some good work in oslo.service recently and I would
like to add him to the core team. I know he's got a lot on his plate
already, but he has taken the time to propose and review patches in
oslo.service and has demonstrated an understanding of the code.

Please respond with +1 or any concerns you may have. Thanks.

-Ben

From jdennis at redhat.com  Fri Aug  3 17:00:31 2018
From: jdennis at redhat.com (John Dennis)
Date: Fri, 3 Aug 2018 13:00:31 -0400
Subject: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core
In-Reply-To: 
References: 
Message-ID: <202c02fe-9c5d-7463-23c7-349a1dce8bf8@redhat.com>

On 08/03/2018 12:58 PM, Ben Nemec wrote:
> Hi,
>
> Zane has been doing some good work in oslo.service recently and I would
> like to add him to the core team. I know he's got a lot on his plate
> already, but he has taken the time to propose and review patches in
> oslo.service and has demonstrated an understanding of the code.
>
> Please respond with +1 or any concerns you may have. Thanks.
+1 -- John Dennis From doug at doughellmann.com Fri Aug 3 17:06:51 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 03 Aug 2018 13:06:51 -0400 Subject: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core In-Reply-To: References: Message-ID: <1533315981-sup-6258@lrrr.local> Excerpts from Ben Nemec's message of 2018-08-03 11:58:29 -0500: > Hi, > > Zane has been doing some good work in oslo.service recently and I would > like to add him to the core team. I know he's got a lot on his plate > already, but he has taken the time to propose and review patches in > oslo.service and has demonstrated an understanding of the code. > > Please respond with +1 or any concerns you may have. Thanks. > > -Ben > +1, and thanks, Zane! From jungleboyj at gmail.com Fri Aug 3 17:55:10 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 3 Aug 2018 12:55:10 -0500 Subject: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core In-Reply-To: References: Message-ID: On 8/3/2018 11:58 AM, Ben Nemec wrote: > Hi, > > Zane has been doing some good work in oslo.service recently and I > would like to add him to the core team.  I know he's got a lot on his > plate already, but he has taken the time to propose and review patches > in oslo.service and has demonstrated an understanding of the code. > > Please respond with +1 or any concerns you may have.  Thanks. > > -Ben > Not an Oslo Core but wanted to share my +1.  :-) > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kgiusti at gmail.com Fri Aug 3 18:21:14 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Fri, 3 Aug 2018 14:21:14 -0400 Subject: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core In-Reply-To: References: Message-ID: +1! 
On Fri, Aug 3, 2018 at 12:58 PM, Ben Nemec wrote: > Hi, > > Zane has been doing some good work in oslo.service recently and I would like > to add him to the core team. I know he's got a lot on his plate already, > but he has taken the time to propose and review patches in oslo.service and > has demonstrated an understanding of the code. > > Please respond with +1 or any concerns you may have. Thanks. > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Ken Giusti (kgiusti at gmail.com) From davanum at gmail.com Fri Aug 3 18:58:45 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Fri, 3 Aug 2018 14:58:45 -0400 Subject: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core In-Reply-To: References: Message-ID: +1 from me! On Fri, Aug 3, 2018 at 12:58 PM Ben Nemec wrote: > > Hi, > > Zane has been doing some good work in oslo.service recently and I would > like to add him to the core team. I know he's got a lot on his plate > already, but he has taken the time to propose and review patches in > oslo.service and has demonstrated an understanding of the code. > > Please respond with +1 or any concerns you may have. Thanks. 
> > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From doug at doughellmann.com Fri Aug 3 19:16:43 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 03 Aug 2018 15:16:43 -0400 Subject: [openstack-dev] [freezer][tc] removing freezer from governance Message-ID: <1533323716-sup-9361@lrrr.local> Based on the fact that the Freezer team missed the Rocky release and Stein PTL elections, I have proposed a patch to remove the project from governance. If the project is still being actively maintained and someone wants to take over leadership, please let us know here in this thread or on the patch. Doug https://review.openstack.org/#/c/588645/ From doug at doughellmann.com Fri Aug 3 19:17:44 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 03 Aug 2018 15:17:44 -0400 Subject: [openstack-dev] [searchlight][tc] removing searchlight from governance Message-ID: <1533323836-sup-8335@lrrr.local> Based on the fact that the Searchlight team missed the Rocky release and Stein PTL elections, I have proposed a patch to remove the project from governance. If the project is still being actively maintained and someone wants to take over leadership, please let us know here in this thread or on the patch. Doug https://review.openstack.org/#/c/588644/ From melwittt at gmail.com Fri Aug 3 19:35:35 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 3 Aug 2018 12:35:35 -0700 Subject: [openstack-dev] [nova][ptg] Stein PTG planning and Rocky retrospective etherpads Message-ID: Howdy folks, I think I forgot to send an email to alert everyone that we have a planning etherpad [1] for the Stein PTG where we're collecting topics of interest for discussion at the PTG. 
Please add your topics, and include your nick with your topics and
comments so we know who to talk to about each item.

In usual style, we also have a Rocky retrospective etherpad [2] where we
can fill in "what went well" and "what went not so well" to discuss at the
PTG, see whether we've made improvements in areas of concern from last
time, and gather concrete actions we can take to improve where we are not
doing as well as we could.

Cheers,
-melanie

[1] https://etherpad.openstack.org/p/nova-ptg-stein
[2] https://etherpad.openstack.org/p/nova-rocky-retrospective

From bogdan.katynski at workday.com  Sat Aug  4 00:29:05 2018
From: bogdan.katynski at workday.com (Bogdan Katynski)
Date: Sat, 4 Aug 2018 00:29:05 +0000
Subject: [openstack-dev] [magnum] supported OS images and magnum spawn
 failures for Swarm and Kubernetes
In-Reply-To: <282a7bf1-ae3e-335a-e1a1-69996276f731@binero.se>
References: <282a7bf1-ae3e-335a-e1a1-69996276f731@binero.se>
Message-ID: 

> On 3 Aug 2018, at 13:46, Tobias Urdin wrote:
>
> Kubernetes:
> * Master etcd does not start because /run/etcd does not exist

This could be an issue with the etcd rpm. With systemd, /run is an
in-memory tmpfs and is wiped on reboots. We've come across a similar issue
in the mariadb rpm on CentOS 7:

https://bugzilla.redhat.com/show_bug.cgi?id=1538066

If the etcd rpm only creates /run/etcd during installation, that directory
will not survive reboots. The rpm should also drop a file in
/usr/lib/tmpfiles.d/etcd.conf with contents similar to:

    d /run/etcd 0755 etcd etcd -

--
Bogdan Katyński
freenode: bodgix

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From amy at demarco.com  Sat Aug  4 01:15:30 2018
From: amy at demarco.com (Amy Marrich)
Date: Fri, 3 Aug 2018 20:15:30 -0500
Subject: [openstack-dev] New AUC Criteria
Message-ID: 

Are you an Active User Contributor (AUC)?
Well, you may be and not even know it!

Historically, AUCs met the following criteria:

- Organizers of Official OpenStack User Groups: from the Groups Portal
- Active members and contributors to functional teams and/or working
  groups (currently also manually calculated for WGs not using IRC): from
  IRC logs
- Moderators of any of the operators official meet-up sessions: currently
  manually calculated
- Contributors to any repository under the UC governance: from Gerrit
- Track chairs for OpenStack summits: from the Track Chair tool
- Contributors to Superuser (articles, interviews, user stories, etc.):
  from the Superuser backend
- Active moderators on ask.openstack.org: from Ask OpenStack

In July, the User Committee (UC) voted to add the following criteria to
becoming an AUC in order to meet the needs of the evolving OpenStack
Community. So in addition to the above ways, you can now earn AUC status
by meeting the following:

- User survey participants who completed a deployment survey
- Ops midcycle session moderators
- OpenStack Days organizers
- SIG members nominated by SIG leaders
- Active Women of OpenStack participants
- Active Diversity WG participants

Well, that's great, you have met the requirements to become an AUC, but
what does that mean? AUCs can run for open UC positions and can vote in
the elections. AUCs also receive a discounted $300 ticket for OpenStack
Summit as well as having the coveted AUC insignia on your badge!

And remember, nominations for the User Committee open on Monday, August 6
and end on August 17, with voting August 20 to August 24.

Amy Marrich (spotz)
User Committee
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From forrest.zhao at intel.com Sat Aug 4 01:18:37 2018 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Sat, 4 Aug 2018 01:18:37 +0000 Subject: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring In-Reply-To: References: <6345119E91D5C843A93D64F498ACFA136999ECF2@SHSMSX101.ccr.corp.intel.com> Message-ID: <6345119E91D5C843A93D64F498ACFA13699A0EE8@SHSMSX101.ccr.corp.intel.com> Hi Miguel, Can we put the proposed topics to this PTG etherpad directly? Or we should first discuss it in weekly Neutron project meeting? Please advise; then we’ll follow the process to propose the PTG topics. Thanks, Forrest From: Miguel Lavalle [mailto:miguel at mlavalle.com] Sent: Friday, August 3, 2018 11:41 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring Forrest, Manjeet, Here you go: https://etherpad.openstack.org/p/neutron-stein-ptg Best regards On Wed, Aug 1, 2018 at 11:49 AM, Bhatia, Manjeet S > wrote: Hi, Yes, we need to refine spec for sure, once a consensus is reached focus will be on implementation, Here’s implementation patch (WIP) https://review.openstack.org/#/c/584892/ , we can’t really review api part until spec if finalized but, other stuff like config and common issues can still be pointed out and progress can be made until consensus on api is reached. Miguel, I think this will be added to etherpad for PTG discussions as well ? Thanks and Regards ! Manjeet From: Miguel Lavalle [mailto:miguel at mlavalle.com] Sent: Tuesday, July 31, 2018 10:26 AM To: Zhao, Forrest > Cc: OpenStack Development Mailing List > Subject: Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring Hi Forrest, Yes, in my email, I was precisely referring to the work around https://review.openstack.org/#/c/574477. Now that we are wrapping up Rocky, I wanted to raise the visibility of this spec. I am glad you noticed. 
This week we are going to cut our RC-1 and I don't anticipate that we will will have a RC-2 for Rocky. So starting next week, let's go back to the spec and refine it, so we can start implementing in Stein as soon as possible. Depending on how much progress we make in the spec, we may need to schedule a discussion during the PTG in Denver, September 10 - 14, in case face to face time is needed to reach an agreement. I know that Manjeet is going to attend the PTG and he has already talked to me about this spec in the recent past. So maybe Manjeet could be the conduit to represent this spec in Denver, in case we need to talk about it there Best regards Miguel On Tue, Jul 31, 2018 at 4:12 AM, Zhao, Forrest > wrote: Hi Miguel, In your mail “PTL candidacy for the Stein cycle”, it mentioned that “port mirroring for SR-IOV VF to VF mirroring” is within Stein goal. Could you tell where is the place to discuss the design for this feature? Mailing list, IRC channel, weekly meeting or others? I was involved in its spec review at https://review.openstack.org/#/c/574477/; but it has not been updated for a while. Thanks, Forrest __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaronzhu1121 at gmail.com Sat Aug 4 02:33:33 2018 From: aaronzhu1121 at gmail.com (Rong Zhu) Date: Sat, 4 Aug 2018 10:33:33 +0800 Subject: [openstack-dev] [freezer][tc] removing freezer from governance In-Reply-To: <1533323716-sup-9361@lrrr.local> References: <1533323716-sup-9361@lrrr.local> Message-ID: Hi, all I think backup restore and disaster recovery is one the import things in OpenStack, And our company(ZTE) has already integrated freezer in our production. 
We have also built some features based on freezer, and we could push those
features to the community. Could you give us a chance to take over freezer
in the Stein cycle? If there is still no progress, we could take this
action after the Stein cycle.

Thank you for your consideration.

--
Thanks,
Rong Zhu

On Sat, Aug 4, 2018 at 3:16 AM Doug Hellmann wrote:

> Based on the fact that the Freezer team missed the Rocky release and
> Stein PTL elections, I have proposed a patch to remove the project from
> governance. If the project is still being actively maintained and
> someone wants to take over leadership, please let us know here in this
> thread or on the patch.
>
> Doug
>
> https://review.openstack.org/#/c/588645/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From joe at topjian.net  Sat Aug  4 16:17:59 2018
From: joe at topjian.net (Joe Topjian)
Date: Sat, 4 Aug 2018 10:17:59 -0600
Subject: [openstack-dev] [magnum] supported OS images and magnum spawn
 failures for Swarm and Kubernetes
In-Reply-To: <282a7bf1-ae3e-335a-e1a1-69996276f731@binero.se>
References: <282a7bf1-ae3e-335a-e1a1-69996276f731@binero.se>
Message-ID: 

We recently deployed Magnum and I've been making my way through getting
both Swarm and Kubernetes running. I also ran into some initial issues.
These notes may or may not help, but thought I'd share them in case:

* We're using Barbican for SSL. I have not tried with the internal
x509keypair.

* I was only able to get things running with Fedora Atomic 27,
specifically the version used in the Magnum docs:
https://docs.openstack.org/magnum/latest/install/launch-instance.html
Anything beyond that wouldn't even boot in my cloud. I haven't dug into
this.
* Kubernetes requires a Cluster Template to have a label of cert_manager_api=true set in order for the cluster to fully come up (at least, it didn't work for me until I set this). As far as troubleshooting methods go, check the cloud-init logs on the individual instances to see if any of the "parts" have failed to run. Manually re-run the parts on the command-line to get a better idea of why they failed. Review the actual script, figure out the variable interpolation and how it relates to the Cluster Template being used. Eventually I was able to get clusters running with the stock driver/templates, but wanted to tune them in order to better fit in our cloud, so I've "forked" them. This is in no way a slight against the existing drivers/templates nor do I recommend doing this until you reach a point where the stock drivers won't meet your needs. But I mention it because it's possible to do and it's not terribly hard. This is still a work-in-progress and a bit hacky: https://github.com/cybera/magnum-templates Hope that helps, Joe On Fri, Aug 3, 2018 at 6:46 AM, Tobias Urdin wrote: > Hello, > > I'm testing around with Magnum and have so far only had issues. > I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora Atomic 28) > and Kubernetes (on Fedora Atomic 27) and haven't been able to get it > working. > > Running Queens, is there any information about supported images? Is Magnum > maintained to support Fedora Atomic still? > What is in charge of population the certificates inside the instances, > because this seems to be the root of all issues, I'm not using Barbican but > the x509keypair driver > is that the reason? > > Perhaps I missed some documentation that x509keypair does not support what > I'm trying to do? 
> > I've seen the following issues: > > Docker: > * Master does not start and listen on TCP because of certificate issues > dockerd-current[1909]: Could not load X509 key pair (cert: > "/etc/docker/server.crt", key: "/etc/docker/server.key") > > * Node does not start with: > Dependency failed for Docker Application Container Engine. > docker.service: Job docker.service/start failed with result 'dependency'. > > Kubernetes: > * Master etcd does not start because /run/etcd does not exist > ** When that is created it fails to start because of certificate > 2018-08-03 12:41:16.554257 C | etcdmain: open /etc/etcd/certs/server.crt: > no such file or directory > > * Master kube-apiserver does not start because of certificate > unable to load server certificate: open /etc/kubernetes/certs/server.crt: > no such file or directory > > * Master heat script just sleeps forever waiting for port 8080 to become > available (kube-apiserver) so it can never kubectl apply the final steps. > > * Node does not even start and times out when Heat deploys it, probably > because master never finishes > > Any help is appreciated perhaps I've missed something crucial, I've not > tested Kubernetes on CoreOS yet. > > Best regards > Tobias > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
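Joe's "check the cloud-init logs, then manually re-run the parts" loop can
be sketched with a self-contained toy. On a real Fedora Atomic node the
rendered parts normally live under /var/lib/cloud/instance/scripts/ and
the combined output lands in /var/log/cloud-init-output.log (cloud-init
defaults; adjust for your image). Here we fabricate a failing part so the
example runs anywhere; the script name and cert path are illustrative:

```shell
# Fabricate a failing "part"; on a real node you would instead run one of
# the scripts under /var/lib/cloud/instance/scripts/ by hand.
part=$(mktemp)
cat > "$part" <<'EOF'
#!/bin/bash
# Stand-in for the kind of check that fails on the node, e.g.
# /etc/docker/server.crt missing => docker never starts.
CERT="${1:?usage: part <cert-path>}"
[ -f "$CERT" ] || { echo "missing $CERT"; exit 1; }
EOF
# bash -x traces every command, so the exact failing step is visible.
bash -x "$part" /no/such/server.crt || echo "part failed; trace above shows the failing step"
```

The same `bash -x` re-run against a real part shows the interpolated
variable values, which is usually enough to map the failure back to the
Cluster Template label or driver template responsible.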
URL: From mriedemos at gmail.com Sat Aug 4 21:25:58 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 4 Aug 2018 16:25:58 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <5B6363BA.9000900@windriver.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> <5B6363BA.9000900@windriver.com> Message-ID: <3f8c6957-f621-a8f1-6977-25974dfdd085@gmail.com> On 8/2/2018 3:04 PM, Chris Friesen wrote: > At a previous Summit[1] there were some operators that said they just > always ran nova-scheduler with debug logging enabled in order to deal > with this issue, but that it was a pain to isolate the useful logs from > the not-useful ones. Using CONF.trace [1] might eventually help to isolate / reduce some of that noise in the scheduler currently logged at DEBUG. [1] https://review.openstack.org/#/q/topic:bug/1620692+(status:open+OR+status:merged) -- Thanks, Matt From mriedemos at gmail.com Sat Aug 4 21:44:26 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 4 Aug 2018 16:44:26 -0500 Subject: [openstack-dev] [nova] tempest-full-py3 rename means we now run that job on test-only changes Message-ID: <26aed2af-f24f-24b9-ab9b-f83f11d3a63a@gmail.com> I've reported a nova bug for this: https://bugs.launchpad.net/nova/+bug/1785425 But I'm not sure what is the best way to fix it now with the zuul v3 hotness. We had an irrelevant-files entry in project-config for the tempest-full job but we don't have that for tempest-full-py3, so should we just rename that in project-config (guessing not)? 
Or should we do something in nova's .zuul.yaml like this (guessing yes): https://review.openstack.org/#/c/578878/ The former is easy and branchless but I'm guessing the latter is what we should do long-term (and would require backports to stable branches). -- Thanks, Matt From doug at doughellmann.com Sat Aug 4 22:59:39 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sat, 04 Aug 2018 18:59:39 -0400 Subject: [openstack-dev] [nova] tempest-full-py3 rename means we now run that job on test-only changes In-Reply-To: <26aed2af-f24f-24b9-ab9b-f83f11d3a63a@gmail.com> References: <26aed2af-f24f-24b9-ab9b-f83f11d3a63a@gmail.com> Message-ID: <1533423128-sup-6737@lrrr.local> Excerpts from Matt Riedemann's message of 2018-08-04 16:44:26 -0500: > I've reported a nova bug for this: > > https://bugs.launchpad.net/nova/+bug/1785425 > > But I'm not sure what is the best way to fix it now with the zuul v3 > hotness. We had an irrelevant-files entry in project-config for the > tempest-full job but we don't have that for tempest-full-py3, so should > we just rename that in project-config (guessing not)? Or should we do > something in nova's .zuul.yaml like this (guessing yes): > > https://review.openstack.org/#/c/578878/ > > The former is easy and branchless but I'm guessing the latter is what we > should do long-term (and would require backports to stable branches). > We don't want to rename the job, because we still want to run both jobs for a time. For what it's worth, as soon as Stein opens up I'm going to be working with the rest of the goal champions to propose patches to move the zuul settings for almost all jobs out of project-config and into each project repo, including all of the stable branches. If you add the settings to the config for nova in project-config before we start that, those patches will include the new settings, for the branches where the jobs run. 
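For context, the nova-side fix being discussed amounts to attaching an
irrelevant-files filter to the job variant in nova's .zuul.yaml. A rough
sketch of the shape (the file patterns here are illustrative only; the
real list should mirror what project-config carried for tempest-full):

```yaml
- project:
    check:
      jobs:
        - tempest-full-py3:
            irrelevant-files:
              - ^doc/.*$
              - ^releasenotes/.*$
              - ^nova/tests/.*$
```

With such a stanza in place, changes touching only the matched paths skip
the job rather than spending a full tempest run on test-only patches.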
Doug From mriedemos at gmail.com Sat Aug 4 23:16:06 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 4 Aug 2018 18:16:06 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <51750e57-9a9a-de88-0ab5-e63d8e511524@nemebean.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <90a5944a-085b-4cd2-d1b2-b490fc466bee@gmail.com> <6b96c555-57d9-4fda-a061-10ae9cf49f09@nemebean.com> <39e76be8-f3d2-09b6-54a7-b6c127f0aeb1@gmail.com> <51750e57-9a9a-de88-0ab5-e63d8e511524@nemebean.com> Message-ID: <1312d132-bdc4-3fb0-c441-12e5a027ea48@gmail.com> On 8/2/2018 10:07 AM, Ben Nemec wrote: >>> >>> Now it seems like I need to do: >>> >>> 1) Change disk_allocation_ratio in nova.conf >>> 2) Restart nova-scheduler, nova-compute, and nova-placement (or some >>> subset of those?) >> >> Restarting the placement service wouldn't have any effect here. > > Wouldn't I need to restart it if I wanted new resource providers to use > the new default? Placement doesn't use those options, nova does. nova-compute creates the compute node resource providers in placement. Placement is, basically, a global place to throw inventory and usage data and then get it back out. But never fear, we'll add state and plenty of enterprise application gorp to it over time. 
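Concretely, the change Ben describes lives entirely on the nova side: a
sketch of the nova.conf edit on the compute node (option value is just an
example):

```ini
# nova.conf on the compute node. Only nova reads this option; restarting
# the placement service itself has no effect on it.
[DEFAULT]
disk_allocation_ratio = 2.0
```

After editing, restart nova-compute so it re-reports its inventory (with
the new ratio) to placement, and nova-scheduler so it picks up any config
it caches; placement simply stores whatever inventory is reported next.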
-- Thanks, Matt From mriedemos at gmail.com Sat Aug 4 23:18:36 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 4 Aug 2018 18:18:36 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <5B646336.8070001@windriver.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> <5B6363BA.9000900@windriver.com> <97bfe7dc-eb25-bf30-7a84-6ef29105324e@gmail.com> <5B646336.8070001@windriver.com> Message-ID: <3a56c071-dc17-fe88-a63f-832f907bd6bd@gmail.com> On 8/3/2018 9:14 AM, Chris Friesen wrote: > I'm of two minds here. > > On the one hand, you have the case where the end user has accidentally > requested some combination of things that isn't normally available, and > they need to be able to ask the provider what they did wrong.  I agree > that this case is not really an exception, those resources were never > available in the first place. > > On the other hand, suppose the customer issues a valid request and it > works, and then issues the same request again and it fails, leading to a > violation of that customers SLA.  In this case I would suggest that it > could be considered an exception since the system is not delivering the > service that it was intended to deliver. 
As I'm sure you're aware Chris, it looks like StarlingX has a kind of post-mortem query utility to try and figure out where requested resources didn't end up yielding a resource provider (for a compute node): https://github.com/starlingx-staging/stx-nova/commit/71acfeae0d1c59fdc77704527d763bd85a276f9a#diff-94f87e728df6465becce5241f3da53c8R330 But as you noted way earlier in this thread, it might not be the actual reasons at the time of the failure and in a busy cloud could quickly change. -- Thanks, Matt From michael.glasgow at oracle.com Sat Aug 4 23:35:44 2018 From: michael.glasgow at oracle.com (Michael Glasgow) Date: Sat, 4 Aug 2018 18:35:44 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <97bfe7dc-eb25-bf30-7a84-6ef29105324e@gmail.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> <5B6363BA.9000900@windriver.com> <97bfe7dc-eb25-bf30-7a84-6ef29105324e@gmail.com> Message-ID: <4b9c4035-d706-7d55-c60c-567c405b5fe0@oracle.com> On 8/2/2018 7:27 PM, Jay Pipes wrote: > It's not an exception. It's normal course of events. NoValidHosts means > there were no compute nodes that met the requested resource amounts. To clarify, I didn't mean a python exception. I concede that I should've chosen a better word for the type of object I have in mind. > If a SELECT statement against an Oracle DB returns 0 rows, is that an > exception? No. Would an operator need to re-send the SELECT statement > with an EXPLAIN SELECT in order to get information about what indexes > were used to winnow the result set (to zero)? Yes. 
Either that, or the > operator would need to gradually re-execute smaller SELECT statements > containing fewer filters in order to determine which join or predicate > caused a result set to contain zero rows. I'm not sure if this analogy fully appreciates the perspective of the operator. You're correct of course that if you select on a db and the correct answer is zero rows, then zero rows is the right answer, 100% of the time. Whereas what I thought we meant when we talk about "debugging no valid host failures" is that zero rows is *not* the right answer, and yet you're getting zero rows anyway. So yes, absolutely with an Oracle DB you would get an ORA-XXXXX exception in that case, along with a trace file that told you where things went off the rails. Which is exactly what we don't have here. If I understand your perspective correctly, it's basically that placement is working as designed, so there's nothing more to do except pore over debug output. Can we consider: (1) that might not always be true if there are bugs (2) even when it is technically true, from the user's perspective, I'd posit that it's rare that a user requests an instance with the express intent of not launching an instance. (?) If they're "debugging" this issue, it means there's a misconfiguration or some unexpected state that they have to go find. So it is exceptional in that sense, and either the operator or the user is going to need to know why the request failed in a large majority of these cases. I would love to hear from any large operators on the list whether they feel that "turn on debug and try again" is really acceptable here. I'm not trying to be critical; I'm just convinced that once the cluster is of a certain size, that approach can start to become very expensive. 
-- Michael Glasgow From miguel at mlavalle.com Sun Aug 5 16:26:19 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 5 Aug 2018 11:26:19 -0500 Subject: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring In-Reply-To: <6345119E91D5C843A93D64F498ACFA13699A0EE8@SHSMSX101.ccr.corp.intel.com> References: <6345119E91D5C843A93D64F498ACFA136999ECF2@SHSMSX101.ccr.corp.intel.com> <6345119E91D5C843A93D64F498ACFA13699A0EE8@SHSMSX101.ccr.corp.intel.com> Message-ID: Hi Forrest, Please place your name / irc nick next to the topics you propose Regards On Fri, Aug 3, 2018 at 8:18 PM, Zhao, Forrest wrote: > Hi Miguel, > > > > Can we put the proposed topics to this PTG etherpad directly? Or we > should first discuss it in weekly Neutron project meeting? > > > > Please advise; then we’ll follow the process to propose the PTG topics. > > > > Thanks, > > Forrest > > > > *From:* Miguel Lavalle [mailto:miguel at mlavalle.com] > *Sent:* Friday, August 3, 2018 11:41 PM > *To:* OpenStack Development Mailing List (not for usage questions) < > openstack-dev at lists.openstack.org> > > *Subject:* Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to > VF mirroring > > > > Forrest, Manjeet, > > > > Here you go: https://etherpad.openstack.org/p/neutron-stein-ptg > > > > Best regards > > > > On Wed, Aug 1, 2018 at 11:49 AM, Bhatia, Manjeet S < > manjeet.s.bhatia at intel.com> wrote: > > Hi, > > > > Yes, we need to refine spec for sure, once a consensus is reached focus > will be on implementation, > > Here’s implementation patch (WIP) https://review.openstack.org/#/c/584892/ > , we can’t really > > review api part until spec if finalized but, other stuff like config and > common issues can > > still be pointed out and progress can be made until consensus on api is > reached. Miguel, I think > > this will be added to etherpad for PTG discussions as well ? > > > > Thanks and Regards ! 
> > Manjeet > > > > > > > > > > *From:* Miguel Lavalle [mailto:miguel at mlavalle.com] > *Sent:* Tuesday, July 31, 2018 10:26 AM > *To:* Zhao, Forrest > *Cc:* OpenStack Development Mailing List openstack.org> > *Subject:* Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to > VF mirroring > > > > Hi Forrest, > > > > Yes, in my email, I was precisely referring to the work around > https://review.openstack.org/#/c/574477. Now that we are wrapping up > Rocky, I wanted to raise the visibility of this spec. I am glad you > noticed. This week we are going to cut our RC-1 and I don't anticipate that > we will will have a RC-2 for Rocky. So starting next week, let's go back to > the spec and refine it, so we can start implementing in Stein as soon as > possible. Depending on how much progress we make in the spec, we may need > to schedule a discussion during the PTG in Denver, September 10 - 14, in > case face to face time is needed to reach an agreement. I know that Manjeet > is going to attend the PTG and he has already talked to me about this spec > in the recent past. So maybe Manjeet could be the conduit to represent this > spec in Denver, in case we need to talk about it there > > > > Best regards > > > > Miguel > > > > On Tue, Jul 31, 2018 at 4:12 AM, Zhao, Forrest > wrote: > > Hi Miguel, > > > > In your mail “PTL candidacy for the Stein cycle”, it mentioned that “port > mirroring for SR-IOV VF to VF mirroring” is within Stein goal. > > > > Could you tell where is the place to discuss the design for this feature? > Mailing list, IRC channel, weekly meeting or others? > > > > I was involved in its spec review at https://review.openstack.org/# > /c/574477/; but it has not been updated for a while. 
> > > > Thanks, > > Forrest > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Sun Aug 5 16:27:41 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 5 Aug 2018 11:27:41 -0500 Subject: [openstack-dev] [neutron] Stein PTG etherpad In-Reply-To: References: Message-ID: Dear Neutron members, I should have mentioned it before.... Please place your name / nick irc next to the topics you propose in the etherpad, so we can coordinate as the PTG approaches Thanks and regards Miguel On Fri, Aug 3, 2018 at 10:39 AM, Miguel Lavalle wrote: > Dear Stackers, > > I have started an etherpad to collect topic proposals to be discussed > during the PTG in Denver, September 10th - 14th: > https://etherpad.openstack.org/p/neutron-stein-ptg . Please feel free to > add your proposals under the "Proposed topics to be scheduled" section. > Please also sign in under the "Attendance at the PTG" if you plan to be in > Denver, indicating the days you will be there. > > I am looking forward to see many of you in Denver and have a very > productive PTG! > > Best regards > > Miguel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From glongwave at gmail.com Mon Aug 6 02:40:10 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Mon, 6 Aug 2018 10:40:10 +0800 Subject: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core In-Reply-To: References: Message-ID: +1 2018-08-04 2:58 GMT+08:00 Davanum Srinivas : > +1 from me! 
> On Fri, Aug 3, 2018 at 12:58 PM Ben Nemec wrote: > > > > Hi, > > > > Zane has been doing some good work in oslo.service recently and I would > > like to add him to the core team. I know he's got a lot on his plate > > already, but he has taken the time to propose and review patches in > > oslo.service and has demonstrated an understanding of the code. > > > > Please respond with +1 or any concerns you may have. Thanks. > > > > -Ben > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Davanum Srinivas :: https://twitter.com/dims > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From reedip14 at gmail.com Mon Aug 6 04:35:35 2018 From: reedip14 at gmail.com (reedip banerjee) Date: Mon, 6 Aug 2018 10:05:35 +0530 Subject: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring In-Reply-To: References: <6345119E91D5C843A93D64F498ACFA136999ECF2@SHSMSX101.ccr.corp.intel.com> <6345119E91D5C843A93D64F498ACFA13699A0EE8@SHSMSX101.ccr.corp.intel.com> Message-ID: Wondering if Tap-as-a-Service would play a role here? 
:) Related patch : https://review.openstack.org/#/c/584892/1 On Sun, Aug 5, 2018 at 9:57 PM Miguel Lavalle wrote: > Hi Forrest, > > Please place your name / irc nick next to the topics you propose > > Regards > > > > > On Fri, Aug 3, 2018 at 8:18 PM, Zhao, Forrest > wrote: > >> Hi Miguel, >> >> >> >> Can we put the proposed topics to this PTG etherpad directly? Or we >> should first discuss it in weekly Neutron project meeting? >> >> >> >> Please advise; then we’ll follow the process to propose the PTG topics. >> >> >> >> Thanks, >> >> Forrest >> >> >> >> *From:* Miguel Lavalle [mailto:miguel at mlavalle.com] >> *Sent:* Friday, August 3, 2018 11:41 PM >> *To:* OpenStack Development Mailing List (not for usage questions) < >> openstack-dev at lists.openstack.org> >> >> *Subject:* Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to >> VF mirroring >> >> >> >> Forrest, Manjeet, >> >> >> >> Here you go: https://etherpad.openstack.org/p/neutron-stein-ptg >> >> >> >> Best regards >> >> >> >> On Wed, Aug 1, 2018 at 11:49 AM, Bhatia, Manjeet S < >> manjeet.s.bhatia at intel.com> wrote: >> >> Hi, >> >> >> >> Yes, we need to refine spec for sure, once a consensus is reached focus >> will be on implementation, >> >> Here’s implementation patch (WIP) >> https://review.openstack.org/#/c/584892/ , we can’t really >> >> review api part until spec if finalized but, other stuff like config and >> common issues can >> >> still be pointed out and progress can be made until consensus on api is >> reached. Miguel, I think >> >> this will be added to etherpad for PTG discussions as well ? >> >> >> >> Thanks and Regards ! 
>> >> Manjeet >> >> >> >> >> >> >> >> >> >> *From:* Miguel Lavalle [mailto:miguel at mlavalle.com] >> *Sent:* Tuesday, July 31, 2018 10:26 AM >> *To:* Zhao, Forrest >> *Cc:* OpenStack Development Mailing List < >> openstack-dev at lists.openstack.org> >> *Subject:* Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to >> VF mirroring >> >> >> >> Hi Forrest, >> >> >> >> Yes, in my email, I was precisely referring to the work around >> https://review.openstack.org/#/c/574477. Now that we are wrapping up >> Rocky, I wanted to raise the visibility of this spec. I am glad you >> noticed. This week we are going to cut our RC-1 and I don't anticipate that >> we will will have a RC-2 for Rocky. So starting next week, let's go back to >> the spec and refine it, so we can start implementing in Stein as soon as >> possible. Depending on how much progress we make in the spec, we may need >> to schedule a discussion during the PTG in Denver, September 10 - 14, in >> case face to face time is needed to reach an agreement. I know that Manjeet >> is going to attend the PTG and he has already talked to me about this spec >> in the recent past. So maybe Manjeet could be the conduit to represent this >> spec in Denver, in case we need to talk about it there >> >> >> >> Best regards >> >> >> >> Miguel >> >> >> >> On Tue, Jul 31, 2018 at 4:12 AM, Zhao, Forrest >> wrote: >> >> Hi Miguel, >> >> >> >> In your mail “PTL candidacy for the Stein cycle”, it mentioned that “port >> mirroring for SR-IOV VF to VF mirroring” is within Stein goal. >> >> >> >> Could you tell where is the place to discuss the design for this feature? >> Mailing list, IRC channel, weekly meeting or others? >> >> >> >> I was involved in its spec review at >> https://review.openstack.org/#/c/574477/; but it has not been updated >> for a while. 
>> >> >> >> Thanks, >> Forrest >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanks and Regards, Reedip Banerjee IRC: reedip -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Mon Aug 6 04:46:55 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Mon, 6 Aug 2018 12:46:55 +0800 Subject: [openstack-dev] [OpenStack-dev][heat][keystone][security sig][all] SSL option for keystone session Message-ID: Hi all I would like to trigger a discussion on providing SSL content directly to the Keystone session. Since all teams use SSL, I believe this may concern other projects as well. As we consider implementing a custom SSL option for Heat remote stack [3] (and multicloud support [1]), I'm trying to figure out the best solution for this. The current SSL option in the Keystone session doesn't allow us to provide a cert/key string directly; it only allows us to provide a cert/key file path. This is actually a limitation of Python versions below 3.7 ([2]). As we are not going to easily get rid of earlier Python versions, we are trying to figure out the best solution we can take here. Some approaches we can think about: using a pipe, or creating a file, encrypting it, and sending the file path to the Keystone session. I would like to hear advice or suggestions from everyone on how we can approach this. 
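A minimal sketch of the file-based workaround mentioned above, assuming the PEM material is already held in memory as a string (the helper name and cleanup strategy are illustrative, not an existing API); the yielded path is what would be handed to the session's cert option:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def as_temp_file(contents):
    """Write an in-memory PEM string to a private temp file and yield its path."""
    fd, path = tempfile.mkstemp(suffix=".pem")
    try:
        os.fchmod(fd, 0o600)          # keep the material readable only by us (POSIX)
        os.write(fd, contents.encode())
        os.close(fd)
        yield path
    finally:
        os.unlink(path)               # remove the file once the session is built

cert_pem = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"

with as_temp_file(cert_pem) as cert_path:
    # A keystoneauth session would receive cert=cert_path here.
    assert open(cert_path).read() == cert_pem
```

Real code would still need to decide how long the file must live (the session may re-read it), which is part of why this workaround is awkward.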
[1] https://etherpad.openstack.org/p/ptg-rocky-multi-cloud [2] https://www.python.org/dev/peps/pep-0543/ [3] https://review.openstack.org/#/c/480923/ -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Aug 6 07:32:18 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Aug 2018 16:32:18 +0900 Subject: [openstack-dev] [nova] tempest-full-py3 rename means we now run that job on test-only changes In-Reply-To: <26aed2af-f24f-24b9-ab9b-f83f11d3a63a@gmail.com> References: <26aed2af-f24f-24b9-ab9b-f83f11d3a63a@gmail.com> Message-ID: <1650e26a759.c6896d9a33891.1629521529539046579@ghanshyammann.com> ---- On Sun, 05 Aug 2018 06:44:26 +0900 Matt Riedemann wrote ---- > I've reported a nova bug for this: > > https://bugs.launchpad.net/nova/+bug/1785425 > > But I'm not sure what is the best way to fix it now with the zuul v3 > hotness. We had an irrelevant-files entry in project-config for the > tempest-full job but we don't have that for tempest-full-py3, so should > we just rename that in project-config (guessing not)? Or should we do > something in nova's .zuul.yaml like this (guessing yes): > > https://review.openstack.org/#/c/578878/ > > The former is easy and branchless but I'm guessing the latter is what we > should do long-term (and would require backports to stable branches). Yeah, tempest-full-py3 does not have a nova-specific irrelevant-files entry defined on the project-config side. Just for background, the same issue existed for other jobs like tempest-full and the grenade job, where tempest-full used to run on doc/test-only changes as well [1]; that was fixed after making the 'files' and 'irrelevant-files' settings overridable in zuul [2]. IMO the same solution can be applied to tempest-full-py3 too; I have pushed a patch for that [3]. 
For new jobs, I feel we should always plan to define them in nova's .zuul.yaml, and the old entries on the project-config side can be moved to the nova side during the job migration work. [1] https://bugs.launchpad.net/nova/+bug/1745405 https://bugs.launchpad.net/nova/+bug/1745431 [2] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131304.html [3] https://review.openstack.org/#/c/589039/ -gmann > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Mon Aug 6 08:02:47 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Aug 2018 17:02:47 +0900 Subject: [openstack-dev] Should we add a tempest-slow job? In-Reply-To: References: <9b338d82-bbcf-f6c0-9ba0-9a402838d958@gmail.com> Message-ID: <1650e428d6d.116a733da29855.5869427211847940037@ghanshyammann.com> ---- On Fri, 27 Jul 2018 00:14:04 +0900 Matt Riedemann wrote ---- > On 5/13/2018 9:06 PM, Ghanshyam Mann wrote: > >> +1 on idea. As of now slow marked tests are from nova, cinder and > >> neutron scenario tests and 2 API swift tests only [4]. I agree that > >> making a generic job in tempest is better for maintainability. We can > >> use existing job for that with below modification- > >> - We can migrate > >> "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" job > >> zuulv3 in tempest repo > >> - We can see if we can move migration tests out of it and use > >> "nova-live-migration" job (in tempest check pipeline ) which is much > >> better in live migration env setup and controlled by nova. > >> - then it can be name something like > >> "tempest-scenario-multinode-lvm-multibackend". > >> - run this job in nova, cinder, neutron check pipeline instead of experimental. 
> > Like this -https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:scenario-tests-job > > > > That makes scenario job as generic with running all scenario tests > > including slow tests with concurrency 2. I made few cleanup and moved > > live migration tests out of it which is being run by > > 'nova-live-migration' job. Last patch making this job as voting on > > tempest side. > > > > If looks good, we can use this to run on project side pipeline as voting. > > > > -gmann > > > > I should have said something earlier, but I've said it on my original > nova change now: > > https://review.openstack.org/#/c/567697/ > > What was implemented in Tempest isn't really at all what I was going > for, especially since it doesn't run the API tests marked 'slow'. All I > want is a job like tempest-full (which excludes slow tests) to be > tempest-full which *only* runs slow tests. They would run a mutually > exclusive set of tests so we have that coverage. I don't care if the > scenario tests are run in parallel or serial (it's probably best to > start in serial like tempest-full today and then change to parallel > later if that settles down). > > But I think it's especially important given: > > https://review.openstack.org/#/c/567697/2 > > That we have a job which only runs slow tests because we're going to be > marking more tests as "slow" pretty soon and we don't need the overlap > with the existing tests that are run in tempest-full. Agree with your point. We have tempest-slow job now available on tempest side to use across projects[1]. 
I have updated this - https://review.openstack.org/#/c/567697 [1] https://github.com/openstack/tempest/blob/b2b666bd4b9aab08d0b7724c1f0b7465adde0d8d/.zuul.yaml#L146 -gmann > > -- > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dougal at redhat.com Mon Aug 6 09:23:04 2018 From: dougal at redhat.com (Dougal Matthews) Date: Mon, 6 Aug 2018 10:23:04 +0100 Subject: [openstack-dev] [mistral] Clearing out old gerrit reviews In-Reply-To: References: Message-ID: On 3 August 2018 at 10:45, Dougal Matthews wrote: > On 9 July 2018 at 16:13, Dougal Matthews wrote: > >> Hey folks, >> >> I'd like to propose that we start abandoning old Gerrit reviews. >> >> This report shows how stale and out of date some of the reviews are: >> http://stackalytics.com/report/reviews/mistral-group/open >> >> I would like to initially abandon anything without any activity for a >> year, but we might want to consider a shorter limit - maybe 6 months. >> Reviews can be restored, so the risk is low. >> >> What do you think? Any objections or counter suggestions? >> >> If I don't hear any complaints, I'll go ahead with this next week (or >> maybe the following week). >> > > That time line was ambitious. I didn't get started :-) > > However, I did decide it would be best to formalise this plan somewhere. > So I quickly wrote up the plan in a Mistral policy spec. If we can agree > there and merge it, then I'll go ahead and start the cleanup. > > https://review.openstack.org/#/c/588492/ > The spec merged today, so I did a first pass and abandoned 24 reviews that met the criteria. It will be interesting to see if any of them are restored. 
Dougal > > > >> >> Cheers, >> Dougal >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mchandras at suse.de Mon Aug 6 09:35:55 2018 From: mchandras at suse.de (Markos Chandras) Date: Mon, 6 Aug 2018 12:35:55 +0300 Subject: [openstack-dev] [openstack-ansible] Proposing Jonathan Rosser as core reviewer In-Reply-To: <58ff-5b5ec980-31-29cd3a00@223498964> References: <58ff-5b5ec980-31-29cd3a00@223498964> Message-ID: On 07/30/2018 11:16 AM, jean-philippe at evrard.me wrote: > Hello everyone, > > I'd like to propose Jonathan Rosser (jrosser) as core reviewer for OpenStack-Ansible. > The BBC team [1] has been very active recently across the board, but worked heavily in our ops repo, making sure the experience is complete for operators. > > I value Jonathan's opinion (I remember the storage backend conversations for lxc/systemd-nspawn!), and I'd like this positive trend to continue. On top of it Jonathan has been recently reviewing quite a series of patches, and is involved into some of our important work: bringing the Bionic support. > > Best regards, > Jean-Philippe Evrard (evrardjp) +1 Jonathan will be a valuable addition to the project. -- markos SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg From renat.akhmerov at gmail.com Mon Aug 6 09:37:35 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 6 Aug 2018 16:37:35 +0700 Subject: [openstack-dev] [mistral] Clearing out old gerrit reviews In-Reply-To: References: Message-ID: <2e3873be-f0ea-4e11-86f2-4023cfe60842@Spark> Awesome! We need to do it periodically :) It’ll be easier to sort out patches. 
Thanks Renat Akhmerov @Nokia On 6 Aug 2018, 16:24 +0700, Dougal Matthews , wrote: > > > > On 3 August 2018 at 10:45, Dougal Matthews wrote: > > > > On 9 July 2018 at 16:13, Dougal Matthews wrote: > > > > > Hey folks, > > > > > > > > > > I'd like to propose that we start abandoning old Gerrit reviews. > > > > > > > > > > This report shows how stale and out of date some of the reviews are: > > > > > http://stackalytics.com/report/reviews/mistral-group/open > > > > > > > > > > I would like to initially abandon anything without any activity for a year, but we might want to consider a shorter limit - maybe 6 months. Reviews can be restored, so the risk is low. > > > > > > > > > > What do you think? Any objections or counter suggestions? > > > > > > > > > > If I don't hear any complaints, I'll go ahead with this next week (or maybe the following week). > > > > > > > > That time line was ambitious. I didn't get started :-) > > > > > > > > However, I did decide it would be best to formalise this plan somewhere. So I quickly wrote up the plan in a Mistral policy spec. If we can agree there and merge it, then I'll go ahead and start the cleanup. > > > > > > > > https://review.openstack.org/#/c/588492/ > > > > The spec merged today, so I did a first pass and abandoned 24 reviews that met the criteria. > > > > It will be interesting to see if any of them are restored. > > > > Dougal > > > > > > > > > > > > > > > > > > > > > > Cheers, > > > > > Dougal > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Mon Aug 6 10:23:33 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Aug 2018 19:23:33 +0900 Subject: [openstack-dev] [opensatck-dev][qa][barbican][novajoin][networking-fortinet][vmware-nsx] Dependency of Tempest changes Message-ID: <1650ec36ecf.10b0df76a34653.8050329285896349825@ghanshyammann.com> Hi All, Tempest patch [1] removes the deprecated config option for the volume v1 API, and it has a dependency on many plugins. I have proposed patches to each plugin using that option [2] to stop using it, so that their gates will not be broken when the Tempest patch merges. I have also made the Tempest patch depend on each plugin's commit. Many of those dependent patches have merged, but 4 patches have been hanging around for a long time, which is blocking the Tempest change from merging. Below are the plugins which have not merged the changes: barbican-tempest-plugin - https://review.openstack.org/#/c/573174/ novajoin-tempest-plugin - https://review.openstack.org/#/c/573175/ networking-fortinet - https://review.openstack.org/#/c/573170/ vmware-nsx-tempest-plugin - https://review.openstack.org/#/c/573172/ I want to merge this Tempest patch in the Rocky release, which I am planning to do next week. To make that happen, we have to merge the Tempest patch soon. If the above patches are not merged by the plugin teams within 2-3 days, which would suggest those plugins are not active or do not care about their gate, I am going to remove their dependency from the Tempest patch and merge it. 
[1] https://review.openstack.org/#/c/573135/ [2] https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged) -gmann From dtantsur at redhat.com Mon Aug 6 10:39:12 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 6 Aug 2018 12:39:12 +0200 Subject: [openstack-dev] [release] Release countdown for week R-3, August 6-10 In-Reply-To: <20180803164037.GB29171@sm-workstation> References: <20180803162355.GA29171@sm-workstation> <20180803164037.GB29171@sm-workstation> Message-ID: On 08/03/2018 06:40 PM, Sean McGinnis wrote: > On Fri, Aug 03, 2018 at 11:23:56AM -0500, Sean McGinnis wrote: >> ----------------- >> > > More information on deadlines since we appear to have some conflicting > information documented. According to the published release schedule: > > https://releases.openstack.org/rocky/schedule.html#r-finalrc > > we stated intermediary releases had to be done by the final RC date. So based > on that, cycle-with-intermediary projects have until August 20 to do their > final release. Another hint though: if your project uses grenade, you probably want to have stable/rocky at the same time as everyone else. > > Of course, doing before that deadline is highly encouraged to make sure there > are not any last minute problems to work through, if at all possible. 
> >> >> Upcoming Deadlines & Dates >> -------------------------- >> >> RC1 deadline: August 9 > cycle-with-intermediary deadline: August 20 > >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From juliaashleykreger at gmail.com Mon Aug 6 12:31:11 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 6 Aug 2018 08:31:11 -0400 Subject: [openstack-dev] [ironic] Stein PTG Planning Etherpad Message-ID: Greetings everyone, A few weeks ago I created an etherpad[1] to begin discussion of ideas and thoughts for items to discuss during the PTG. I've raised this during our meetings, but not yet raised it to the mailing list. If you are interested, please feel free to add discussion items, comment, or provide additional context if it is something you care about. Please do so by August 23rd. Thanks! 
-Julia [1]: https://etherpad.openstack.org/p/ironic-stein-ptg From doug at doughellmann.com Mon Aug 6 12:46:18 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 06 Aug 2018 08:46:18 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <1526302110-sup-4784@lrrr.local> References: <1521110096-sup-3634@lrrr.local> <1521662425-sup-1628@lrrr.local> <1521749386-sup-1944@lrrr.local> <1522007989-sup-4653@lrrr.local> <1526302110-sup-4784@lrrr.local> Message-ID: <1533559182-sup-1666@lrrr.local> Excerpts from Doug Hellmann's message of 2018-05-14 08:52:08 -0400: > Excerpts from Doug Hellmann's message of 2018-03-25 16:04:11 -0400: > > Excerpts from Doug Hellmann's message of 2018-03-22 16:16:06 -0400: > > > Excerpts from Doug Hellmann's message of 2018-03-21 16:02:06 -0400: > > > > Excerpts from Doug Hellmann's message of 2018-03-15 07:03:11 -0400: > > > > > > > > > > TL;DR > > > > > ----- > > > > > > > > > > Let's stop copying exact dependency specifications into all our > > > > > projects to allow them to reflect the actual versions of things > > > > > they depend on. The constraints system in pip makes this change > > > > > safe. We still need to maintain some level of compatibility, so the > > > > > existing requirements-check job (run for changes to requirements.txt > > > > > within each repo) will change a bit rather than going away completely. > > > > > We can enable unit test jobs to verify the lower constraint settings > > > > > at the same time that we're doing the other work. > > > > > > > > The new job definition is in https://review.openstack.org/555034 and I > > > > have updated the oslo.config patch I mentioned before to use the new job > > > > instead of one defined in the oslo.config repo (see > > > > https://review.openstack.org/550603). > > > > > > > > I'll wait for that job patch to be reviewed and approved before I start > > > > adding the job to a bunch of other repositories. 
> > > > > > > > Doug > > > > > > The job definition for openstack-tox-lower-constraints [1] was approved > > > today (thanks AJaegar and pabelenger). > > > > > > I have started proposing the patches to add that job to the repos listed > > > in openstack/requirements/projects.txt using the topic > > > "requirements-stop-syncing" [2]. I hope to have the rest of those > > > proposed by the end of the day tomorrow, but since they have to run in > > > batches I don't know if that will be possible. > > > > > > The patch to remove the update proposal job is ready for review [3]. > > > > > > As is the patch to allow project requirements to diverge by changing the > > > rules in the requirements-check job [4]. > > > > > > We ran into a snag with a few of the jobs for projects that rely on > > > having service projects installed. There have been a couple of threads > > > about that recently, but Monty has promised to start another one to > > > provide all of the necessary context so we can fix the issues and move > > > ahead. > > > > > > Doug > > > > > > > All of the patches to define the lower-constraints test jobs have been > > proposed [1], and many have already been approved and merged (thank you > > for your quick reviews). > > > > A few of the jobs are failing because the projects depend on installing > > some other service from source. We will work out what to do with those > > when we solve that problem in a more general way. > > > > A few of the jobs failed because the dependencies were wrong. In a few > > cases I was able to figure out what was wrong, but I can use some help > > from project teams more familiar with the code bases to debug the > > remaining failures. > > > > In a few cases projects didn't have python 3 unit test jobs, so I > > configured the new job to use python 2. Teams should add a step to their > > python 3 migration plan to update the version of python used in the new > > job, when that is possible. 
> > > > I believe we are now ready to proceed with updating the > > requirements-check job to relax the rules about which changes are > > allowed [2]. > > > > Doug > > > > [1] https://review.openstack.org/#/q/topic:requirements-stop-syncing+status:open > > [2] https://review.openstack.org/555402 > > We still have about 50 open patches related to adding the > lower-constraints test job. I'll keep those open until the third > milestone of the Rocky development cycle, and then abandon the rest to > clear my gerrit view so it is usable again. > > If you want to add lower-constraints tests to your project and have > an open patch in the list [1], please take it over and fix the > settings then approve the patch (the fix usually involves making > the values in lower-constraints.txt match the values in the various > requirements.txt files). > > If you don't want the job, please leave a comment on the patch to > tell me and I will abandon it. > > Doug As mentioned in my earlier email, I have abandoned the ~30 reviews that remained open this morning. Please do feel free to restore those and take them over if you want the job. Doug From bdobreli at redhat.com Mon Aug 6 13:19:11 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 6 Aug 2018 15:19:11 +0200 Subject: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO In-Reply-To: References: Message-ID: <9bd08898-b667-47c0-4b18-2e50de5ea406@redhat.com> +1 On 8/1/18 1:31 PM, Giulio Fidente wrote: > Hi, > > I would like to propose Lukas Bezdicka core on TripleO. > > Lukas did a lot work in our tripleoclient, tripleo-common and > tripleo-heat-templates repos to make FFU possible. > > FFU, which is meant to permit upgrades from Newton to Queens, requires > in depth understanding of many TripleO components (for example Heat, > Mistral and the TripleO client) but also of specific TripleO features > which were added during the course of the three releases (for example > config-download and upgrade tasks). 
I believe his FFU work to have been > very challenging. > > Given his broad understanding, more recently Lukas started helping doing > reviews in other areas. > > I am so sure he'll be a great addition to our group that I am not even > looking for comments, just votes :D > -- Best regards, Bogdan Dobrelya, Irc #bogdando From liliueecg at gmail.com Mon Aug 6 13:36:13 2018 From: liliueecg at gmail.com (Li Liu) Date: Mon, 6 Aug 2018 09:36:13 -0400 Subject: [openstack-dev] [cyborg] Cyborg Driver Sub Team Meeting on ZOOM this week Message-ID: Hi Team, The Cyborg Driver Sub Team Meeting will be using ZOOM this week at 10AM Eastern Time Monday The Joining url is -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From liliueecg at gmail.com Mon Aug 6 13:36:44 2018 From: liliueecg at gmail.com (Li Liu) Date: Mon, 6 Aug 2018 09:36:44 -0400 Subject: [openstack-dev] [cyborg] Cyborg Driver Sub Team Meeting on ZOOM this week Message-ID: Hi Team, The Cyborg Driver Sub Team Meeting will be using ZOOM this week at 10AM Eastern Time Monday The Joining url is https://zoom.us/j/236172152 -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From pratapagoutham at gmail.com Mon Aug 6 13:42:17 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Mon, 6 Aug 2018 19:12:17 +0530 Subject: [openstack-dev] [tempest] Small doubt in Tempest setup Message-ID: Hi all, This is regarding Tempest setup I have cloned and setup my tempest i could run my tests with '*nosetests*' also but when i try to run with *testr* im getting *$ testr list-tests * *No .testr.conf config file* any idea why it is occurring and any idea how to fix it will really help.. Thanks in advance. -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jaypipes at gmail.com Mon Aug 6 13:48:34 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 6 Aug 2018 09:48:34 -0400 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <4b9c4035-d706-7d55-c60c-567c405b5fe0@oracle.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> <5B6363BA.9000900@windriver.com> <97bfe7dc-eb25-bf30-7a84-6ef29105324e@gmail.com> <4b9c4035-d706-7d55-c60c-567c405b5fe0@oracle.com> Message-ID: <7f2eda97-ba9d-11c6-47ba-5a63ea9a9c97@gmail.com> On 08/04/2018 07:35 PM, Michael Glasgow wrote: > On 8/2/2018 7:27 PM, Jay Pipes wrote: >> It's not an exception. It's normal course of events. NoValidHosts >> means there were no compute nodes that met the requested resource >> amounts. > > To clarify, I didn't mean a python exception. Neither did I. I was referring to exceptional behaviour, not a Python exception. > I concede that I should've chosen a better word for the type of > object I have in mind. > >> If a SELECT statement against an Oracle DB returns 0 rows, is that an >> exception? No. Would an operator need to re-send the SELECT statement >> with an EXPLAIN SELECT in order to get information about what indexes >> were used to winnow the result set (to zero)? Yes. Either that, or the >> operator would need to gradually re-execute smaller SELECT statements >> containing fewer filters in order to determine which join or predicate >> caused a result set to contain zero rows. > > I'm not sure if this analogy fully appreciates the perspective of the > operator.
You're correct of course that if you select on a db and the > correct answer is zero rows, then zero rows is the right answer, 100% of > the time. > > Whereas what I thought we meant when we talk about "debugging no valid > host failures" is that zero rows is *not* the right answer, and yet > you're getting zero rows anyway. No, "debugging no valid host failures" doesn't mean that zero rows is the wrong answer. It means "find out why Nova thinks there's nowhere that my instance will fit". > So yes, absolutely with an Oracle DB you would get an ORA-XXXXX > exception in that case, along with a trace file that told you where > things went off the rails. Which is exactly what we don't have > here. That is precisely the opposite of what I was saying. Again, getting no results is *not* an error. It's normal behaviour and indicates there were no compute hosts that met the requirements of the request. This is not an error or exceptional behaviour. It's simply the result of a query against the placement database. If you get zero rows returned, that means you need to determine what part of your request caused the winnowed result set to go from >0 rows to 0 rows. And what we've been discussing is exactly the process by which such an investigation could be done. There are two options: do the investigation *inline* as part of the original request or do it *offline* after the original request returns 0 rows. Doing it inline means splitting the large query we currently construct into multiple queries (for each related group of requested resources and/or traits) and logging the number of results grabbed for each of those queries. Doing it offline means developing some diagnostic tool that an operator could run (similar to what Windriver did with [1]). The issue with that is that the diagnostic tool can only represent the resource usage at the time the diagnostic tool was run, not when the original request that returned 0 rows ran.
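[Editor's note] The *inline* option described above — run the winnowing as a sequence of narrower filters and log how many candidates survive each one — can be illustrated with a toy sketch. This is purely hypothetical code: the host inventory, filter loop, and function names are invented for illustration and are not placement's real schema or APIs.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
LOG = logging.getLogger("placement-diag")

# Hypothetical inventory: available resources per compute node.
HOSTS = {
    "node1": {"VCPU": 8, "MEMORY_MB": 16384, "DISK_GB": 100},
    "node2": {"VCPU": 2, "MEMORY_MB": 4096, "DISK_GB": 500},
    "node3": {"VCPU": 16, "MEMORY_MB": 32768, "DISK_GB": 20},
}

def filter_hosts(request):
    """Apply one resource constraint at a time, logging the survivors.

    The log line that drops the candidate count to 0 tells you which
    part of the request made the result set empty -- without treating
    the empty result as an error.
    """
    candidates = set(HOSTS)
    for resource, amount in request.items():
        candidates = {h for h in candidates
                      if HOSTS[h].get(resource, 0) >= amount}
        LOG.info("after requiring %s>=%s: %d candidate(s)",
                 resource, amount, len(candidates))
    return candidates

# A request no host can satisfy: node2 fails on VCPU, and the DISK_GB
# constraint then eliminates node1 (100 GB) and node3 (20 GB).
result = filter_hosts({"VCPU": 4, "MEMORY_MB": 8192, "DISK_GB": 200})
print(sorted(result))  # []
```

The same idea applied to real SQL would mean issuing one query per resource/trait group and logging each intermediate row count, instead of one large joined query that silently returns zero rows.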
[1] https://github.com/starlingx-staging/stx-nova/commit/71acfeae0d1c59fdc77704527d763bd85a276f9a#diff-94f87e728df6465becce5241f3da53c8R330 > If I understand your perspective correctly, it's basically that > placement is working as designed, so there's nothing more to do except > pore over debug output. Can we consider: > > (1) that might not always be true if there are bugs Bugs in the placement service are an entirely separate issue. They do occur, of course, but we're not talking about that here. > (2) even when it is technically true, from the user's perspective, I'd > posit that it's rare that a user requests an instance with the express > intent of not launching an instance. (?) If they're "debugging" this > issue, it means there's a misconfiguration or some unexpected state that > they have to go find. Depends on what you have in mind as a "user". If I launch an instance in an AWS region, I'd be very surprised if the service told me there was nowhere to place my instance unless of course I'd asked it to launch an instance with requirements that exceeded AWS' ability to launch. If you're talking about a user of a private IT cloud with a single rack of compute hosts, that user might very well expect to see a return of "sorry mate, there's nowhere to put your request right now.". There is no explicit or implicit SLA or guarantee that Nova needs to somehow create a place to put an instance when no such place exists to put the instance. Best, -jay From afazekas at redhat.com Mon Aug 6 13:49:32 2018 From: afazekas at redhat.com (Attila Fazekas) Date: Mon, 6 Aug 2018 15:49:32 +0200 Subject: [openstack-dev] [tempest] Small doubt in Tempest setup In-Reply-To: References: Message-ID: Please use ostestr or stestr instead of testr. $ git clone https://github.com/openstack/tempest $ cd tempest/ $ stestr --list $ ostestr -l #old way, also worked These tools handle the config creation implicitly.
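[Editor's note] The "No .testr.conf config file" error in the original question simply means plain testr could not find its config file, which stestr/ostestr generate or bypass for you. For reference, an OpenStack-style .testr.conf stanza looked roughly like the sketch below; treat it as illustrative (the test path and options vary by project and by tempest version):

```ini
[DEFAULT]
test_command=${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./tempest/test_discover} $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
```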
-------------- next part -------------- An HTML attachment was scrubbed... URL: From afazekas at redhat.com Mon Aug 6 13:57:23 2018 From: afazekas at redhat.com (Attila Fazekas) Date: Mon, 6 Aug 2018 15:57:23 +0200 Subject: [openstack-dev] [tempest] Small doubt in Tempest setup In-Reply-To: References: Message-ID: I tried to be quick and got it wrong. ;-) Here are the working ways: On Mon, Aug 6, 2018 at 3:49 PM, Attila Fazekas wrote: > Please use ostestr or stestr instead of testr. > > $ git clone https://github.com/openstack/tempest > $ cd tempest/ > $ stestr init > $ stestr list > > $ git clone https://github.com/openstack/tempest > $ cd tempest/ > $ ostestr -l #old way, also worked, does two steps > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.wilde at rackspace.com Mon Aug 6 14:09:09 2018 From: david.wilde at rackspace.com (Dave Wilde) Date: Mon, 6 Aug 2018 14:09:09 +0000 Subject: [openstack-dev] [openstack-ansible] Proposing Jonathan Rosser as core reviewer In-Reply-To: References: <58ff-5b5ec980-31-29cd3a00@223498964>, Message-ID: +1 ________________________________ From: Markos Chandras Sent: Monday, August 6, 2018 4:35:55 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [openstack-ansible] Proposing Jonathan Rosser as core reviewer On 07/30/2018 11:16 AM, jean-philippe at evrard.me wrote: > Hello everyone, > > I'd like to propose Jonathan Rosser (jrosser) as core reviewer for OpenStack-Ansible. > The BBC team [1] has been very active recently across the board, but worked heavily in our ops repo, making sure the experience is complete for operators. > > I value Jonathan's opinion (I remember the storage backend conversations for lxc/systemd-nspawn!), and I'd like this positive trend to continue. On top of that, Jonathan has recently been reviewing quite a series of patches, and is involved in some of our important work: bringing Bionic support.
> > Best regards, > Jean-Philippe Evrard (evrardjp) +1 Jonathan will be a valuable addition to the project. -- markos SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Aug 6 14:47:21 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 06 Aug 2018 10:47:21 -0400 Subject: [openstack-dev] [tc] Technical Committee status update for 6 August Message-ID: <1533566676-sup-2859@lrrr.local> This is the weekly summary of work being done by the Technical Committee members. The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == Project updates: - extra ATCs for I18n team: https://review.openstack.org/586751 - move security team repos to the SIG: https://review.openstack.org/586896 == Leaderless teams after Stein PTL elections == We had 7 teams without any volunteers to serve as PTL for the Stein cycle. The TC is handling each on a case-by-case basis, working with the project teams and considering the broader context of activity over the last cycle. Omer Anson has volunteered to be PTL for the Dragonflow team. Omer is the current PTL, so I do not anticipate any issues with the TC confirming him to serve again. Sam Yaple has agreed to serve as the PTL for the Loci team. Sam is active in the project, so I do not anticipate any issues with his confirmation. Dirk Mueller has volunteered to serve as packaging-rpm team PTL.
I do not anticipate any issues with his confirmation, either. - https://review.openstack.org/#/c/588617/ Jeremy and Julia are going to work with the RefStack team and Interop working group to settle the ownership of the repositories currently owned by the RefStack team. I anticipate the RefStack team being dissolved when those new owners are found for those repositories. Dariusz Krol has volunteered to serve as Trove team PTL. The TC considered Dariusz's status carefully, because he is not currently a contributor to Trove, but the Trove team seems willing to accept Dariusz, so I anticipate the TC accepting him as PTL. - https://review.openstack.org/#/c/588510/ Paul and Chris are working to contact the Winstackers team about whether they want to find a volunteer to serve as PTL, or if the team should be dissolved. In addition to missing the PTL election, the Freezer and Searchlight teams missed enough deadlines during the cycle for the release management team to drop them from the Rocky release. We do not want to continue to list teams as official if the projects are not maintained and the teams are not active in the community. Given the apparent lack of participation in community processes, we are considering removing both teams from governance. We will not be rushing to make a decision, however, so if you are interested in either project please join the relevant thread with your input. - removing both from the rocky release: https://review.openstack.org/#/c/588605/ - removing freezer from governance: http://lists.openstack.org/pipermail/openstack-dev/2018-August/132873.html and https://review.openstack.org/#/c/588645/ - removing searchlight from governance: http://lists.openstack.org/pipermail/openstack-dev/2018-August/132874.html and https://review.openstack.org/#/c/588644/ == Ongoing Discussions == Ian has updated his proposals to change the project testing interface to support PDF generation and documentation translation. 
These need to be reviewed by folks familiar with the tools and processes. - https://review.openstack.org/#/c/572559/ - https://review.openstack.org/#/c/588110/ Sean has posted a new draft of the goal to create automated upgrade checker tools. - https://review.openstack.org/#/c/585491/ I have a patch to remove expired extra ATCs from several projects. "Extra ATC" status is time-limited, just as regular ATC status is. This patch is just housekeeping to remove some names that have expired. - https://review.openstack.org/#/c/588586/ The TC is planning 2 meetings during the week of the PTG. The proposed agendas are up for comment. - https://etherpad.openstack.org/p/tc-stein-ptg == TC member actions/focus/discussions for the coming week(s) == The PTG is approaching quickly. Please complete any remaining team health checks. Besides the items listed above as ongoing discussions, we have several other governance reviews open without sufficient votes to be approved. Please review. - https://review.openstack.org/#/q/project:openstack/governance+is:open == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: - 09:00 UTC on Tuesdays - 01:00 UTC on Wednesdays - 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. 
You will find channel logs with past conversations at http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From lbragstad at gmail.com Mon Aug 6 14:53:36 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 6 Aug 2018 09:53:36 -0500 Subject: [openstack-dev] Paste unmaintained In-Reply-To: References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> Message-ID: On 08/02/2018 09:36 AM, Chris Dent wrote: > On Thu, 2 Aug 2018, Stephen Finucane wrote: > >> Given that multiple projects are using this, we may want to think about >> reaching out to the author and seeing if there's anything we can do to >> at least keep this maintained going forward. I've talked to cdent about >> this already but if anyone else has ideas, please let me know. > > I've sent some exploratory email to Ian, the original author, to get > a sense of where things are and whether there's an option for us (or > if for some reason us wasn't okay, me) to adopt it. If email doesn't > land I'll try again with other media > > I agree with the idea of trying to move away from using it, as > mentioned elsewhere in this thread and in IRC, but it's not a simple > step as at least in some projects we are using paste files as > configuration that people are allowed (and do) change. Moving away > from that is the hard part, not figuring out how to load WSGI > middleware in a modern way. ++ Keystone has been battling this specific debate for several releases. The mutable configuration goal in addition to some much needed technical debt cleanup was the final nail. Long story short, moving off of paste eases the implementations for initiatives we've had in the pipe for a long time. 
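[Editor's note] As Chris points out above, loading WSGI middleware "in a modern way" is the easy part; the hard part is replacing the operator-editable paste.ini. For the easy part, a generic sketch of composing a pipeline directly in Python follows. The middleware factory and header names here are made up for illustration and are not keystone's or any project's actual code.

```python
def app(environ, start_response):
    # Innermost WSGI application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

def make_header_middleware(header, value):
    """A trivial middleware factory: wrap an app and add one response header."""
    def middleware(wrapped):
        def wsgi(environ, start_response):
            def sr(status, headers, exc_info=None):
                return start_response(status, headers + [(header, value)], exc_info)
            return wrapped(environ, sr)
        return wsgi
    return middleware

# What paste.ini expresses as "pipeline = request_id auth app" becomes
# an explicit chain of wrappers, applied innermost-first.
pipeline = [
    make_header_middleware("X-Request-Id", "req-123"),
    make_header_middleware("X-Auth-Status", "confirmed"),
]
wsgi_app = app
for factory in reversed(pipeline):
    wsgi_app = factory(wsgi_app)
```

The migration pain is elsewhere: deployments that hand-edited paste.ini to insert their own middleware need an equivalent, supported extension point once the pipeline is fixed in code.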
We started an effort to move to flask in Rocky. Morgan has been working through the migration since June, and it's been quite involved [0]. At one point he mentioned trying to write-up how he approached the migration for keystone. I understand that not every project structures their APIs the same way, but a high-level guide might be helpful for some if the long-term goal is to eventually move off of paste (e.g. how we approached it, things that tripped us up, how we prepared the code base for flask, et cetera). I'd be happy to help coordinate a session or retrospective at the PTG if other groups find that helpful. [0] https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+branch:master+topic:bug/1776504 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From pratapagoutham at gmail.com Mon Aug 6 14:55:09 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Mon, 6 Aug 2018 20:25:09 +0530 Subject: [openstack-dev] [tempest] Small doubt in Tempest setup In-Reply-To: References: Message-ID: Done thanks afazekas Thanks Goutham On Mon, 6 Aug 2018 at 7:27 PM, Attila Fazekas wrote: > I was tried to be quick and become wrong. ;-) > > Here are the working ways: > > On Mon, Aug 6, 2018 at 3:49 PM, Attila Fazekas > wrote: > >> Please use ostestr or stestr instead of testr. 
>> >> $ git clone https://github.com/openstack/tempest >> $ cd tempest/ >> $ stestr init >> $ stestr list >> >> $ git clone https://github.com/openstack/tempest >> $ cd tempest/ >> $ ostestr -l #old way, also worked, does to steps >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... URL: From therve at redhat.com Mon Aug 6 15:13:23 2018 From: therve at redhat.com (Thomas Herve) Date: Mon, 6 Aug 2018 17:13:23 +0200 Subject: [openstack-dev] Paste unmaintained In-Reply-To: <1533219691-sup-5515@lrrr.local> References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> <1533219691-sup-5515@lrrr.local> Message-ID: On Thu, Aug 2, 2018 at 4:27 PM, Doug Hellmann wrote: > Excerpts from Stephen Finucane's message of 2018-08-02 15:11:25 +0100: >> tl;dr: It seems Paste [1] may be entering unmaintained territory and we >> may need to do something about it. >> >> I was cleaning up some warning messages that nova was issuing this >> morning and noticed a few coming from Paste. I was going to draft a PR >> to fix this, but a quick browse through the Bitbucket project [2] >> suggests there has been little to no activity on that for well over a >> year. One particular open PR - "Python 3.7 support" - is particularly >> concerning, given the recent mailing list threads on the matter. >> >> Given that multiple projects are using this, we may want to think about >> reaching out to the author and seeing if there's anything we can do to >> at least keep this maintained going forward. I've talked to cdent about >> this already but if anyone else has ideas, please let me know. 
>> >> Stephen >> >> [1] https://pypi.org/project/Paste/ >> [2] https://bitbucket.org/ianb/paste/ >> [3] https://bitbucket.org/ianb/paste/pull-requests/41 >> > > The last I heard, a few years ago Ian moved away from Python to > JavaScript as part of his work at Mozilla. The support around > paste.deploy has been sporadic since then, and was one of the reasons > we discussed a goal of dropping paste.ini as a configuration file. > > Do we have a real sense of how many of the projects below, which > list Paste in requirements.txt, actually use it directly or rely > on it for configuration? > > Doug > > $ beagle search --ignore-case --file requirements.txt 'paste[><=! ]' > +----------------------------------------+--------------------------------------------------------+------+--------------------+ > | Repository | Filename | Line | Text | > +----------------------------------------+--------------------------------------------------------+------+--------------------+ > | airship-armada | requirements.txt | 8 | Paste>=2.0.3 | > | airship-deckhand | requirements.txt | 12 | Paste # MIT | > | anchor | requirements.txt | 9 | Paste # MIT | > | apmec | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | barbican | requirements.txt | 22 | Paste>=2.0.2 # MIT | > | cinder | requirements.txt | 37 | Paste>=2.0.2 # MIT | > | congress | requirements.txt | 11 | Paste>=2.0.2 # MIT | > | designate | requirements.txt | 25 | Paste>=2.0.2 # MIT | > | ec2-api | requirements.txt | 20 | Paste # MIT | > | freezer-api | requirements.txt | 8 | Paste>=2.0.2 # MIT | > | gce-api | requirements.txt | 16 | Paste>=2.0.2 # MIT | > | glance | requirements.txt | 31 | Paste>=2.0.2 # MIT | > | glare | requirements.txt | 29 | Paste>=2.0.2 # MIT | > | karbor | requirements.txt | 28 | Paste>=2.0.2 # MIT | > | kingbird | requirements.txt | 7 | Paste>=2.0.2 # MIT | > | manila | requirements.txt | 30 | Paste>=2.0.2 # MIT | > | meteos | requirements.txt | 29 | Paste # MIT | > | monasca-events-api | requirements.txt | 
6 | Paste # MIT | > | monasca-log-api | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | murano | requirements.txt | 28 | Paste>=2.0.2 # MIT | > | neutron | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | nova | requirements.txt | 19 | Paste>=2.0.2 # MIT | > | novajoin | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | oslo.service | requirements.txt | 17 | Paste>=2.0.2 # MIT | > | requirements | global-requirements.txt | 187 | Paste # MIT | > | searchlight | requirements.txt | 27 | Paste>=2.0.2 # MIT | > | tacker | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | tatu | requirements.txt | 18 | Paste # MIT | > | tricircle | requirements.txt | 7 | Paste>=2.0.2 # MIT | > | trio2o | requirements.txt | 7 | Paste # MIT | > | trove | requirements.txt | 11 | Paste>=2.0.2 # MIT | > | upstream-institute-virtual-environment | elements/upstream-training/static/tmp/requirements.txt | 147 | Paste==2.0.3 | If you look for PasteDeploy you'll find quite a few more. I know at least Heat and Swift don't depend on Paste but on PasteDeploy. -- Thomas From pratapagoutham at gmail.com Mon Aug 6 15:27:18 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Mon, 6 Aug 2018 20:57:18 +0530 Subject: [openstack-dev] [tempest] Small doubt in Tempest setup In-Reply-To: References: Message-ID: stestr worked thanks but im getting the same error for ostestr -l any idea on what to do ?? On Mon, Aug 6, 2018 at 7:27 PM, Attila Fazekas wrote: > I was tried to be quick and become wrong. ;-) > > Here are the working ways: > > On Mon, Aug 6, 2018 at 3:49 PM, Attila Fazekas > wrote: > >> Please use ostestr or stestr instead of testr. 
>> >> $ git clone https://github.com/openstack/tempest >> $ cd tempest/ >> $ stestr init >> $ stestr list >> >> $ git clone https://github.com/openstack/tempest >> $ cd tempest/ >> $ ostestr -l #old way, also worked, does to steps >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Mon Aug 6 15:28:11 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 6 Aug 2018 09:28:11 -0600 Subject: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO In-Reply-To: <9bd08898-b667-47c0-4b18-2e50de5ea406@redhat.com> References: <9bd08898-b667-47c0-4b18-2e50de5ea406@redhat.com> Message-ID: +1 On Mon, Aug 6, 2018 at 7:19 AM, Bogdan Dobrelya wrote: > +1 > > On 8/1/18 1:31 PM, Giulio Fidente wrote: >> >> Hi, >> >> I would like to propose Lukas Bezdicka core on TripleO. >> >> Lukas did a lot work in our tripleoclient, tripleo-common and >> tripleo-heat-templates repos to make FFU possible. >> >> FFU, which is meant to permit upgrades from Newton to Queens, requires >> in depth understanding of many TripleO components (for example Heat, >> Mistral and the TripleO client) but also of specific TripleO features >> which were added during the course of the three releases (for example >> config-download and upgrade tasks). I believe his FFU work to have been >> very challenging. >> >> Given his broad understanding, more recently Lukas started helping doing >> reviews in other areas. 
>> >> I am so sure he'll be a great addition to our group that I am not even >> looking for comments, just votes :D >> > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dougal at redhat.com Mon Aug 6 15:50:12 2018 From: dougal at redhat.com (Dougal Matthews) Date: Mon, 6 Aug 2018 16:50:12 +0100 Subject: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO In-Reply-To: References: <9bd08898-b667-47c0-4b18-2e50de5ea406@redhat.com> Message-ID: +1 On 6 August 2018 at 16:28, Alex Schultz wrote: > +1 > > On Mon, Aug 6, 2018 at 7:19 AM, Bogdan Dobrelya > wrote: > > +1 > > > > On 8/1/18 1:31 PM, Giulio Fidente wrote: > >> > >> Hi, > >> > >> I would like to propose Lukas Bezdicka core on TripleO. > >> > >> Lukas did a lot work in our tripleoclient, tripleo-common and > >> tripleo-heat-templates repos to make FFU possible. > >> > >> FFU, which is meant to permit upgrades from Newton to Queens, requires > >> in depth understanding of many TripleO components (for example Heat, > >> Mistral and the TripleO client) but also of specific TripleO features > >> which were added during the course of the three releases (for example > >> config-download and upgrade tasks). I believe his FFU work to have been > >> very challenging. > >> > >> Given his broad understanding, more recently Lukas started helping doing > >> reviews in other areas. 
> >> > >> I am so sure he'll be a great addition to our group that I am not even > >> looking for comments, just votes :D > >> > > > > > > -- > > Best regards, > > Bogdan Dobrelya, > > Irc #bogdando > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Mon Aug 6 16:34:42 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Mon, 6 Aug 2018 18:34:42 +0200 Subject: [openstack-dev] [releease][ptl] Missing and forced releases In-Reply-To: <20180803165205.GC29171@sm-workstation> References: <20180803165205.GC29171@sm-workstation> Message-ID: Hello, I have requested a release for python-magnumclient [0]. Per Doug Hellmann's comment in [0], I am requesting a FFE for python-magnumclient. Apologies for the inconvenience, Spyros [0] https://review.openstack.org/#/c/589138/ On Fri, 3 Aug 2018 at 18:52, Sean McGinnis wrote: > Today the release team reviewed the rocky deliverables and their releases > done > so far this cycle. There are a few areas of concern right now. > > Unreleased cycle-with-intermediary > ================================== > There is a much longer list than we would like to see of > cycle-with-intermediary deliverables that have not done any releases so > far in > Rocky. 
These deliverables should not wait until the very end of the cycle > to > release so that pending changes can be made available earlier and there > are no > last minute surprises. > > For owners of cycle-with-intermediary deliverables, please take a look at > what > you have merged that has not been released and consider doing a release > ASAP. > We are not far from the final deadline for these projects, but it would > still > be good to do a release ahead of that to be safe. > > Deliverables that miss the final deadline will be at risk of being dropped > from > the Rocky coordinated release. > > Unreleased client libraries > =========================== > The following client libraries have not done a release: > > python-cloudkittyclient > python-designateclient > python-karborclient > python-magnumclient > python-searchlightclient* > python-senlinclient > python-tricircleclient > > The deadline for client library releases was last Thursday, July 26. This > coming Monday the release team will force a release on HEAD for these > clients. > The release I proposed in [0] is the current HEAD of the master branch. > > * python-searchlight client is currently planned on being dropped due to > searchlight itself not having met the minimum of two milestone releases > during the rocky cycle. > > Missing milestone 3 > =================== > The following projects missed tagging a milestone 3 release: > > cinder > designate > freezer > mistral > searchlight > > Following policy, a milestone 3 tag will be forced on HEAD for these > deliverables on Monday. > > Freezer and searchlight missed previous milestone deadlines and will be dropped > from > the Rocky coordinated release. > > If there are any questions or concerns, please respond here or get ahold of > someone from the release management team in the #openstack-release channel.
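[Editor's note] For deliverable owners wondering what they "have merged that has not been released", plain git answers the question; nothing release-team specific is needed. This assumes you are inside a checkout that has at least one release tag:

```shell
# List commits merged since the most recent tag (i.e. the last release).
last=$(git describe --tags --abbrev=0)
git log --oneline "${last}..HEAD"
```

If the output is non-empty, there is unreleased work and an intermediary release is worth considering.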
> > -- > Sean McGinnis (smcginnis) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Mon Aug 6 16:52:38 2018 From: ed at leafe.com (Ed Leafe) Date: Mon, 6 Aug 2018 11:52:38 -0500 Subject: [openstack-dev] UC nomination period is now open! Message-ID: <277DC0C9-C34D-47D9-B14F-81E41F136909@leafe.com> As the subject says, the nomination period for the summer[0] User Committee elections is now open. Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election). Self-nomination is common; no third party nomination is required. Nominations are made by sending an email to the user-committee at lists.openstack.org mailing-list, with the subject: "UC candidacy" by August 17, 05:59 UTC. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. [0] Sorry, southern hemisphere people! -- Ed Leafe From whayutin at redhat.com Mon Aug 6 16:56:49 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 6 Aug 2018 10:56:49 -0600 Subject: [openstack-dev] [tripleo] 3rd party ovb jobs are down Message-ID: Greetings, There is currently an unplanned outage atm for the tripleo 3rd party OVB based jobs. We will contact the list when there are more details. Thank you!
-- Wes Hayutin Associate MANAGER Red Hat w hayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon Aug 6 17:11:37 2018 From: zigo at debian.org (Thomas Goirand) Date: Mon, 6 Aug 2018 19:11:37 +0200 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> Message-ID: <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> On 08/02/2018 10:43 AM, Andrey Kurilin wrote: > There's also some "raise StopIteration" issues in: > - ceilometer > - cinder > - designate > - glance > - glare > - heat > - karbor > - manila > - murano > - networking-ovn > - neutron-vpnaas > - nova > - rally > > > Can you provide any traceback or steps to reproduce the issue for Rally > project ? I'm not sure there's any. The only thing I know is that it has "raise StopIteration" stuff, but I'm not sure if they are part of generators, in which case they should simply be replaced by "return" if you want it to be py 3.7 compatible. I didn't have time to investigate these, but at least Glance was affected, and a patch was sent (as well as an async patch). None of them has been merged yet: https://review.openstack.org/#/c/586050/ https://review.openstack.org/#/c/586716/ That'd be ok if at least there were some reviews. It looks like nobody cares but Debian & Ubuntu people...
:( Cheers, Thomas Goirand (zigo) From aj at suse.com Mon Aug 6 17:27:37 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 6 Aug 2018 19:27:37 +0200 Subject: [openstack-dev] [tripleo] EOL process for newton branches In-Reply-To: <20180719045945.GB30070@thor.bakeyournoodle.com> References: <20180718234625.GA30070@thor.bakeyournoodle.com> <20180719045945.GB30070@thor.bakeyournoodle.com> Message-ID: <5565c598-7327-b7f3-773b-2cfb26c8326b@suse.com> Tony, On 2018-07-19 06:59, Tony Breeds wrote: > On Wed, Jul 18, 2018 at 08:08:16PM -0400, Emilien Macchi wrote: >> Option 2, EOL everything. >> Thanks a lot for your help on this one, Tony. > > No problem. > > I've created: > https://review.openstack.org/583856 > to tag final releases for tripleo deliverables and then mark them as > EOL. This one has merged now. > > Once that merges we can arrange for someone, with appropriate > permissions to run: > > # EOL repos belonging to tripleo > eol_branch.sh -- stable/newton newton-eol \ > openstack/instack openstack/instack-undercloud \ > openstack/os-apply-config openstack/os-collect-config \ > openstack/os-net-config openstack/os-refresh-config \ > openstack/puppet-tripleo openstack/python-tripleoclient \ > openstack/tripleo-common openstack/tripleo-heat-templates \ > openstack/tripleo-image-elements \ > openstack/tripleo-puppet-elements openstack/tripleo-ui \ > openstack/tripleo-validations Tony, will you coordinate with infra to run this yourself again - or let them run it for you, please? Note that we removed the script with retiring release-tools repo, I propose to readd with https://review.openstack.org/589236 and https://review.openstack.org/589237 and would love your review on these, please. I want to be sure that we import the right version... thanks, Andreas > > Yours Tony. 
> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From prometheanfire at gentoo.org Mon Aug 6 17:36:21 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 6 Aug 2018 12:36:21 -0500 Subject: [openstack-dev] [release][requirements][python-magnumclient] Magnumclient FFE In-Reply-To: References: <20180803165205.GC29171@sm-workstation> Message-ID: <20180806173621.e7zgkkewmkg6qwkj@gentoo.org> On 18-08-06 18:34:42, Spyros Trigazis wrote: > Hello, > > I have requested a release for python-magnumclient [0]. > Per Doug Hellmann's comment in [0], I am requesting a FFE for > python-magnumclient. > My question to you is if this needs to be a constraints-only thing or if there is some project that REQUIRES this new version to work (in which case that project needs to update its exclusions or minimum). -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kendall at openstack.org Mon Aug 6 17:36:26 2018 From: kendall at openstack.org (Kendall Waters) Date: Mon, 6 Aug 2018 12:36:26 -0500 Subject: [openstack-dev] Denver PTG Registration Price Increases on August 23 Message-ID: <00AB295F-05B2-4DE6-8D56-31BC924D9123@openstack.org> Hi everyone, The September 2018 PTG in Denver is right around the corner!
Friendly reminder that ticket prices will increase to USD $599 on August 22 at 11:59pm PT (August 23 at 6:59 UTC). So purchase your tickets before the price increases. Register here: https://denver2018ptg.eventbrite.com Our discounted hotel block is filling up and will sell out. The last date to book in the hotel block is August 20 so book now here: www.openstack.org/ptg If you have any questions, please email ptg at openstack.org . Cheers, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Mon Aug 6 17:43:10 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Mon, 06 Aug 2018 17:43:10 -0000 Subject: [openstack-dev] zaqar-ui 5.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for zaqar-ui for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/zaqar-ui/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/zaqar-ui/log/?h=stable/rocky Release notes for zaqar-ui can be found at: http://docs.openstack.org/releasenotes/zaqar-ui/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/zaqar-ui and tag it *rocky-rc-potential* to bring it to the zaqar-ui release crew's attention. From no-reply at openstack.org Mon Aug 6 17:43:47 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Mon, 06 Aug 2018 17:43:47 -0000 Subject: [openstack-dev] zaqar 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for zaqar for the end of the Rocky cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/zaqar/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/zaqar/log/?h=stable/rocky Release notes for zaqar can be found at: http://docs.openstack.org/releasenotes/zaqar/ From juliaashleykreger at gmail.com Mon Aug 6 17:53:03 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 6 Aug 2018 13:53:03 -0400 Subject: [openstack-dev] The state of the ironic universe - August 6th, 2018 Message-ID: News! ===== In the past month we released ironic 11.0 and now this week we expect to release ironic 11.1. With 11.1, ironic has: * The ``deploy_steps`` framework, to give better control over what constitutes a deployment. * BIOS settings management interfaces for the ``ilo`` and ``irmc`` hardware types. * Ramdisk deploy interface has merged. We await your bug reports! * Conductors can now be grouped into specific failure domains with specific nodes assigned to those failure domains. This allows an operator to configure a conductor in data center A to manage only the hardware in data center A, and not data center B. * Capability has been added to the API to allow driver interface values to be reset to the conductor default values when the driver name is being changed. * Support for partition images with ppc64le hardware has merged. Previously operators could only use whole disk images on that architecture. * Out-of-band RAID configuration is now available with the ``irmc`` hardware type. * Several bug fixes related to cleaning, PXE, and UEFI booting. In slightly depressing news, the ``xclarity`` hardware type has been deprecated.
This is because the third-party CI for the hardware type has not yet been established. The team working on the hardware type is continuing to work on getting CI up and running, and we expect to rescind the deprecation in the next release of ironic. Stein Planning -------------- Our Stein planning etherpad[0] has had some activity, and we have started to place procedural -2s on major changes that will impact the Rocky release. Expect these to be removed once we've released Ironic 11.1. Recent New Specifications ========================= * Support for SmartNICs[1] * Rework inspector boot management[2] Specifications starting to see activity ======================================= * Make IPA to ironic API communication optional[3] * Cleanhold state to enable cleaning steps collection [4] Recently merged specifications ============================== * Owner information storage[5] * Direct Deploy with local HTTP server[6] [0]: https://etherpad.openstack.org/p/ironic-stein-ptg [1]: https://review.openstack.org/582767 [2]: https://review.openstack.org/589230 [3]: https://review.openstack.org/#/c/212206 [4]: https://review.openstack.org/507910 [5]: https://review.openstack.org/560089 [6]: https://review.openstack.org/504039 From zbitter at redhat.com Mon Aug 6 18:58:37 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 6 Aug 2018 14:58:37 -0400 Subject: [openstack-dev] [OpenStack-dev][heat][keystone][security sig][all] SSL option for keystone session In-Reply-To: References: Message-ID: On 06/08/18 00:46, Rico Lin wrote: > Hi all > I would like to trigger a discussion on providing SSL content directly > to the KeyStone session. Since all teams use SSL, I believe this may > concern other projects as well. > > As we consider implementing a customized SSL option for Heat remote stack > [3] (and multicloud support [1]), I'm trying to figure out what is the best solution for this.
The current SSL option in the KeyStone session doesn't > allow us to provide a CERT/Key string directly; instead it only allows us to > provide a CERT/Key file path, which is actually a limitation of > Python versions earlier than 3.7 ([2]). As we are not going to easily get > rid of previous Python versions, we are trying to figure out the best > solution we can approach here. > > Some ways we can think about: using a pipe, or creating a file, > encrypting it, and sending the file path out to the KeyStone session. > > We would like to hear from all of you any advice or suggestions on how > we can approach this. Create a temporary directory using tempfile.mkdtemp() as shown here: https://security.openstack.org/guidelines/dg_using-temporary-files-securely.html#correct This probably only needs to happen once per process. (Also I would pass mode=0o600 when creating the file instead of using umask().) Assuming the data gets read only once, then rather than using a tempfile I'd suggest creating a named pipe using os.mkfifo(), opening it, and writing the data. Then pass the filename of the FIFO to the SSL lib. Close it again afterwards and remove the pipe.
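A rough sketch of that flow (illustrative only: the helper name is made up, the "SSL lib" is simulated by a plain read, and it is POSIX-only since it relies on os.mkfifo()):

```python
import os
import tempfile
import threading


def expose_secret_as_path(secret: bytes):
    """Expose an in-memory secret (e.g. a PEM key) as a named pipe.

    Returns (tmpdir, path, writer_thread). Pass `path` to an API that
    only accepts file names; the writer thread blocks until that API
    opens the FIFO, writes the data once, and then the caller cleans up.
    """
    tmpdir = tempfile.mkdtemp()        # private dir, mode 0o700 by default
    path = os.path.join(tmpdir, "key.pem")
    os.mkfifo(path, 0o600)             # FIFO readable only by this user

    def _writer():
        # open() for writing blocks until a reader opens the FIFO
        with open(path, "wb") as f:
            f.write(secret)

    t = threading.Thread(target=_writer)
    t.start()
    return tmpdir, path, t


pem = b"-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
tmpdir, path, t = expose_secret_as_path(pem)

# Stand-in for the SSL library consuming the file path:
with open(path, "rb") as f:
    data = f.read()

t.join()
os.unlink(path)
os.rmdir(tmpdir)
```

Because a FIFO has no backing storage, the key material never lands on disk, which is the main advantage over the plain-tempfile variant.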
> [1] https://etherpad.openstack.org/p/ptg-rocky-multi-cloud > [2] https://www.python.org/dev/peps/pep-0543/ > [3] https://review.openstack.org/#/c/480923/ >  -- > May The Force of OpenStack Be With You, > */Rico Lin > /*irc: ricolin > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Mon Aug 6 19:02:41 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 6 Aug 2018 19:02:41 +0000 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> Message-ID: <20180806190241.GA3368@devvm1> > > I didn't have time to investigate these, but at least Glance was > affected, and a patch was sent (as well as an async patch). None of them > has been merged yet: > > https://review.openstack.org/#/c/586050/ > https://review.openstack.org/#/c/586716/ > > That'd be ok if at least there was some reviews. It looks like nobody > cares but Debian & Ubuntu people... :( > Keep in mind that your priorities are different from everyone else's. There are large parts of the community still working on Python 3.5 support (our officially supported Python 3 version), as well as smaller teams overall working on things like critical bugs. Unless and until we declare Python 3.7 as our new target (which I don't think we are ready to do yet), these kinds of patches will be on a best-effort basis. Making sure that duplicate patches are not pushed up will also help increase the chances that they will eventually make it through as well.
Sean From sean.mcginnis at gmx.com Mon Aug 6 19:06:35 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 6 Aug 2018 19:06:35 +0000 Subject: [openstack-dev] Paste unmaintained In-Reply-To: References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> Message-ID: <20180806190634.GB3368@devvm1> On Mon, Aug 06, 2018 at 09:53:36AM -0500, Lance Bragstad wrote: > > > > > Morgan has been working through the migration since June, and it's been > quite involved [0]. At one point he mentioned trying to write-up how he > approached the migration for keystone. I understand that not every > project structures their APIs the same way, but a high-level guide might > be helpful for some if the long-term goal is to eventually move off of > paste (e.g. how we approached it, things that tripped us up, how we > prepared the code base for flask, et cetera). > > I'd be happy to help coordinate a session or retrospective at the PTG if > other groups find that helpful. > I would find this very useful. I'm not sure the Cinder team has the resources to tackle something like this immediately, but having a better understanding of what would be involved would really help scope the work. And if we have existing examples to follow and at least an outline of the steps to do the work, it might be a good low-hanging-fruit type of thing for someone to tackle if they are looking to get involved. 
From jimmy at openstack.org Mon Aug 6 19:07:24 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 06 Aug 2018 14:07:24 -0500 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> <5B5F295F.3090608@openstack.org> <1f5afd62cc3a9a8923586a404e707366@arcor.de> <16e69b47c8b71bf6f920ab8f3df61928@arcor.de> <5B63349F.4010204@openstack.org> <5B63565F.1010109@openstack.org> Message-ID: <5B689C6C.2010006@openstack.org> A heads up that the Translators are now listed at the bottom of the page as well, along with the rest of the paper contributors: https://www.openstack.org/edge-computing/cloud-edge-computing-beyond-the-data-center?lang=ja_JP Cheers! Jimmy Frank Kloeker wrote: > Hi Jimmy, > > thanks for announcement. Great stuff! It looks really great and it's > easy to navigate. I think a special thanks goes to Sebastian for > designing the pages. One small remark: have you tried text-align: > justify? I think it would be a little bit more readable, like a > science paper (German word is: Ordnung) > I put the projects again on the frontpage of the translation platform, > so we'll get more translations shortly. > > kind regards > > Frank > > Am 2018-08-02 21:07, schrieb Jimmy McArthur: >> The Edge and Containers translations are now live. As new >> translations become available, we will add them to the page. >> >> https://www.openstack.org/containers/ >> https://www.openstack.org/edge-computing/ >> >> Note that the Chinese translation has not been added to Zanata at this >> time, so I've left the PDF download up on that page. >> >> Thanks everyone and please let me know if you have questions or >> concerns! >> >> Cheers! 
>> Jimmy >> >> Jimmy McArthur wrote: >>> Frank, >>> >>> We expect to have these papers up this afternoon. I'll update this >>> thread when we do. >>> >>> Thanks! >>> Jimmy >>> >>> Frank Kloeker wrote: >>>> Hi Sebastian, >>>> >>>> okay, it's translated now. In Edge whitepaper is the problem with >>>> XML-Parsing of the term AT&T. Don't know how to escape this. Maybe >>>> you will see the warning during import too. >>>> >>>> kind regards >>>> >>>> Frank >>>> >>>> Am 2018-07-30 20:09, schrieb Sebastian Marcet: >>>>> Hi Frank, >>>>> i was double checking pot file and realized that original pot missed >>>>> some parts of the original paper (subsections of the paper) >>>>> apologizes >>>>> on that >>>>> i just re uploaded an updated pot file with missing subsections >>>>> >>>>> regards >>>>> >>>>> On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker >>>>> wrote: >>>>> >>>>>> Hi Jimmy, >>>>>> >>>>>> from the GUI I'll get this link: >>>>>> >>>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>>> >>>>>> [1] >>>>>> >>>>>> paper version are only in container whitepaper: >>>>>> >>>>>> >>>>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>>> >>>>>> [2] >>>>>> >>>>>> In general there is no group named papers >>>>>> >>>>>> kind regards >>>>>> >>>>>> Frank >>>>>> >>>>>> Am 2018-07-30 17:06, schrieb Jimmy McArthur: >>>>>> Frank, >>>>>> >>>>>> We're getting a 404 when looking for the pot file on the Zanata API: >>>>>> >>>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>>> >>>>>> [3] >>>>>> >>>>>> As a result, we can't pull the po files. Any idea what might be >>>>>> happening? >>>>>> >>>>>> Seeing the same thing with both papers... 
>>>>>> >>>>>> Thank you, >>>>>> Jimmy >>>>>> >>>>>> Frank Kloeker wrote: >>>>>> Hi Jimmy, >>>>>> >>>>>> Korean and German version are now done on the new format. Can you >>>>>> check publishing? >>>>>> >>>>>> thx >>>>>> >>>>>> Frank >>>>>> >>>>>> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >>>>>> Hi all - >>>>>> >>>>>> Follow up on the Edge paper specifically: >>>>>> >>>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>>> >>>>>> [4] This is now available. As I mentioned on IRC this morning, it >>>>>> should >>>>>> be VERY close to the PDF. Probably just needs a quick review. >>>>>> >>>>>> Let me know if I can assist with anything. >>>>>> >>>>>> Thank you to i18n team for all of your help!!! >>>>>> >>>>>> Cheers, >>>>>> Jimmy >>>>>> >>>>>> Jimmy McArthur wrote: >>>>>> Ian raises some great points :) I'll try to address below... >>>>>> >>>>>> Ian Y. Choi wrote: >>>>>> Hello, >>>>>> >>>>>> When I saw overall translation source strings on container >>>>>> whitepaper, I would infer that new edge computing whitepaper >>>>>> source strings would include HTML markup tags. >>>>>> One of the things I discussed with Ian and Frank in Vancouver is >>>>>> the expense of recreating PDFs with new translations. It's >>>>>> prohibitively expensive for the Foundation as it requires design >>>>>> resources which we just don't have. As a result, we created the >>>>>> Containers whitepaper in HTML, so that it could be easily updated >>>>>> w/o working with outside design contractors. I indicated that we >>>>>> would also be moving the Edge paper to HTML so that we could prevent >>>>>> that additional design resource cost. >>>>>> On the other hand, the source strings of edge computing whitepaper >>>>>> which I18n team previously translated do not include HTML markup >>>>>> tags, since the source strings are based on just text format. 
>>>>>> The version that Akihiro put together was based on the Edge PDF, >>>>>> which we unfortunately didn't have the resources to implement in the >>>>>> same format. >>>>>> >>>>>> I really appreciate Akihiro's work on RST-based support on >>>>>> publishing translated edge computing whitepapers, since >>>>>> translators do not have to re-translate all the strings. >>>>>> I would like to second this. It took a lot of initiative to work on >>>>>> the RST-based translation. At the moment, it's just not usable for >>>>>> the reasons mentioned above. >>>>>> On the other hand, it seems that I18n team needs to investigate on >>>>>> translating similar strings of HTML-based edge computing whitepaper >>>>>> source strings, which would discourage translators. >>>>>> Can you expand on this? I'm not entirely clear on why the HTML >>>>>> based translation is more difficult. >>>>>> >>>>>> That's my point of view on translating edge computing whitepaper. >>>>>> >>>>>> For translating container whitepaper, I want to further ask the >>>>>> followings since *I18n-based tools* >>>>>> would mean for translators that translators can test and publish >>>>>> translated whitepapers locally: >>>>>> >>>>>> - How to build translated container whitepaper using original >>>>>> Silverstripe-based repository? >>>>>> https://docs.openstack.org/i18n/latest/tools.html [5] describes >>>>>> well how to build translated artifacts for RST-based OpenStack >>>>>> repositories >>>>>> but I could not find the way how to build translated container >>>>>> whitepaper with translated resources on Zanata. >>>>>> This is a little tricky. It's possible to set up a local version >>>>>> of the OpenStack website >>>>>> >>>>> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>>> >>>>>> [6]). However, we have to manually ingest the po files as they are >>>>>> completed and then push them out to production, so that wouldn't do >>>>>> much to help with your local build. 
I'm open to suggestions on how >>>>>> we can make this process easier for the i18n team. >>>>>> >>>>>> Thank you, >>>>>> Jimmy >>>>>> >>>>>> With many thanks, >>>>>> >>>>>> /Ian >>>>>> >>>>>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>>>> Frank, >>>>>> >>>>>> I'm sorry to hear about the displeasure around the Edge paper. As >>>>>> mentioned in a prior thread, the RST format that Akihiro worked did >>>>>> not work with the Zanata process that we have been using with our >>>>>> CMS. Additionally, the existing EDGE page is a PDF, so we had to >>>>>> build a new template to work with the new HTML whitepaper layout we >>>>>> created for the Containers paper. I outlined this in the thread " >>>>>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>>>>> Whitepaper Translation" on 6/25/18 and mentioned we would be ready >>>>>> with the template around 7/13. >>>>>> >>>>>> We completed the work on the new whitepaper template and then put >>>>>> out the pot files on Zanata so we can get the po language files >>>>>> back. If this process is too cumbersome for the translation team, >>>>>> I'm open to discussion, but right now our entire translation process >>>>>> is based on the official OpenStack Docs translation process outlined >>>>>> by the i18n team: >>>>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html [7] >>>>>> >>>>>> Again, I realize Akihiro put in some work on his own proposing the >>>>>> new translation type. If the i18n team is moving to this format >>>>>> instead, we can work on redoing our process. >>>>>> >>>>>> Please let me know if I can clarify further. >>>>>> >>>>>> Thanks, >>>>>> Jimmy >>>>>> >>>>>> Frank Kloeker wrote: >>>>>> Hi Jimmy, >>>>>> >>>>>> permission was added for you and Sebastian. The Container Whitepaper >>>>>> is on the Zanata frontpage now. 
But we removed Edge Computing >>>>>> whitepaper last week because there is a kind of displeasure in the >>>>>> team since the results of translation are still not published beside >>>>>> Chinese version. It would be nice if we have a commitment from the >>>>>> Foundation that results are published in a specific timeframe. This >>>>>> includes your requirements until the translation should be >>>>>> available. >>>>>> >>>>>> thx Frank >>>>>> >>>>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>>>> Sorry, I should have also added... we additionally need permissions >>>>>> so >>>>>> that we can add the a new version of the pot file to this project: >>>>>> >>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>> >>>>>> [8] Thanks! >>>>>> Jimmy >>>>>> >>>>>> Jimmy McArthur wrote: >>>>>> Hi all - >>>>>> >>>>>> We have both of the current whitepapers up and available for >>>>>> translation. Can we promote these on the Zanata homepage? >>>>>> >>>>>> >>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>> >>>>>> [9] >>>>>> >>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>> >>>>>> [10] Thanks all! 
>>>>>> Jimmy >>>>>> >>>>>> >>>>> __________________________________________________________________________ >>>>> >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> [12] >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> [12] >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> [12] >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> [12] >>>>> >>>>> >>>>> >>>>> Links: >>>>> ------ >>>>> [1] >>>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>>> [2] >>>>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>>> [3] >>>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>>> [4] >>>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>>> [5] 
https://docs.openstack.org/i18n/latest/tools.html >>>>> [6] >>>>> https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>>> [7] https://docs.openstack.org/i18n/latest/en_GB/tools.html >>>>> [8] >>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>> [9] >>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>> [10] >>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>> [11] >>>>> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> [12] >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From strigazi at gmail.com Mon Aug 6 19:37:12 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Mon, 6 Aug 2018 21:37:12 +0200 Subject: [openstack-dev] [release][requirements][python-magnumclient] Magnumclient FFE In-Reply-To: <20180806173621.e7zgkkewmkg6qwkj@gentoo.org> References: <20180803165205.GC29171@sm-workstation> <20180806173621.e7zgkkewmkg6qwkj@gentoo.org> Message-ID: It is constraints only. There is no project that requires the new version. Spyros On Mon, 6 Aug 2018, 19:36 Matthew Thode, wrote: > On 18-08-06 18:34:42, Spyros Trigazis wrote: > > Hello, > > > > I have requested a release for python-magnumclient [0]. > > Per Doug Hellmann's comment in [0], I am requesting a FFE for > > python-magnumclient. > > > > My question to you is if this needs to be a constraints only thing or if > there is some project that REQUIRES this new version to work (in which > case that project needs to update it's exclusions or minumum). 
> > -- > Matthew Thode (prometheanfire) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Mon Aug 6 20:06:23 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 6 Aug 2018 15:06:23 -0500 Subject: [openstack-dev] [release][requirements][python-magnumclient] Magnumclient FFE In-Reply-To: References: <20180803165205.GC29171@sm-workstation> <20180806173621.e7zgkkewmkg6qwkj@gentoo.org> Message-ID: <20180806200623.vwsepip3mh2wpa6i@gentoo.org> On 18-08-06 21:37:12, Spyros Trigazis wrote: > It is constraints only. There is no project > that requires the new version. > > Spyros > > On Mon, 6 Aug 2018, 19:36 Matthew Thode, wrote: > > > On 18-08-06 18:34:42, Spyros Trigazis wrote: > > > Hello, > > > > > > I have requested a release for python-magnumclient [0]. > > > Per Doug Hellmann's comment in [0], I am requesting a FFE for > > > python-magnumclient. > > > > > > > My question to you is if this needs to be a constraints only thing or if > > there is some project that REQUIRES this new version to work (in which > > case that project needs to update it's exclusions or minumum). > > Has my ack then -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From miguel at mlavalle.com Mon Aug 6 20:44:00 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 6 Aug 2018 15:44:00 -0500 Subject: [openstack-dev] [neutron] Bug deputy report week July 30th - August 5th Message-ID: Dear Neutron Team, I was the bugs deputy for the week of July 30th - August 6th (inclusive, so bcafarel has to start on the 7th). Here's the summary of the bugs that were filed: High: https://bugs.launchpad.net/neutron/+bug/1785656 test_internal_dns.InternalDNSTest fails even though dns-integration extension isn't loaded. Proposed fixes: https://review.openstack.org/#/c/589247, https://review.openstack.org/#/c/589255 Medium: https://bugs.launchpad.net/neutron/+bug/1784837 Test tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_in_tenant_traffic fails in neutron-tempest-dvr-ha-multinode-full job https://bugs.launchpad.net/neutron/+bug/1784836 Functional tests from neutron.tests.functional.db.migrations fails randomly https://bugs.launchpad.net/neutron/+bug/1785582 Connectivity to instance after L3 router migration from Legacy to HA fails. Assigned to Slawek Low: https://bugs.launchpad.net/neutron/+bug/1785025 Install and configure controller node in Neutron https://bugs.launchpad.net/neutron/+bug/1784586 Networking guide doesn't clarify that subnets inherit the RBAC policies of their network.
Fix: https://review.openstack.org/#/c/588844 In discussion: https://bugs.launchpad.net/neutron/+bug/1784484 intermittent issue getting assigned MACs for SRIOV nics, causes nova timeout https://bugs.launchpad.net/neutron/+bug/1784259 Neutron RBAC not working for multiple extensions https://bugs.launchpad.net/neutron/+bug/1785615 DNS resolution through eventlet contact nameservers if there's an IPv4 or IPv6 entry present in hosts file RFEs: https://bugs.launchpad.net/neutron/+bug/1784879 Neutron doesn't update Designate with some use cases https://bugs.launchpad.net/neutron/+bug/1784590 neutron-dynamic-routing bgp agent should have options for MP-BGP https://bugs.launchpad.net/neutron/+bug/1785608 [RFE] neutron ovs agent support baremetal port using smart nic Invalid: https://bugs.launchpad.net/neutron/+bug/1784950 get_device_details RPC fails if host not specified https://bugs.launchpad.net/neutron/+bug/1785189 Floatingip and router bandwidth speed limit failure Incomplete: https://bugs.launchpad.net/neutron/+bug/1785349 policy.json does not contain rule for auto-allocated-topologies removal https://bugs.launchpad.net/neutron/+bug/1785539 Some notifications related to l3 flavor pass context Best regards -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zbitter at redhat.com Mon Aug 6 20:52:04 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 6 Aug 2018 16:52:04 -0400 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> Message-ID: <658519a3-e79e-cdac-6f55-7dad77df043b@redhat.com> On 06/08/18 13:11, Thomas Goirand wrote: > On 08/02/2018 10:43 AM, Andrey Kurilin wrote: >> There's also some "raise StopIteration" issues in: >> - ceilometer >> - cinder >> - designate >> - glance >> - glare >> - heat >> - karbor >> - manila >> - murano >> - networking-ovn >> - neutron-vpnaas >> - nova >> - rally >> >> >> Can you provide any traceback or steps to reproduce the issue for Rally >> project ? I assume Thomas is only trying to run the unit tests, since that's what he has to do to verify the package? > I'm not sure there's any. The only thing I know is that it has stop > StopIteration stuff, but I'm not sure if they are part of generators, in > which case they should simply be replaced by "return" if you want it to > be py 3.7 compatible. I was about to say nobody is doing 'raise StopIteration' where they mean 'return' until I saw that the Glance tests apparently were :D The main issue though is when StopIteration is raised by one thing that happens to be called from *another* generator. e.g. many of the Heat tests that are failing are because we supplied a too-short list of side-effects to a mock and calling next() on them raises StopIteration, but because the calls were happening from inside a generator the StopIterations previously just got swallowed. If no generator were involved the test would have failed with the StopIteration exception. (Note: this was a bug - either in the code or more likely the tests. 
The purpose of the change in py37 was to expose this kind of bug wherever it exists.) > I didn't have time to investigate these, but at least Glance was > affected, and a patch was sent (as well as an async patch). None of them > has been merged yet: > > https://review.openstack.org/#/c/586050/ > https://review.openstack.org/#/c/586716/ > > That'd be ok if at least there were some reviews. It looks like nobody > cares but Debian & Ubuntu people... :( > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Mon Aug 6 20:56:45 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 06 Aug 2018 16:56:45 -0400 Subject: [openstack-dev] [python3] champions, please review the updated process Message-ID: <1533588883-sup-4209@lrrr.local> I have updated the README.rst in the goal-tools repository with an updated process for preparing, proposing, and tracking the zuul migration patches. I need the other champions to look over the instructions and let me know if any parts are confusing or incomplete. Please do that as soon as you can, so we can be prepared to start generating patches after the final release for Rocky is done. http://git.openstack.org/cgit/openstack/goal-tools/tree/README.rst#n22 Doug From whayutin at redhat.com Mon Aug 6 21:55:05 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 6 Aug 2018 15:55:05 -0600 Subject: [openstack-dev] [tripleo] 3rd party ovb jobs are down In-Reply-To: References: Message-ID: On Mon, Aug 6, 2018 at 12:56 PM Wesley Hayutin wrote: > Greetings, > > There is currently an unplanned outage atm for the tripleo 3rd party OVB > based jobs. > We will contact the list when there are more details. > > Thank you! 
OK, I'm going to call an end to the current outage. We are closely monitoring the ovb 3rd party jobs. I called the outage when we hit [1]. Once I deleted the stack that moved the HA routers to back_up state, the networking came back online. Additionally, Kieran and I had to work through a number of instances that required admin access to remove. Once those resources were cleaned up, our CI tooling removed the rest of the stacks in delete_failed status. The stacks in delete_failed status were holding IP addresses that were causing new stacks to fail [2]. There are still active issues that could cause OVB jobs to fail. This connection issue [3] was originally thought to be DNS, however that turned out not to be the case. You may also see your job have a "node_failure" status; Paul has sent updates about this issue and is working on a patch [4] and integration into RDO Software Factory. The CI team is close to including all the console logs into the regular job logs, however if needed atm they can be viewed at [5]. We are also adding the bmc to the list of instances that we collect logs from. *To summarize*, the most recent outage was infra-related and the errors were swallowed up in the bmc console log, which at the time was not available to users. We continue to monitor the ovb jobs at http://cistatus.tripleo.org/ The legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master job is at a 53% pass rate; it needs to move to a > 85% pass rate to match other check jobs. Thanks all! 
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1570136 [2] http://paste.openstack.org/show/727444/ [3] https://bugs.launchpad.net/tripleo/+bug/1785342 [4] https://review.openstack.org/#/c/584488/ [5] http://38.145.34.41/console-logs/?C=M;O=D > > -- > > Wes Hayutin > > Associate MANAGER > > Red Hat > > > > whayutin at redhat.com T: +1919 4232509 IRC: weshay > > > View my calendar and check my availability for meetings HERE > > -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Aug 6 22:03:28 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 6 Aug 2018 17:03:28 -0500 Subject: [openstack-dev] [nova] StarlingX diff analysis Message-ID: In case you haven't heard, there was this StarlingX thing announced at the last summit. I have gone through the enormous nova diff in their repo and the results are in a spreadsheet [1]. Given the enormous spreadsheet (see a pattern?), I have further refined that into a set of high-level charts [2]. I suspect there might be some negative reactions to even doing this type of analysis lest it might seem like promoting throwing a huge pile of code over the wall and expecting the OpenStack (or more specifically the nova) community to pick it up. That's not my intention at all, nor do I expect nova maintainers to be responsible for upstreaming any of this. This is all educational to figure out what the major differences and overlaps are and what could be constructively upstreamed from the starlingx staging repo since it's not all NFV and Edge dragons in here, there are some legitimate bug fixes and good ideas. I'm sharing it because I want to feel like my time spent on this in the last week wasn't all for nothing. 
[1] https://docs.google.com/spreadsheets/d/1ugp1FVWMsu4x3KgrmPf7HGX8Mh1n80v-KVzweSDZunU/edit?usp=sharing [2] https://docs.google.com/presentation/d/1P-__JnxCFUbSVlEoPX26Jz6VaOyNg-jZbBsmmKA2f0c/edit?usp=sharing -- Thanks, Matt From sundar.nadathur at intel.com Mon Aug 6 22:16:31 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Mon, 6 Aug 2018 15:16:31 -0700 Subject: [openstack-dev] [Cyborg] Agent - Conductor update Message-ID: Hi,    The Cyborg agent in a compute node collects information about devices from the Cyborg drivers on that node. It then needs to push that information to the Cyborg conductor in the controller, which then needs to persist it in the Cyborg db and update Placement. Further, the agent needs to collect and update this information periodically (or possibly in response to notifications) to handle hot add/delete of devices, reprogramming (for FPGAs), health failure of devices, etc. In this morning's call, we discussed how to do this periodic update [1]. In particular, we talked about how to compute the difference between the previous device configuration in a compute node and the current one, whether the agent or the controller should do that diff, etc. Since there are many fields per device, and they are tree-structured, the complexity of doing the diff seemed large. On taking a closer look, however, the amount of computation needed to do the update is not huge. Say, for discussion's sake, that the controller has a snapshot of the entire device config for a specific compute node, i.e. an array of device structures NewConfig[]. It reads the current list of devices for that node from the db, CurrentConfig[]. Then the controller's logic is like this: * Determine the list of devices in NewConfig[] but not in CurrentConfig[] (this is a set difference in Python [2]): they are the newly added ones. For each newly added device, do a single transaction to add all the fields to the db together. 
* Determine the list of devices in CurrentConfig[] but not in NewConfig[]: they are the deleted devices. For each such device, do a single transaction to delete that entry. * For each modified device, compute what has changed, and update that alone. This is the per-field diff. Say each field in the device structure is a string of 100 characters, and it takes 1 nanosecond to add, delete or modify a character. So, each field takes 100 ns to update (add/delete/modify). Say 20 fields per device: so 2 us to add, delete or modify a device. Say 10 devices per compute node: so 20 us per node. 500 nodes will take 10 milliseconds. So, if each node sends a refresh every second, the controller will spend a very small fraction of that time in updating the db, even including transaction costs, set difference computation, etc. This back-of-the-envelope calculation shows that we need not try to optimize too early: the agent should send the entire device config over to the controller, and let it update the db per-device and per-field. [1] https://etherpad.openstack.org/p/cyborg-rocky-development [2] https://docs.python.org/2/library/sets.html Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Tue Aug 7 00:20:45 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 7 Aug 2018 10:20:45 +1000 Subject: [openstack-dev] [all][election][senlin][tacker] Last chance to vote Message-ID: <20180807002010.GB9540@thor.bakeyournoodle.com> Hello Senlin and Tacker contributors, Just a quick reminder that elections are closing soon; if you haven't already, you should use your right to vote and pick your favourite candidate! You have until Aug 07, 2018 23:45 UTC. Thanks for your time! Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
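To make the add/delete/modify set-difference logic from the Cyborg agent-conductor message above concrete, here is a minimal Python sketch. The function name, the dict-based device records, and the field names are illustrative assumptions only, not actual Cyborg code; real records would carry the tree-structured fields Sundar describes, but the diff shape is the same.

```python
def diff_device_configs(current, new):
    """Diff two per-node device snapshots.

    current/new: dicts mapping device_id -> dict of device fields
    (illustrative stand-ins for CurrentConfig[]/NewConfig[]).
    Returns (added_ids, deleted_ids, modified) where modified maps
    device_id -> only the fields that changed (the per-field diff).
    """
    current_ids = set(current)
    new_ids = set(new)

    added = new_ids - current_ids      # newly hot-added devices
    deleted = current_ids - new_ids    # devices that disappeared

    # For devices present in both snapshots, keep only the devices
    # whose fields changed, and for each of those only the changed
    # fields -- one small db transaction per entry in each bucket.
    modified = {
        dev_id: {k: v for k, v in new[dev_id].items()
                 if current[dev_id].get(k) != v}
        for dev_id in current_ids & new_ids
        if new[dev_id] != current[dev_id]
    }
    return added, deleted, modified


current = {
    "fpga-1": {"vendor": "acme", "state": "ok"},
    "gpu-1": {"vendor": "acme", "state": "ok"},
}
new = {
    "fpga-1": {"vendor": "acme", "state": "degraded"},  # health change
    "nic-1": {"vendor": "acme", "state": "ok"},          # hot-added
}

added, deleted, modified = diff_device_configs(current, new)
print(sorted(added))    # ['nic-1']
print(sorted(deleted))  # ['gpu-1']
print(modified)         # {'fpga-1': {'state': 'degraded'}}
```

With Python's built-in set type this is a few set operations plus one dict comparison per surviving device, which is consistent with the back-of-the-envelope estimate above that the controller-side cost is negligible.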
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From muroi.masahito at lab.ntt.co.jp Tue Aug 7 02:17:14 2018 From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI) Date: Tue, 7 Aug 2018 11:17:14 +0900 Subject: [openstack-dev] [Blazar] Stein etherpad Message-ID: Hi Blazar folks, I prepared the etherpad page for the Stein PTG. https://etherpad.openstack.org/p/blazar-ptg-stein best regards, Masahito From tony at bakeyournoodle.com Tue Aug 7 04:34:45 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 7 Aug 2018 14:34:45 +1000 Subject: [openstack-dev] [tripleo] EOL process for newton branches In-Reply-To: <5565c598-7327-b7f3-773b-2cfb26c8326b@suse.com> References: <20180718234625.GA30070@thor.bakeyournoodle.com> <20180719045945.GB30070@thor.bakeyournoodle.com> <5565c598-7327-b7f3-773b-2cfb26c8326b@suse.com> Message-ID: <20180807043445.GH9540@thor.bakeyournoodle.com> On Mon, Aug 06, 2018 at 07:27:37PM +0200, Andreas Jaeger wrote: > Tony, > > On 2018-07-19 06:59, Tony Breeds wrote: > > On Wed, Jul 18, 2018 at 08:08:16PM -0400, Emilien Macchi wrote: > > > Option 2, EOL everything. > > > Thanks a lot for your help on this one, Tony. > > > > No problem. > > > > I've created: > > https://review.openstack.org/583856 > > to tag final releases for tripleo deliverables and then mark them as > > EOL. > > This one has merged now. Thanks. 
> > > > Once that merges we can arrange for someone, with appropriate > > permissions to run: > > > > # EOL repos belonging to tripleo > > eol_branch.sh -- stable/newton newton-eol \ > > openstack/instack openstack/instack-undercloud \ > > openstack/os-apply-config openstack/os-collect-config \ > > openstack/os-net-config openstack/os-refresh-config \ > > openstack/puppet-tripleo openstack/python-tripleoclient \ > > openstack/tripleo-common openstack/tripleo-heat-templates \ > > openstack/tripleo-image-elements \ > > openstack/tripleo-puppet-elements openstack/tripleo-ui \ > > openstack/tripleo-validations > > Tony, will you coordinate with infra to run this yourself again - or let > them run it for you, please? I'm happy with either option. If it hasn't been run when I get online tomorrow I'll ask on #openstack-infra and I'll do it myself. > Note that we removed the script with retiring release-tools repo, I propose > to readd with https://review.openstack.org/589236 and > https://review.openstack.org/589237 and would love your review on these, > please. I want to be sure that we import the right version... Thanks for doing that! LGTM +1 :) Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Tue Aug 7 05:08:10 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 7 Aug 2018 15:08:10 +1000 Subject: [openstack-dev] [tripleo] EOL process for newton branches In-Reply-To: <20180807043445.GH9540@thor.bakeyournoodle.com> References: <20180718234625.GA30070@thor.bakeyournoodle.com> <20180719045945.GB30070@thor.bakeyournoodle.com> <5565c598-7327-b7f3-773b-2cfb26c8326b@suse.com> <20180807043445.GH9540@thor.bakeyournoodle.com> Message-ID: <20180807050810.GI9540@thor.bakeyournoodle.com> On Tue, Aug 07, 2018 at 02:34:45PM +1000, Tony Breeds wrote: > On Mon, Aug 06, 2018 at 07:27:37PM +0200, Andreas Jaeger wrote: > > Tony, > > > > On 2018-07-19 06:59, Tony Breeds wrote: > > > On Wed, Jul 18, 2018 at 08:08:16PM -0400, Emilien Macchi wrote: > > > > Option 2, EOL everything. > > > > Thanks a lot for your help on this one, Tony. > > > > > > No problem. > > > > > > I've created: > > > https://review.openstack.org/583856 > > > to tag final releases for tripleo deliverables and then mark them as > > > EOL. > > > > This one has merged now. > > Thanks. > > > > > > > Once that merges we can arrange for someone, with appropriate > > > permissions to run: > > > > > > # EOL repos belonging to tripleo > > > eol_branch.sh -- stable/newton newton-eol \ > > > openstack/instack openstack/instack-undercloud \ > > > openstack/os-apply-config openstack/os-collect-config \ > > > openstack/os-net-config openstack/os-refresh-config \ > > > openstack/puppet-tripleo openstack/python-tripleoclient \ > > > openstack/tripleo-common openstack/tripleo-heat-templates \ > > > openstack/tripleo-image-elements \ > > > openstack/tripleo-puppet-elements openstack/tripleo-ui \ > > > openstack/tripleo-validations > > > > Tony, will you coordinate with infra to run this yourself again - or let > > them run it for you, please? > > I'm happy with either option. 
If it hasn't been run when I get online > tomorrow I'll ask on #openstack-infra and I'll do it myself. Okay Ian gave me permission to do this. Those repos have been tagged newton-eol and had the branches deleted. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 484 bytes Desc: not available URL: From gael.therond at gmail.com Tue Aug 7 06:10:53 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 7 Aug 2018 08:10:53 +0200 Subject: [openstack-dev] [Openstack-operators] [nova] StarlingX diff analysis In-Reply-To: References: Message-ID: Hi Matt, everyone, I just read your analysis and would like to thank you for such work. I really think there are numerous features included/used on this Nova rework that would be highly beneficial for Nova and users of it. I hope people will fairly appreciate your work. I didn’t have time to check StarlingX code quality; how did it feel while you were doing your analysis? Thanks a lot for sharing this. I’ll have a closer look at it this afternoon as my company may be interested in some features. Kind regards, G. Le mar. 7 août 2018 à 00:03, Matt Riedemann a écrit : > In case you haven't heard, there was this StarlingX thing announced at > the last summit. I have gone through the enormous nova diff in their > repo and the results are in a spreadsheet [1]. Given the enormous > spreadsheet (see a pattern?), I have further refined that into a set of > high-level charts [2]. > > I suspect there might be some negative reactions to even doing this type > of analysis lest it might seem like promoting throwing a huge pile of > code over the wall and expecting the OpenStack (or more specifically the > nova) community to pick it up. That's not my intention at all, nor do I > expect nova maintainers to be responsible for upstreaming any of this. 
> > This is all educational to figure out what the major differences and > overlaps are and what could be constructively upstreamed from the > starlingx staging repo since it's not all NFV and Edge dragons in here, > there are some legitimate bug fixes and good ideas. I'm sharing it > because I want to feel like my time spent on this in the last week > wasn't all for nothing. > > [1] > > https://docs.google.com/spreadsheets/d/1ugp1FVWMsu4x3KgrmPf7HGX8Mh1n80v-KVzweSDZunU/edit?usp=sharing > [2] > > https://docs.google.com/presentation/d/1P-__JnxCFUbSVlEoPX26Jz6VaOyNg-jZbBsmmKA2f0c/edit?usp=sharing > > -- > > Thanks, > > Matt > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Tue Aug 7 06:38:45 2018 From: aj at suse.com (Andreas Jaeger) Date: Tue, 7 Aug 2018 08:38:45 +0200 Subject: [openstack-dev] [tripleo] EOL process for newton branches In-Reply-To: <20180807050810.GI9540@thor.bakeyournoodle.com> References: <20180718234625.GA30070@thor.bakeyournoodle.com> <20180719045945.GB30070@thor.bakeyournoodle.com> <5565c598-7327-b7f3-773b-2cfb26c8326b@suse.com> <20180807043445.GH9540@thor.bakeyournoodle.com> <20180807050810.GI9540@thor.bakeyournoodle.com> Message-ID: <441e4cb8-10ee-2797-ee5c-fd6d212d3bc5@suse.com> On 2018-08-07 07:08, Tony Breeds wrote: > Okay Ian gave me permission to do this. Those repos have been tagged > newton-eol and had the branches deleted. Thanks, Tony! Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From eumel at arcor.de Tue Aug 7 07:49:17 2018 From: eumel at arcor.de (Frank Kloeker) Date: Tue, 07 Aug 2018 09:49:17 +0200 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <5B689C6C.2010006@openstack.org> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> <5B5F295F.3090608@openstack.org> <1f5afd62cc3a9a8923586a404e707366@arcor.de> <16e69b47c8b71bf6f920ab8f3df61928@arcor.de> <5B63349F.4010204@openstack.org> <5B63565F.1010109@openstack.org> <5B689C6C.2010006@openstack.org> Message-ID: <4ef95237972fb567d9eaebba82bf9066@arcor.de> Many thanks, Jimmy! Lastly, let me draw your attention to Stackalytics: translation metrics for the whitepapers are not counted there. Maybe you have some advice for https://review.openstack.org/#/c/588965/ kind regards Frank Am 2018-08-06 21:07, schrieb Jimmy McArthur: > A heads up that the Translators are now listed at the bottom of the > page as well, along with the rest of the paper contributors: > > https://www.openstack.org/edge-computing/cloud-edge-computing-beyond-the-data-center?lang=ja_JP > > Cheers! > Jimmy > > Frank Kloeker wrote: >> Hi Jimmy, >> >> thanks for announcement. Great stuff! It looks really great and it's >> easy to navigate. I think a special thanks goes to Sebastian for >> designing the pages. One small remark: have you tried text-align: >> justify? I think it would be a little bit more readable, like a >> science paper (German word is: Ordnung) >> I put the projects again on the frontpage of the translation platform, >> so we'll get more translations shortly. 
>> >> kind regards >> >> Frank >> >> Am 2018-08-02 21:07, schrieb Jimmy McArthur: >>> The Edge and Containers translations are now live. As new >>> translations become available, we will add them to the page. >>> >>> https://www.openstack.org/containers/ >>> https://www.openstack.org/edge-computing/ >>> >>> Note that the Chinese translation has not been added to Zanata at >>> this >>> time, so I've left the PDF download up on that page. >>> >>> Thanks everyone and please let me know if you have questions or >>> concerns! >>> >>> Cheers! >>> Jimmy >>> >>> Jimmy McArthur wrote: >>>> Frank, >>>> >>>> We expect to have these papers up this afternoon. I'll update this >>>> thread when we do. >>>> >>>> Thanks! >>>> Jimmy >>>> >>>> Frank Kloeker wrote: >>>>> Hi Sebastian, >>>>> >>>>> okay, it's translated now. In Edge whitepaper is the problem with >>>>> XML-Parsing of the term AT&T. Don't know how to escape this. Maybe >>>>> you will see the warning during import too. >>>>> >>>>> kind regards >>>>> >>>>> Frank >>>>> >>>>> Am 2018-07-30 20:09, schrieb Sebastian Marcet: >>>>>> Hi Frank, >>>>>> i was double checking pot file and realized that original pot >>>>>> missed >>>>>> some parts of the original paper (subsections of the paper) >>>>>> apologizes >>>>>> on that >>>>>> i just re uploaded an updated pot file with missing subsections >>>>>> >>>>>> regards >>>>>> >>>>>> On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker >>>>>> wrote: >>>>>> >>>>>>> Hi Jimmy, >>>>>>> >>>>>>> from the GUI I'll get this link: >>>>>>> >>>>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>>>>> [1] >>>>>>> >>>>>>> paper version are only in container whitepaper: >>>>>>> >>>>>>> >>>>>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>>>>> [2] >>>>>>> >>>>>>> In general there is no group named papers >>>>>>> 
>>>>>>> kind regards >>>>>>> >>>>>>> Frank >>>>>>> >>>>>>> Am 2018-07-30 17:06, schrieb Jimmy McArthur: >>>>>>> Frank, >>>>>>> >>>>>>> We're getting a 404 when looking for the pot file on the Zanata >>>>>>> API: >>>>>>> >>>>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>>>>> [3] >>>>>>> >>>>>>> As a result, we can't pull the po files. Any idea what might be >>>>>>> happening? >>>>>>> >>>>>>> Seeing the same thing with both papers... >>>>>>> >>>>>>> Thank you, >>>>>>> Jimmy >>>>>>> >>>>>>> Frank Kloeker wrote: >>>>>>> Hi Jimmy, >>>>>>> >>>>>>> Korean and German version are now done on the new format. Can you >>>>>>> check publishing? >>>>>>> >>>>>>> thx >>>>>>> >>>>>>> Frank >>>>>>> >>>>>>> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >>>>>>> Hi all - >>>>>>> >>>>>>> Follow up on the Edge paper specifically: >>>>>>> >>>>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>>>>> [4] This is now available. As I mentioned on IRC this morning, it >>>>>>> should >>>>>>> be VERY close to the PDF. Probably just needs a quick review. >>>>>>> >>>>>>> Let me know if I can assist with anything. >>>>>>> >>>>>>> Thank you to i18n team for all of your help!!! >>>>>>> >>>>>>> Cheers, >>>>>>> Jimmy >>>>>>> >>>>>>> Jimmy McArthur wrote: >>>>>>> Ian raises some great points :) I'll try to address below... >>>>>>> >>>>>>> Ian Y. Choi wrote: >>>>>>> Hello, >>>>>>> >>>>>>> When I saw overall translation source strings on container >>>>>>> whitepaper, I would infer that new edge computing whitepaper >>>>>>> source strings would include HTML markup tags. >>>>>>> One of the things I discussed with Ian and Frank in Vancouver is >>>>>>> the expense of recreating PDFs with new translations. It's >>>>>>> prohibitively expensive for the Foundation as it requires design >>>>>>> resources which we just don't have. 
As a result, we created the >>>>>>> Containers whitepaper in HTML, so that it could be easily updated >>>>>>> w/o working with outside design contractors. I indicated that we >>>>>>> would also be moving the Edge paper to HTML so that we could >>>>>>> prevent >>>>>>> that additional design resource cost. >>>>>>> On the other hand, the source strings of edge computing >>>>>>> whitepaper >>>>>>> which I18n team previously translated do not include HTML markup >>>>>>> tags, since the source strings are based on just text format. >>>>>>> The version that Akihiro put together was based on the Edge PDF, >>>>>>> which we unfortunately didn't have the resources to implement in >>>>>>> the >>>>>>> same format. >>>>>>> >>>>>>> I really appreciate Akihiro's work on RST-based support on >>>>>>> publishing translated edge computing whitepapers, since >>>>>>> translators do not have to re-translate all the strings. >>>>>>> I would like to second this. It took a lot of initiative to work >>>>>>> on >>>>>>> the RST-based translation. At the moment, it's just not usable >>>>>>> for >>>>>>> the reasons mentioned above. >>>>>>> On the other hand, it seems that I18n team needs to investigate >>>>>>> on >>>>>>> translating similar strings of HTML-based edge computing >>>>>>> whitepaper >>>>>>> source strings, which would discourage translators. >>>>>>> Can you expand on this? I'm not entirely clear on why the HTML >>>>>>> based translation is more difficult. >>>>>>> >>>>>>> That's my point of view on translating edge computing whitepaper. >>>>>>> >>>>>>> For translating container whitepaper, I want to further ask the >>>>>>> followings since *I18n-based tools* >>>>>>> would mean for translators that translators can test and publish >>>>>>> translated whitepapers locally: >>>>>>> >>>>>>> - How to build translated container whitepaper using original >>>>>>> Silverstripe-based repository? 
>>>>>>> https://docs.openstack.org/i18n/latest/tools.html [5] describes >>>>>>> well how to build translated artifacts for RST-based OpenStack >>>>>>> repositories >>>>>>> but I could not find the way how to build translated container >>>>>>> whitepaper with translated resources on Zanata. >>>>>>> This is a little tricky. It's possible to set up a local version >>>>>>> of the OpenStack website >>>>>>> >>>>>> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>>>>> [6]). However, we have to manually ingest the po files as they >>>>>>> are >>>>>>> completed and then push them out to production, so that wouldn't >>>>>>> do >>>>>>> much to help with your local build. I'm open to suggestions on >>>>>>> how >>>>>>> we can make this process easier for the i18n team. >>>>>>> >>>>>>> Thank you, >>>>>>> Jimmy >>>>>>> >>>>>>> With many thanks, >>>>>>> >>>>>>> /Ian >>>>>>> >>>>>>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>>>>> Frank, >>>>>>> >>>>>>> I'm sorry to hear about the displeasure around the Edge paper. >>>>>>> As >>>>>>> mentioned in a prior thread, the RST format that Akihiro worked >>>>>>> did >>>>>>> not work with the Zanata process that we have been using with >>>>>>> our >>>>>>> CMS. Additionally, the existing EDGE page is a PDF, so we had to >>>>>>> build a new template to work with the new HTML whitepaper layout >>>>>>> we >>>>>>> created for the Containers paper. I outlined this in the thread " >>>>>>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>>>>>> Whitepaper Translation" on 6/25/18 and mentioned we would be >>>>>>> ready >>>>>>> with the template around 7/13. >>>>>>> >>>>>>> We completed the work on the new whitepaper template and then put >>>>>>> out the pot files on Zanata so we can get the po language files >>>>>>> back. 
If this process is too cumbersome for the translation team, >>>>>>> I'm open to discussion, but right now our entire translation >>>>>>> process >>>>>>> is based on the official OpenStack Docs translation process >>>>>>> outlined >>>>>>> by the i18n team: >>>>>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html [7] >>>>>>> >>>>>>> Again, I realize Akihiro put in some work on his own proposing >>>>>>> the >>>>>>> new translation type. If the i18n team is moving to this format >>>>>>> instead, we can work on redoing our process. >>>>>>> >>>>>>> Please let me know if I can clarify further. >>>>>>> >>>>>>> Thanks, >>>>>>> Jimmy >>>>>>> >>>>>>> Frank Kloeker wrote: >>>>>>> Hi Jimmy, >>>>>>> >>>>>>> permission was added for you and Sebastian. The Container >>>>>>> Whitepaper >>>>>>> is on the Zanata frontpage now. But we removed Edge Computing >>>>>>> whitepaper last week because there is a kind of displeasure in >>>>>>> the >>>>>>> team since the results of translation are still not published >>>>>>> beside >>>>>>> Chinese version. It would be nice if we have a commitment from >>>>>>> the >>>>>>> Foundation that results are published in a specific timeframe. >>>>>>> This >>>>>>> includes your requirements until the translation should be >>>>>>> available. >>>>>>> >>>>>>> thx Frank >>>>>>> >>>>>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>>>>> Sorry, I should have also added... we additionally need >>>>>>> permissions >>>>>>> so >>>>>>> that we can add the a new version of the pot file to this >>>>>>> project: >>>>>>> >>>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>>>> [8] Thanks! >>>>>>> Jimmy >>>>>>> >>>>>>> Jimmy McArthur wrote: >>>>>>> Hi all - >>>>>>> >>>>>>> We have both of the current whitepapers up and available for >>>>>>> translation. Can we promote these on the Zanata homepage? 
>>>>>>> >>>>>>> >>>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>>>> [9] >>>>>>> >>>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>>>> [10] Thanks all! >>>>>>> Jimmy >>>>>>> >>>>>>> >>>>>> __________________________________________________________________________ >>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>> Unsubscribe: >>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>>>> [11] >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>> [12] >>>>>> >>>>>> __________________________________________________________________________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> [12] >>>>>> >>>>>> __________________________________________________________________________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> [12] >>>>>> >>>>>> __________________________________________________________________________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> [12] >>>>>> >>>>>> >>>>>> >>>>>> Links: >>>>>> ------ >>>>>> [1] >>>>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>>>> [2] >>>>>> 
https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>>>> [3] >>>>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>>>> [4] >>>>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>>>> [5] https://docs.openstack.org/i18n/latest/tools.html >>>>>> [6] >>>>>> https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>>>> [7] https://docs.openstack.org/i18n/latest/en_GB/tools.html >>>>>> [8] >>>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>>> [9] >>>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>>> [10] >>>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>>> [11] >>>>>> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>>> [12] >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> From wangpeihuixyz at 126.com Tue Aug 7 08:15:26 2018 From: wangpeihuixyz at 126.com (Frank Wang) Date: Tue, 7 Aug 2018 16:15:26 +0800 (CST) Subject: [openstack-dev] [neutron] Does neutron support QinQ(vlan transparent) ? Message-ID: <6e1ff2b5.671a.16513747f10.Coremail.wangpeihuixyz@126.com> Hello folks, I noted that the API already has the vlan_transparent attribute in the network, Do neutron-agents(linux-bridge, openvswitch) support QinQ? I didn't find any reference materials that could guide me on how to use or configure it. 
Thanks for your time reading this. Any comments would be appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Aug 7 11:28:03 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 7 Aug 2018 12:28:03 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-32 Message-ID: HTML: https://anticdent.org/tc-report-18-32.html The TC discussions of interest in the past week have been related to the recent [PTL elections](https://governance.openstack.org/election/) and planning for the [forthcoming PTG](https://www.openstack.org/ptg). ## PTL Election Gaps A few official projects had no nominee for the PTL position. An [etherpad](https://etherpad.openstack.org/p/stein-leaderless) was created to track this, and most of the situations have been resolved. Pointers to some of the discussion: * [Near the end of the nomination period](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-31.log.html#t2018-07-31T17:39:28). * [Discussion about Trove](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-02.log.html#t2018-08-02T13:59:11). There's quite a bit here about how we evaluate the health of a project and the value of volunteers, and for how long we are willing to extend grace periods for projects which have a history of imperfect health. * [What to do about RefStack](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-02.log.html#t2018-08-02T16:01:12) which evolved to become a discussion about the role of the QA team. * [Freezer and Searchlight](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-07.log.html#t2018-08-07T09:06:37).
Where we (the TC) seem to have some minor disagreement is the extent to which we should be extending a lifeline to official projects which are (for whatever reason) struggling to keep up with responsibilities or we should be using the power to remove official status as a way to highlight need. ## PTG Planning The PTG is a month away, so the TC is doing a bit of planning to prepare. There will be two different days during which the TC will meet: Sunday afternoon before the PTG, and all day Friday. Most planning is happening on [this etherpad](https://etherpad.openstack.org/p/tc-stein-ptg). There is also a specific etherpad about [the relationship between the TC and the Foundation and Foundation corporate members](https://etherpad.openstack.org/p/tc-board-foundation). And one for [post-lunch topics](https://etherpad.openstack.org/p/PTG4-postlunch). IRC links: * [Discussion about limiting the agenda](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-03.log.html#t2018-08-03T12:31:38). If there's any disagreement in this planning process, it is over whether we should focus our time on topics we have some chance of resolving, or at least making some concrete progress on, or we should spend the time having open-ended discussions. Ideally there would be time for both, as the latter is required to develop the shared language that is needed to take real action. But, as is rampant in the community, we are constrained by time and other responsibilities. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From work at seanmooney.info Tue Aug 7 11:32:44 2018 From: work at seanmooney.info (Sean Mooney) Date: Tue, 7 Aug 2018 12:32:44 +0100 Subject: [openstack-dev] [neutron] Does neutron support QinQ(vlan transparent) ?
In-Reply-To: <6e1ff2b5.671a.16513747f10.Coremail.wangpeihuixyz@126.com> References: <6e1ff2b5.671a.16513747f10.Coremail.wangpeihuixyz@126.com> Message-ID: TL;DR: it won't work with the OVS agent, but "should" work with Linux bridge. See the full message below for details. regards sean. The Linux bridge agent supports the vlan_transparent option only when creating networks with an L3 segmentation type, e.g. vxlan, gre... OVS with the Neutron L2 agent does not support vlan_transparent networks because of how that agent uses VLANs for tenant isolation on the br-int. It is possible to achieve VLAN transparency with OVS using an SDN controller such as ODL or OVN, but that was not what you asked in your question, so I won't expand on it further. If you deploy OpenStack with Linux bridge networking and then create a tenant network of type vxlan with vlan_transparent set to true, and your tenants generate QinQ traffic with an MTU reduced so that it fits within the VXLAN tunnel unfragmented, then yes, it should be possible. However, you may need to disable port_security/security groups on the port, as I'm not sure the iptables firewall driver will correctly handle this case. An alternative to disabling security groups would be to add an explicit rule that matches on the Ethernet type and allows QinQ traffic on ingress and egress from the VM. As far as I am aware this is not tested in the gate, so while it should work, the lack of documentation and test coverage means you will likely be one of the first to test it if you choose to do so, and it may fail for many reasons. On 7 August 2018 at 09:15, Frank Wang wrote: > Hello folks, > > I noted that the API already has the vlan_transparent attribute in the > network, Do neutron-agents(linux-bridge, openvswitch) support QinQ? I > didn't find any reference materials that could guide me on how to use or > configure it. > > Thank for your time reading this, Any comments would be appreciated.
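For the Linux bridge path described above, the request to Neutron is small enough to show inline. The sketch below builds the REST body for creating such a network; the network name is a made-up example, and the `vlan_transparent` attribute is only accepted when the cloud enables the vlan-transparent API extension:

```python
def vlan_transparent_network_body(name="qinq-net"):
    """Build a POST /v2.0/networks body for a VLAN-transparent VXLAN network.

    ``vlan_transparent`` requires the Neutron ``vlan-transparent`` API
    extension and, per the discussion above, is only honored by backends
    that support it (e.g. the Linux bridge agent with an L3 segmentation
    type). The default network name here is purely illustrative.
    """
    return {
        "network": {
            "name": name,
            "provider:network_type": "vxlan",
            "vlan_transparent": True,
        }
    }
```

Sending this body to `POST /v2.0/networks` (with provider attributes admitted only for admin users) is equivalent to the CLI path; remember the reduced-MTU caveat above for the QinQ traffic itself.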
> > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From balazs.gibizer at ericsson.com Tue Aug 7 11:48:36 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 07 Aug 2018 13:48:36 +0200 Subject: [openstack-dev] [nova]Notification update week 32 Message-ID: <1533642516.26377.2@smtp.office365.com> Hi, Here is the latest notification subteam update. Bugs ---- No RC potential notification bug is tracked. No new bug since last week. Weekly meeting -------------- No meeting is planned for this week. Cheers, gibi From thomas at goirand.fr Tue Aug 7 11:52:48 2018 From: thomas at goirand.fr (Thomas Goirand) Date: Tue, 7 Aug 2018 13:52:48 +0200 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <20180806190241.GA3368@devvm1> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <20180806190241.GA3368@devvm1> Message-ID: On 08/06/2018 09:02 PM, Sean McGinnis wrote: >> >> I didn't have time to investigate these, but at least Glance was >> affected, and a patch was sent (as well as an async patch). None of them >> has been merged yet: >> >> https://review.openstack.org/#/c/586050/ >> https://review.openstack.org/#/c/586716/ >> >> That'd be ok if at least there was some reviews. It looks like nobody >> cares but Debian & Ubuntu people... :( >> > > Keep in mind that your priorities are different than everyone elses. There are > large parts of the community still working on Python 3.5 support (our > officially supported Python 3 version), as well as smaller teams overall > working on things like critical bugs. 
> > Unless and until we declare Python 3.7 as our new target (which I don't think > we are ready to do yet), these kinds of patches will be on a best effort basis. This is exactly what I'm complaining about. OpenStack upstream has very wrong priorities. If we really are to switch to Python 3, then we've got to make sure we're current, because that's the version distros end up running. Or maybe we only care if "it works on devstack" (tm)? Cheers, Thomas Goirand (zigo) From mordred at inaugust.com Tue Aug 7 12:35:55 2018 From: mordred at inaugust.com (Monty Taylor) Date: Tue, 7 Aug 2018 07:35:55 -0500 Subject: [openstack-dev] [requirements][release] FFE for openstacksdk 0.17.2 Message-ID: <082089a7-124e-2d20-77a5-8b5e9c0a8748@inaugust.com> Hey all, I'd like to request an FFE to release 0.17.2 of openstacksdk from stable/rocky. Infra discovered an issue that affects the production nodepool related to the multi-threaded TaskManager and exception propagation. When it gets triggered, we lose an entire cloud of capacity (whoops) until we restart the associated nodepool-launcher process. Nothing in OpenStack uses the particular feature in openstacksdk in question (yet), so nobody should need to even bump constraints. Thanks! Monty From work at seanmooney.info Tue Aug 7 13:24:44 2018 From: work at seanmooney.info (Sean Mooney) Date: Tue, 7 Aug 2018 14:24:44 +0100 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <20180806190241.GA3368@devvm1> Message-ID: On 7 August 2018 at 12:52, Thomas Goirand wrote: > On 08/06/2018 09:02 PM, Sean McGinnis wrote: >>> >>> I didn't have time to investigate these, but at least Glance was >>> affected, and a patch was sent (as well as an async patch).
None of them >>> has been merged yet: >>> >>> https://review.openstack.org/#/c/586050/ >>> https://review.openstack.org/#/c/586716/ >>> >>> That'd be ok if at least there was some reviews. It looks like nobody >>> cares but Debian & Ubuntu people... :( >>> >> >> Keep in mind that your priorities are different than everyone elses. There are >> large parts of the community still working on Python 3.5 support (our >> officially supported Python 3 version), as well as smaller teams overall >> working on things like critical bugs. >> >> Unless and until we declare Python 3.7 as our new target (which I don't think >> we are ready to do yet), these kinds of patches will be on a best effort basis. > > This is exactly what I'm complaining about. OpenStack upstream has very > wrong priorities. If we really are to switch to Python 3, then we got to > make sure we're current, because that's the version distros are end up > running. Or maybe we only care if "it works on devstack" (tm)? Python 3.7 has some backward-incompatible changes, if I recall correctly, such as forked threads not inheriting open file descriptors from the parent. I don't think that will bite us, but it might mess with the privsep daemon, though I think we fork a full process, not a thread, in that case. The point I'm trying to make here is that following the latest Python versions is likely going to require us to either A) use only the backwards-compatible subset or B) make some code test which version of Python 3 we are using, the same way the six package does. So I'm not sure pushing for Python 3.7 is the right thing to do. I also would not assume all distros will ship 3.7 in the near term. I have not checked lately, but I believe CentOS 7 makes 3.4 and 3.6 available in the default repos.
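The per-version gating that option B implies can be as small as a guard on `sys.version_info`. This is a sketch, not code from any OpenStack project; the supported band shown is just the min/max values being discussed in this thread:

```python
import sys

# Supported interpreter band under discussion in this thread
# (min 3.5, max 3.6 for Rocky) -- purely illustrative values.
MIN_PY3 = (3, 5)
MAX_PY3 = (3, 6)


def py3_supported(version=None):
    """Return True if ``version`` (default: the running interpreter)
    falls inside the supported Python 3 band, inclusive on both ends."""
    version = tuple(version or sys.version_info[:2])
    return MIN_PY3 <= version <= MAX_PY3
```

A project could use the same check to branch around behavior that changed in a given release (the file-descriptor inheritance point above, for instance), rather than pinning to a single interpreter version.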
Ubuntu 18.04 ships with 3.6, I believe. I'm not sure about other Linux distros, but since most OpenStack deployments are done on LTS releases of operating systems, I would suspect that Python 3.6 will be the main Python 3 version we see deployed in production for some time. Having a 3.7 gate is not a bad idea, but priority-wise a 3.6 gate would be much higher on my list. I think we as a community will have to decide on the minimum and maximum Python 3 versions we support for each release and adjust as we go forward. I would suggest a min of 3.5 and a max of 3.6 for Rocky. For Stein, perhaps bump that to a min of 3.6 and a max of 3.7, but I think this is something that needs to be addressed community-wide via a governance resolution rather than per project. It will also impact the external Python libs we can depend on, which is another reason I think this needs to be a community-wide discussion and goal that is informed by what distros are doing but not mandated by what any one distro is doing. regards sean. > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Tue Aug 7 13:29:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 7 Aug 2018 08:29:04 -0500 Subject: [openstack-dev] [Openstack-operators] [nova] StarlingX diff analysis In-Reply-To: References: Message-ID: <45bd7236-b9f8-026d-620b-7356d4effa49@gmail.com> On 8/7/2018 1:10 AM, Flint WALRUS wrote: > I didn’t had time to check StarlingX code quality, how did you feel it > while you were doing your analysis? I didn't dig into the test diffs themselves, but it was my impression that from what I was poking around in the local git repo, there were several changes which didn't have any test coverage.
For the really big full stack changes (L3 CAT, CPU scaling and shared/pinned CPUs on same host), toward the end I just started glossing over a lot of that because it's so much code in so many places, so I can't really speak very well to how it was written or how well it is tested (maybe WindRiver had a more robust CI system running integration tests, I don't know). There were also some things which would have been caught in code review upstream. For example, they ignore the "force" parameter for live migration so that live migration requests always go through the scheduler. However, the "force" parameter is only on newer microversions. Before that, if you specified a host at all it would bypass the scheduler, but the change didn't take that into account, so they still have gaps in some of the things they were trying to essentially disable in the API. On the whole I think the quality is OK. It's not really possible to accurately judge that when looking at a single diff this large. -- Thanks, Matt From zigo at debian.org Tue Aug 7 14:11:43 2018 From: zigo at debian.org (Thomas Goirand) Date: Tue, 7 Aug 2018 16:11:43 +0200 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <20180806190241.GA3368@devvm1> Message-ID: <30fd7e68-3a58-2ab1-bba0-c4c5e0eb2bf5@debian.org> On 08/07/2018 03:24 PM, Sean Mooney wrote: > so im not sure pushing for python 3.7 is the right thing to do. also i would not > assume all distros will ship 3.7 in the near term. i have not check lately but > i believe cento 7 unless make 3.4 and 3.6 available in the default repos. > ubuntu 18.04 ships with 3.6 i believe The current plan for Debian is that we'll be trying to push for Python 3.7 for Buster, which freezes in January. 
This freeze date means that it's going to be Rocky that will end up in the next Debian release. If Python 3.7 is a failure, then late November, we will remove Python 3.7 from Unstable and let Buster release with 3.6. As for Ubuntu, it is currently unclear if 18.10 will be released with Python 3.7 or not, but I believe they are trying to do that. If not, then 19.04 will for sure be released with Python 3.7. > im not sure about other linux distros but since most openstack > deployment are done > on LTS releases of operating systems i would suspect that python 3.6 > will be the main > python 3 versions we see deployed in production for some time. In short: that's wrong. > having a 3.7 gate is not a bad idea but priority wise have a 3.6 gate > would be much higher on my list. Wrong list. One version behind. > i think we as a community will have to decide on the minimum and > maximum python 3 versions > we support for each release and adjust as we go forward. Whatever the OpenStack community decides is not going to change what distributions like Debian will do. This type of reasoning lacks a much needed humility. > i would suggst a min of 3.5 and max of 3.6 for rocky. My suggestion is that these bugs are of very high importance and that they should at least deserve attention. That the gate for Python 3.7 isn't ready, I can understand, as everyone's time is limited. This doesn't mean that the OpenStack community at large should just dismiss patches that are important for downstream. > for stien perhaps bump that to min of 3.6 max 3.7 but i think this is > something that needs to be address community wide > via a governance resolution rather then per project. At this point, dropping 3.5 isn't a good idea either, even for Stein. 
> it will also > impact the external python lib we can depend on too which is > another reason i think thie need to be a comuntiy wide discussion and > goal that is informed by what distros are doing but > not mandated by what any one distro is doing. > regards > sean. Postponing any attempt to support anything current is always a bad idea. I don't see why there's even a controversy when one attempts to fix bugs that will, sooner or later, also hit the gate. Cheers, Thomas Goirand (zigo) From whayutin at redhat.com Tue Aug 7 14:14:22 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 7 Aug 2018 08:14:22 -0600 Subject: [openstack-dev] [tripleo] 3rd party ovb jobs are down In-Reply-To: References: Message-ID: On Mon, Aug 6, 2018 at 5:55 PM Wesley Hayutin wrote: > On Mon, Aug 6, 2018 at 12:56 PM Wesley Hayutin > wrote: > >> Greetings, >> >> There is currently an unplanned outtage atm for the tripleo 3rd party OVB >> based jobs. >> We will contact the list when there are more details. >> >> Thank you! >> > > OK, > I'm going to call an end to the current outtage. We are closely monitoring > the ovb 3rd party jobs. > I'll called for the outtage when we hit [1]. Once I deleted the stack > that moved teh HA routers to back_up state, the networking came back online. > > Additionally Kieran and I had to work through a number of instances that > required admin access to remove. > Once those resources were cleaned up our CI tooling removed the rest of > the stacks in delete_failed status. The stacks in delete_failed status > were holding ip address that were causing new stacks to fail [2] > > There are still active issues that could cause OVB jobs to fail. > This connection issues [3] was originaly thought to be DNS, however that > turned out to not be the case. > You may also see your job have a "node_failure" status, Paul has sent > updates about this issue and is working on a patch and integration into rdo > software factory. 
> > The CI team is close to including all the console logs into the regular > job logs, however if needed atm they can be viewed at [5]. > We are also adding the bmc to the list of instances that we collect logs > from. > > *To summarize* the most recent outtage was infra related and the errors > were swallowed up in the bmc console log that at the time was not available > to users. > > We continue to monitor that ovb jobs at http://cistatus.tripleo.org/ > The legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master job > is at a 53% pass rate, it needs to move to a > 85% pass rate to match other > check jobs. > > Thanks all! > Following up, legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master job is at a 78.6% pass rate today. Certainly an improvement. We had a quick sync meeting this morning w/ RDO-Cloud admins, tripleo and infra folks. There are two remaining issues. There is an active issue w/ network connections, and an issue w/ instances booting into node_failure status. New issues creep up all the time and we're actively monitoring those as well. Still shooting for 85% pass rate. 
Thanks all > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1570136 > [2] http://paste.openstack.org/show/727444/ > [3] https://bugs.launchpad.net/tripleo/+bug/1785342 > [4] https://review.openstack.org/#/c/584488/ > [5] http://38.145.34.41/console-logs/?C=M;O=D > > > > > > >> >> -- >> >> Wes Hayutin >> >> Associate MANAGER >> >> Red Hat >> >> >> >> w hayutin at redhat.com T: +1919 <+19197544114> >> 4232509 IRC: weshay >> >> >> View my calendar and check my availability for meetings HERE >> >> > -- > > Wes Hayutin > > Associate MANAGER > > Red Hat > > > > w hayutin at redhat.com T: +1919 <+19197544114> > 4232509 IRC: weshay > > > View my calendar and check my availability for meetings HERE > > -- Wes Hayutin Associate MANAGER Red Hat w hayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Tue Aug 7 14:21:39 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 7 Aug 2018 09:21:39 -0500 Subject: [openstack-dev] [requirements][release] FFE for openstacksdk 0.17.2 In-Reply-To: <082089a7-124e-2d20-77a5-8b5e9c0a8748@inaugust.com> References: <082089a7-124e-2d20-77a5-8b5e9c0a8748@inaugust.com> Message-ID: <20180807142139.kk2jmbokrhkkzprk@gentoo.org> On 18-08-07 07:35:55, Monty Taylor wrote: > Hey all, > > I'd like to request an FFE to release 0.17.2 of openstacksdk from > stable/rocky. > > Infra discovered an issue that affects the production nodepool related to > the multi-threaded TaskManager and exception propagation. When it gets > triggered, we lose an entire cloud of capacity (whoops) until we restart the > associated nodepool-launcher process. > > Nothing in OpenStack uses the particular feature in openstacksdk in question > (yet), so nobody should need to even bump constraints. 
> Well, considering constraints is the minimum you can ask for an FFE for, we'll go with that :P FFE approved -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From stdake at cisco.com Tue Aug 7 14:28:20 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Tue, 7 Aug 2018 14:28:20 +0000 Subject: [openstack-dev] [kolla] Dropping core reviewer Message-ID: <1533652097121.31214@cisco.com> Kollians, Many of you that know me well know my feelings towards participating as a core reviewer in a project. Folks with the ability to +2/+W gerrit changes can sometimes unintentionally harm a codebase if they are not consistently reviewing and maintaining codebase context. I also believe in leading an exception-free life, and I'm no exception to my own rules. As I am not reviewing Kolla actively given my OpenStack individually elected board of directors service and other responsibilities, I am dropping core reviewer ability for the Kolla repositories. I want to take a moment to thank the thousands of people that have contributed and shaped Kolla into the modern deployment system for OpenStack that it is today. I personally find Kolla to be my finest body of work as a leader. Kolla would not have been possible without the involvement of the OpenStack global community working together to resolve the operational pain points of OpenStack. Thank you for your contributions. Finally, quoting Thierry [1] from our initial application to OpenStack, " ... Long live Kolla!" Cheers! -steve [1] https://review.openstack.org/#/c/206789/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alan.meadows at gmail.com Tue Aug 7 14:31:07 2018 From: alan.meadows at gmail.com (Alan Meadows) Date: Tue, 07 Aug 2018 07:31:07 -0700 Subject: [openstack-dev] [openstack-helm] [vote] Core Reviewer nomination for Chris Wedgwood In-Reply-To: References: Message-ID: <792545d64dbe6da1a40a7fa5667aa77eb71e0ec1.camel@gmail.com> +1 On Fri, 2018-08-03 at 16:52 +0000, Richard Wellum wrote: > > +1 > > On Fri, Aug 3, 2018 at 11:39 AM Steve Wilkerson com> > wrote: > > > +1 > > > > On Fri, Aug 3, 2018 at 10:05 AM, MCEUEN, MATT > > wrote: > > > > > OpenStack-Helm core reviewer team, > > > > > > I would like to nominate Chris Wedgwood as core review for the > > > OpenStack-Helm. > > > > > > Chris is one of the most prolific reviewers in the OSH community, > > > but > > > more importantly is a very thorough and helpful reviewer. Many > > > of my most > > > insightful reviews are thanks to him, and I know the same is true > > > for many > > > other team members. In addition, he is an accomplished OSH > > > engineer and > > > has contributed features that run the gamut, including Ceph > > > integration, > > > Calico support, Neutron configuration, Gating, and core Helm- > > > Toolkit > > > functionality. > > > > > > Please consider this email my +1 vote. > > > > > > A +1 vote indicates that you are in favor of his core reviewer > > > candidacy, > > > and a -1 is a veto. Voting will be open for the next seven days > > > (closing > > > 8/10) or until all OpenStack-Helm core reviewers cast their vote. 
> > > > > > Thank you, > > > Matt McEuen > > > > > > _________________________________________________________________ > > > _________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From bdobreli at redhat.com Tue Aug 7 14:43:28 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 7 Aug 2018 16:43:28 +0200 Subject: [openstack-dev] [tripleo] Patches to speed up plan operations In-Reply-To: References: Message-ID: <769e5290-6887-3610-b8de-449af1559327@redhat.com> On 8/2/18 1:34 AM, Ian Main wrote: > > Hey folks! > > So I've been working on some patches to speed up plan operations in > TripleO.  This was originally driven by the UI needing to be able to > perform a 'plan upload' in something less than several minutes. :) > > https://review.openstack.org/#/c/581153/ > https://review.openstack.org/#/c/581141/ > > I have a functioning set of patches, and it actually cuts over 2 minutes > off the overcloud deployment time. > > Without patch: > + openstack overcloud plan create --templates > /home/stack/tripleo-heat-templates/ overcloud > Creating Swift container to store the plan > Creating plan from template files in: /home/stack/tripleo-heat-templates/ > Plan created. > real    3m3.415s > > With patch: > + openstack overcloud plan create --templates > /home/stack/tripleo-heat-templates/ overcloud > Creating Swift container to store the plan > Creating plan from template files in: /home/stack/tripleo-heat-templates/ > Plan created. > real    0m44.694s > > This is on VMs.  On real hardware it now takes something like 15-20 > seconds to do the plan upload which is much more manageable from the UI > standpoint. > > Some things about what this patch does: > > - It makes use of process-templates.py (written for the undercloud) to > process the jinjafied templates.  
This reduces replication with the > existing version in the code base and is very fast as it's all done on > local disk. Just wanted to say Special Big Thank You for doing that code consolidation work! > - It stores the bulk of the templates as a tarball in swift.  Any > individual files in swift take precedence over the contents of the > tarball so it should be backwards compatible.  This is a great speed up > as we're not accessing a lot of individual files in swift. > > There's still some work to do; cleaning up and fixing the unit tests, > testing upgrades etc.  I just wanted to get some feedback on the general > idea and hopefully some reviews and/or help - especially with the unit > test stuff. > > Thanks everyone! > >     Ian > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From zigo at debian.org Tue Aug 7 14:57:59 2018 From: zigo at debian.org (Thomas Goirand) Date: Tue, 7 Aug 2018 16:57:59 +0200 Subject: [openstack-dev] Paste unmaintained In-Reply-To: <1533219691-sup-5515@lrrr.local> References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> <1533219691-sup-5515@lrrr.local> Message-ID: On 08/02/2018 04:27 PM, Doug Hellmann wrote: > > The last I heard, a few years ago Ian moved away from Python to > JavaScript as part of his work at Mozilla. The support around > paste.deploy has been sporadic since then, and was one of the reasons > we discussed a goal of dropping paste.ini as a configuration file. > > Do we have a real sense of how many of the projects below, which > list Paste in requirements.txt, actually use it directly or rely > on it for configuration? > > Doug > > $ beagle search --ignore-case --file requirements.txt 'paste[><=! 
]'
> +----------------------------------------+--------------------------------------------------------+------+--------------------+
> | Repository                             | Filename                                               | Line | Text               |
> +----------------------------------------+--------------------------------------------------------+------+--------------------+
> | airship-armada                         | requirements.txt                                       | 8    | Paste>=2.0.3       |
> | airship-deckhand                       | requirements.txt                                       | 12   | Paste # MIT        |
> | anchor                                 | requirements.txt                                       | 9    | Paste # MIT        |
> | apmec                                  | requirements.txt                                       | 6    | Paste>=2.0.2 # MIT |
> | barbican                               | requirements.txt                                       | 22   | Paste>=2.0.2 # MIT |
> | cinder                                 | requirements.txt                                       | 37   | Paste>=2.0.2 # MIT |
> | congress                               | requirements.txt                                       | 11   | Paste>=2.0.2 # MIT |
> | designate                              | requirements.txt                                       | 25   | Paste>=2.0.2 # MIT |
> | ec2-api                                | requirements.txt                                       | 20   | Paste # MIT        |
> | freezer-api                            | requirements.txt                                       | 8    | Paste>=2.0.2 # MIT |
> | gce-api                                | requirements.txt                                       | 16   | Paste>=2.0.2 # MIT |
> | glance                                 | requirements.txt                                       | 31   | Paste>=2.0.2 # MIT |
> | glare                                  | requirements.txt                                       | 29   | Paste>=2.0.2 # MIT |
> | karbor                                 | requirements.txt                                       | 28   | Paste>=2.0.2 # MIT |
> | kingbird                               | requirements.txt                                       | 7    | Paste>=2.0.2 # MIT |
> | manila                                 | requirements.txt                                       | 30   | Paste>=2.0.2 # MIT |
> | meteos                                 | requirements.txt                                       | 29   | Paste # MIT        |
> | monasca-events-api                     | requirements.txt                                       | 6    | Paste # MIT        |
> | monasca-log-api                        | requirements.txt                                       | 6    | Paste>=2.0.2 # MIT |
> | murano                                 | requirements.txt                                       | 28   | Paste>=2.0.2 # MIT |
> | neutron                                | requirements.txt                                       | 6    | Paste>=2.0.2 # MIT |
> | nova                                   | requirements.txt                                       | 19   | Paste>=2.0.2 # MIT |
> | novajoin                               | requirements.txt                                       | 6    | Paste>=2.0.2 # MIT |
> | oslo.service                           | requirements.txt                                       | 17   | Paste>=2.0.2 # MIT |
> | requirements                           | global-requirements.txt                                | 187  | Paste # MIT        |
> | searchlight                            | requirements.txt                                       | 27   | Paste>=2.0.2 # MIT |
> | tacker                                 | requirements.txt                                       | 6    | Paste>=2.0.2 # MIT |
> | tatu                                   | requirements.txt                                       | 18   | Paste # MIT        |
> | tricircle                              | requirements.txt                                       | 7    | Paste>=2.0.2 # MIT |
> | trio2o                                 | requirements.txt                                       | 7    | Paste # MIT        |
> | trove                                  | requirements.txt                                       | 11   | Paste>=2.0.2 # MIT |
> | upstream-institute-virtual-environment | elements/upstream-training/static/tmp/requirements.txt | 147  | Paste==2.0.3       |
> +----------------------------------------+--------------------------------------------------------+------+--------------------+

Doug,

It's nice to have the direct dependencies listed, but this doesn't cover everything. If using uwsgi, and you want any kind of logging from the wsgi application, you need to use pastescript, which itself depends on paste at runtime. So, anything which potentially has an API also depends indirectly on Paste.

Cheers,

Thomas Goirand (zigo)

From ianyrchoi at gmail.com Tue Aug 7 15:06:07 2018
From: ianyrchoi at gmail.com (Ian Y. Choi)
Date: Wed, 8 Aug 2018 00:06:07 +0900
Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation
In-Reply-To: <4ef95237972fb567d9eaebba82bf9066@arcor.de>
References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> <5B5F295F.3090608@openstack.org> <1f5afd62cc3a9a8923586a404e707366@arcor.de> <16e69b47c8b71bf6f920ab8f3df61928@arcor.de> <5B63349F.4010204@openstack.org> <5B63565F.1010109@openstack.org> <5B689C6C.2010006@openstack.org> <4ef95237972fb567d9eaebba82bf9066@arcor.de>
Message-ID: <675a5bfb-c4c0-00f4-4b23-08d2f80fb310@gmail.com>

Hello Frank,

I did not notice the list of projects, but just from the perspective of translation metrics in Stackalytics and Zanata, whitepaper translation contribution is retrieved from Zanata to Stackalytics according to the implementation through https://review.openstack.org/#/c/288871/ .
For the perspective on Stackalytics <-> the list of projects, another possible solution would be to create "openstack-org" project in Zanata, migrate edge-computing & container whitepaper into "openstack-org" project with different version names, and adding "openstack-org" project in Stackalytics for consistency.

With many thanks,

/Ian

Frank Kloeker wrote on 8/7/2018 4:49 PM:
> Many thanks, Jimmy! At last I draw your attention to Stackalytics.
> Translation metrics for whitepapers not counted there. Maybe you have
> an advice for https://review.openstack.org/#/c/588965/
>
> kind regards
>
> Frank
>
> Am 2018-08-06 21:07, schrieb Jimmy McArthur:
>> A heads up that the Translators are now listed at the bottom of the
>> page as well, along with the rest of the paper contributors:
>>
>> https://www.openstack.org/edge-computing/cloud-edge-computing-beyond-the-data-center?lang=ja_JP
>>
>> Cheers!
>> Jimmy
>>
>> Frank Kloeker wrote:
>>> Hi Jimmy,
>>>
>>> thanks for announcement. Great stuff! It looks really great and it's
>>> easy to navigate. I think a special thanks goes to Sebastian for
>>> designing the pages. One small remark: have you tried text-align:
>>> justify? I think it would be a little bit more readable, like a
>>> science paper (German word is: Ordnung)
>>> I put the projects again on the frontpage of the translation
>>> platform, so we'll get more translations shortly.
>>>
>>> kind regards
>>>
>>> Frank
>>>
>>> Am 2018-08-02 21:07, schrieb Jimmy McArthur:
>>>> The Edge and Containers translations are now live.  As new
>>>> translations become available, we will add them to the page.
>>>>
>>>> https://www.openstack.org/containers/
>>>> https://www.openstack.org/edge-computing/
>>>>
>>>> Note that the Chinese translation has not been added to Zanata at this
>>>> time, so I've left the PDF download up on that page.
>>>>
>>>> Thanks everyone and please let me know if you have questions or
>>>> concerns!
>>>>
>>>> Cheers!
>>>> Jimmy >>>> >>>> Jimmy McArthur wrote: >>>>> Frank, >>>>> >>>>> We expect to have these papers up this afternoon. I'll update this >>>>> thread when we do. >>>>> >>>>> Thanks! >>>>> Jimmy >>>>> >>>>> Frank Kloeker wrote: >>>>>> Hi Sebastian, >>>>>> >>>>>> okay, it's translated now. In Edge whitepaper is the problem with >>>>>> XML-Parsing of the term AT&T. Don't know how to escape this. >>>>>> Maybe you will see the warning during import too. >>>>>> >>>>>> kind regards >>>>>> >>>>>> Frank >>>>>> >>>>>> Am 2018-07-30 20:09, schrieb Sebastian Marcet: >>>>>>> Hi Frank, >>>>>>> i was double checking pot file and realized that original pot >>>>>>> missed >>>>>>> some parts of the original paper (subsections of the paper) >>>>>>> apologizes >>>>>>> on that >>>>>>> i just re uploaded an updated pot file with missing subsections >>>>>>> >>>>>>> regards >>>>>>> >>>>>>> On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker >>>>>>> wrote: >>>>>>> >>>>>>>> Hi Jimmy, >>>>>>>> >>>>>>>> from the GUI I'll get this link: >>>>>>>> >>>>>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>>>>> >>>>>>>> [1] >>>>>>>> >>>>>>>> paper version  are only in container whitepaper: >>>>>>>> >>>>>>>> >>>>>>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>>>>> >>>>>>>> [2] >>>>>>>> >>>>>>>> In general there is no group named papers >>>>>>>> >>>>>>>> kind regards >>>>>>>> >>>>>>>> Frank >>>>>>>> >>>>>>>> Am 2018-07-30 17:06, schrieb Jimmy McArthur: >>>>>>>> Frank, >>>>>>>> >>>>>>>> We're getting a 404 when looking for the pot file on the Zanata >>>>>>>> API: >>>>>>>> >>>>>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>>>>> >>>>>>>> [3] >>>>>>>> >>>>>>>> As a result, we can't pull the po files.  Any idea what might be >>>>>>>> happening? 
>>>>>>>> >>>>>>>> Seeing the same thing with both papers... >>>>>>>> >>>>>>>> Thank you, >>>>>>>> Jimmy >>>>>>>> >>>>>>>> Frank Kloeker wrote: >>>>>>>> Hi Jimmy, >>>>>>>> >>>>>>>> Korean and German version are now done on the new format. Can you >>>>>>>> check publishing? >>>>>>>> >>>>>>>> thx >>>>>>>> >>>>>>>> Frank >>>>>>>> >>>>>>>> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >>>>>>>> Hi all - >>>>>>>> >>>>>>>> Follow up on the Edge paper specifically: >>>>>>>> >>>>>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>>>>> >>>>>>>> [4] This is now available. As I mentioned on IRC this morning, it >>>>>>>> should >>>>>>>> be VERY close to the PDF.  Probably just needs a quick review. >>>>>>>> >>>>>>>> Let me know if I can assist with anything. >>>>>>>> >>>>>>>> Thank you to i18n team for all of your help!!! >>>>>>>> >>>>>>>> Cheers, >>>>>>>> Jimmy >>>>>>>> >>>>>>>> Jimmy McArthur wrote: >>>>>>>> Ian raises some great points :) I'll try to address below... >>>>>>>> >>>>>>>> Ian Y. Choi wrote: >>>>>>>> Hello, >>>>>>>> >>>>>>>> When I saw overall translation source strings on container >>>>>>>> whitepaper, I would infer that new edge computing whitepaper >>>>>>>> source strings would include HTML markup tags. >>>>>>>> One of the things I discussed with Ian and Frank in Vancouver is >>>>>>>> the expense of recreating PDFs with new translations.  It's >>>>>>>> prohibitively expensive for the Foundation as it requires design >>>>>>>> resources which we just don't have.  As a result, we created the >>>>>>>> Containers whitepaper in HTML, so that it could be easily updated >>>>>>>> w/o working with outside design contractors.  I indicated that we >>>>>>>> would also be moving the Edge paper to HTML so that we could >>>>>>>> prevent >>>>>>>> that additional design resource cost. 
>>>>>>>> On the other hand, the source strings of edge computing whitepaper >>>>>>>> which I18n team previously translated do not include HTML markup >>>>>>>> tags, since the source strings are based on just text format. >>>>>>>> The version that Akihiro put together was based on the Edge PDF, >>>>>>>> which we unfortunately didn't have the resources to implement >>>>>>>> in the >>>>>>>> same format. >>>>>>>> >>>>>>>> I really appreciate Akihiro's work on RST-based support on >>>>>>>> publishing translated edge computing whitepapers, since >>>>>>>> translators do not have to re-translate all the strings. >>>>>>>> I would like to second this. It took a lot of initiative to >>>>>>>> work on >>>>>>>> the RST-based translation.  At the moment, it's just not usable >>>>>>>> for >>>>>>>> the reasons mentioned above. >>>>>>>> On the other hand, it seems that I18n team needs to investigate on >>>>>>>> translating similar strings of HTML-based edge computing >>>>>>>> whitepaper >>>>>>>> source strings, which would discourage translators. >>>>>>>> Can you expand on this? I'm not entirely clear on why the HTML >>>>>>>> based translation is more difficult. >>>>>>>> >>>>>>>> That's my point of view on translating edge computing whitepaper. >>>>>>>> >>>>>>>> For translating container whitepaper, I want to further ask the >>>>>>>> followings since *I18n-based tools* >>>>>>>> would mean for translators that translators can test and publish >>>>>>>> translated whitepapers locally: >>>>>>>> >>>>>>>> - How to build translated container whitepaper using original >>>>>>>> Silverstripe-based repository? >>>>>>>> https://docs.openstack.org/i18n/latest/tools.html [5] describes >>>>>>>> well how to build translated artifacts for RST-based OpenStack >>>>>>>> repositories >>>>>>>> but I could not find the way how to build translated container >>>>>>>> whitepaper with translated resources on Zanata. >>>>>>>> This is a little tricky.  
It's possible to set up a local version >>>>>>>> of the OpenStack website >>>>>>>> >>>>>>> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>>>>> >>>>>>>> [6]).  However, we have to manually ingest the po files as they >>>>>>>> are >>>>>>>> completed and then push them out to production, so that >>>>>>>> wouldn't do >>>>>>>> much to help with your local build.  I'm open to suggestions on >>>>>>>> how >>>>>>>> we can make this process easier for the i18n team. >>>>>>>> >>>>>>>> Thank you, >>>>>>>> Jimmy >>>>>>>> >>>>>>>> With many thanks, >>>>>>>> >>>>>>>> /Ian >>>>>>>> >>>>>>>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>>>>>> Frank, >>>>>>>> >>>>>>>> I'm sorry to hear about the displeasure around the Edge paper.  As >>>>>>>> mentioned in a prior thread, the RST format that Akihiro worked >>>>>>>> did >>>>>>>> not work with the  Zanata process that we have been using with our >>>>>>>> CMS.  Additionally, the existing EDGE page is a PDF, so we had to >>>>>>>> build a new template to work with the new HTML whitepaper >>>>>>>> layout we >>>>>>>> created for the Containers paper. I outlined this in the thread " >>>>>>>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>>>>>>> Whitepaper Translation" on 6/25/18 and mentioned we would be ready >>>>>>>> with the template around 7/13. >>>>>>>> >>>>>>>> We completed the work on the new whitepaper template and then put >>>>>>>> out the pot files on Zanata so we can get the po language files >>>>>>>> back. If this process is too cumbersome for the translation team, >>>>>>>> I'm open to discussion, but right now our entire translation >>>>>>>> process >>>>>>>> is based on the official OpenStack Docs translation process >>>>>>>> outlined >>>>>>>> by the i18n team: >>>>>>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html [7] >>>>>>>> >>>>>>>> Again, I realize Akihiro put in some work on his own proposing the >>>>>>>> new translation type. 
If the i18n team is moving to this format >>>>>>>> instead, we can work on redoing our process. >>>>>>>> >>>>>>>> Please let me know if I can clarify further. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Jimmy >>>>>>>> >>>>>>>> Frank Kloeker wrote: >>>>>>>> Hi Jimmy, >>>>>>>> >>>>>>>> permission was added for you and Sebastian. The Container >>>>>>>> Whitepaper >>>>>>>> is on the Zanata frontpage now. But we removed Edge Computing >>>>>>>> whitepaper last week because there is a kind of displeasure in the >>>>>>>> team since the results of translation are still not published >>>>>>>> beside >>>>>>>> Chinese version. It would be nice if we have a commitment from the >>>>>>>> Foundation that results are published in a specific timeframe. >>>>>>>> This >>>>>>>> includes your requirements until the translation should be >>>>>>>> available. >>>>>>>> >>>>>>>> thx Frank >>>>>>>> >>>>>>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>>>>>> Sorry, I should have also added... we additionally need >>>>>>>> permissions >>>>>>>> so >>>>>>>> that we can add the a new version of the pot file to this project: >>>>>>>> >>>>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>>>> >>>>>>>> [8] Thanks! >>>>>>>> Jimmy >>>>>>>> >>>>>>>> Jimmy McArthur wrote: >>>>>>>> Hi all - >>>>>>>> >>>>>>>> We have both of the current whitepapers up and available for >>>>>>>> translation.  Can we promote these on the Zanata homepage? >>>>>>>> >>>>>>>> >>>>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>>>> >>>>>>>> [9] >>>>>>>> >>>>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>>>> >>>>>>>> [10] Thanks all! 
>>>>>>>> Jimmy
>>>>>>>
>>>>>>> __________________________________________________________________________
>>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11]
>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12]
>>>>>>>
>>>>>>> Links:
>>>>>>> ------
>>>>>>> [1] https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center
>>>>>>> [2] https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack
>>>>>>> [3] https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing
>>>>>>> [4]
https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192
>>>>>>> [5] https://docs.openstack.org/i18n/latest/tools.html
>>>>>>> [6] https://github.com/OpenStackweb/openstack-org/blob/master/installation.md
>>>>>>> [7] https://docs.openstack.org/i18n/latest/en_GB/tools.html
>>>>>>> [8] https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835
>>>>>>> [9] https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684
>>>>>>> [10] https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684
>>>>>>> [11] http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>>>> [12] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From cdent+os at anticdent.org Tue Aug 7 15:10:32 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Tue, 7 Aug 2018 16:10:32 +0100 (BST)
Subject: [openstack-dev] Paste unmaintained
In-Reply-To:
References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> <1533219691-sup-5515@lrrr.local>
Message-ID:

On Tue, 7 Aug 2018, Thomas Goirand wrote:

> That's nice to have direct dependency, but this doesn't cover
> everything.
If using uwsgi, if you want any kind of logging from the > wsgi application, you need to use pastescript, which itself runtimes > depends on paste. So, anything which potentially has an API also depends > indirectly on Paste. Can you point to more info on this, as it doesn't correspond with my experience of using uwsgi? In my experience uwsgi has built in support for logging without dependencies: https://uwsgi-docs.readthedocs.io/en/latest/LogFormat.html As I said in IRC a while ago: It doesn't really matter how many of our projects are using Paste or PasteDeploy: If any of them are, then we have a problem to address. We already know that some of the big/popular ones use it. That's enough to require us to work on a solution. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Tue Aug 7 15:17:46 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 07 Aug 2018 11:17:46 -0400 Subject: [openstack-dev] Paste unmaintained In-Reply-To: References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> <1533219691-sup-5515@lrrr.local> Message-ID: <1533655006-sup-850@lrrr.local> Excerpts from Thomas Goirand's message of 2018-08-07 16:57:59 +0200: > On 08/02/2018 04:27 PM, Doug Hellmann wrote: > > > > The last I heard, a few years ago Ian moved away from Python to > > JavaScript as part of his work at Mozilla. The support around > > paste.deploy has been sporadic since then, and was one of the reasons > > we discussed a goal of dropping paste.ini as a configuration file. > > Doug, > > That's nice to have direct dependency, but this doesn't cover > everything. If using uwsgi, if you want any kind of logging from the > wsgi application, you need to use pastescript, which itself runtimes > depends on paste. So, anything which potentially has an API also depends > indirectly on Paste. I'm not sure why that would be the case. Surely *any* middleware could set up logging? 
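[Editor's note: to illustrate the point that WSGI request logging does not require Paste or PasteScript, here is a minimal sketch of the kind of middleware being alluded to. The names `RequestLogMiddleware` and `hello_app` are illustrative only, not an existing oslo or Paste API; a real deployment would wire the logger into its own logging configuration.]

```python
import logging

logging.basicConfig(level=logging.INFO)


class RequestLogMiddleware(object):
    """Plain-WSGI access logging; needs neither Paste nor PasteScript."""

    def __init__(self, app, logger=None):
        self.app = app
        self.log = logger or logging.getLogger("wsgi.access")

    def __call__(self, environ, start_response):
        captured = {}

        def logging_start_response(status, headers, exc_info=None):
            captured["status"] = status  # remember the status for the log line
            return start_response(status, headers, exc_info)

        response = self.app(environ, logging_start_response)
        self.log.info("%s %s -> %s",
                      environ.get("REQUEST_METHOD"),
                      environ.get("PATH_INFO"),
                      captured.get("status"))
        return response


def hello_app(environ, start_response):
    # Stand-in for any WSGI application (nova-api, glance-api, ...)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]


# The wrapped app is what uwsgi/mod_wsgi/gunicorn would be pointed at:
application = RequestLogMiddleware(hello_app)
```

Served under any WSGI container this logs each request without pulling in pastescript; uwsgi's own log-format machinery (linked above) is another Paste-free option.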
Doug From no-reply at openstack.org Tue Aug 7 15:18:37 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Tue, 07 Aug 2018 15:18:37 -0000 Subject: [openstack-dev] qinling 1.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for qinling for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/qinling/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/qinling/log/?h=stable/rocky Release notes for qinling can be found at: http://docs.openstack.org/releasenotes/qinling/ From dprince at redhat.com Tue Aug 7 15:18:59 2018 From: dprince at redhat.com (Dan Prince) Date: Tue, 7 Aug 2018 11:18:59 -0400 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> <927f5ff4ec528bdcc5877c7a1a5635c62f5f1cb5.camel@redhat.com> <5c220d66-d4e5-2b19-048c-af3a37c846a3@nemebean.com> <88d7f66c-4215-b032-0b98-2671f14dab21@redhat.com> Message-ID: On Thu, Aug 2, 2018 at 5:42 PM Steve Baker wrote: > > > > On 02/08/18 13:03, Alex Schultz wrote: > > On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya wrote: > >> On 7/6/18 7:02 PM, Ben Nemec wrote: > >>> > >>> > >>> On 07/05/2018 01:23 PM, Dan Prince wrote: > >>>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote: > >>>>> > >>>>> I would almost rather see us organize the directories by service > >>>>> name/project instead of implementation. 
> >>>>> > >>>>> Instead of: > >>>>> > >>>>> puppet/services/nova-api.yaml > >>>>> puppet/services/nova-conductor.yaml > >>>>> docker/services/nova-api.yaml > >>>>> docker/services/nova-conductor.yaml > >>>>> > >>>>> We'd have: > >>>>> > >>>>> services/nova/nova-api-puppet.yaml > >>>>> services/nova/nova-conductor-puppet.yaml > >>>>> services/nova/nova-api-docker.yaml > >>>>> services/nova/nova-conductor-docker.yaml > >>>>> > >>>>> (or perhaps even another level of directories to indicate > >>>>> puppet/docker/ansible?) > >>>> > >>>> I'd be open to this but doing changes on this scale is a much larger > >>>> developer and user impact than what I was thinking we would be willing > >>>> to entertain for the issue that caused me to bring this up (i.e. how to > >>>> identify services which get configured by Ansible). > >>>> > >>>> Its also worth noting that many projects keep these sorts of things in > >>>> different repos too. Like Kolla fully separates kolla-ansible and > >>>> kolla-kubernetes as they are quite divergent. We have been able to > >>>> preserve some of our common service architectures but as things move > >>>> towards kubernetes we may which to change things structurally a bit > >>>> too. > >>> > >>> True, but the current directory layout was from back when we intended to > >>> support multiple deployment tools in parallel (originally > >>> tripleo-image-elements and puppet). Since I think it has become clear that > >>> it's impractical to maintain two different technologies to do essentially > >>> the same thing I'm not sure there's a need for it now. It's also worth > >>> noting that kolla-kubernetes basically died because there wasn't enough > >>> people to maintain both deployment methods, so we're not the only ones who > >>> have found that to be true. 
If/when we move to kubernetes I would > >>> anticipate it going like the initial containers work did - development for a > >>> couple of cycles, then a switch to the new thing and deprecation of the old > >>> thing, then removal of support for the old thing. > >>> > >>> That being said, because of the fact that the service yamls are > >>> essentially an API for TripleO because they're referenced in user > >> > >> this ^^ > >> > >>> resource registries, I'm not sure it's worth the churn to move everything > >>> either. I think that's going to be an issue either way though, it's just a > >>> question of the scope. _Something_ is going to move around no matter how we > >>> reorganize so it's a problem that needs to be addressed anyway. > >> > >> [tl;dr] I can foresee reorganizing that API becomes a nightmare for > >> maintainers doing backports for queens (and the LTS downstream release based > >> on it). Now imagine kubernetes support comes within those next a few years, > >> before we can let the old API just go... > >> > >> I have an example [0] to share all that pain brought by a simple move of > >> 'API defaults' from environments/services-docker to environments/services > >> plus environments/services-baremetal. Each time a file changes contents by > >> its old location, like here [1], I had to run a lot of sanity checks to > >> rebase it properly. Like checking for the updated paths in resource > >> registries are still valid or had to/been moved as well, then picking the > >> source of truth for diverged old vs changes locations - all that to loose > >> nothing important in progress. > >> > >> So I'd say please let's do *not* change services' paths/namespaces in t-h-t > >> "API" w/o real need to do that, when there is no more alternatives left to > >> that. > >> > > Ok so it's time to dig this thread back up. I'm currently looking at > > the chrony support which will require a new service[0][1]. 
Rather than > > add it under puppet, we'll likely want to leverage ansible. So I guess > > the question is where do we put services going forward? Additionally > > as we look to truly removing the baremetal deployment options and > > puppet service deployment, it seems like we need to consolidate under > > a single structure. Given that we don't want force too much churn, > > does this mean that we should align to the docker/services/*.yaml > > structure or should we be proposing a new structure that we can try to > > align on. > > > > There is outstanding tech-debt around the nested stacks and references > > within these services when we added the container deployments so it's > > something that would be beneficial to start tackling sooner rather > > than later. Personally I think we're always going to have the issue > > when we rename files that could have been referenced by custom > > templates, but I don't think we can continue to carry the outstanding > > tech debt around these static locations. Should we be investing in > > coming up with some sort of mappings that we can use/warn a user on > > when we move files? > > When Stein development starts, the puppet services will have been > deprecated for an entire cycle. Can I suggest we use this reorganization > as the time we delete the puppet services files? This would release us > of the burden of maintaining a deployment method that we no longer use. > Also we'll gain a deployment speedup by removing a nested stack for each > docker based service. > > Then I'd suggest doing an "mv docker/services services" and moving any > remaining files in the puppet directory into that. This is basically the > naming that James suggested, except we wouldn't have to suffix the files > with -puppet.yaml, -docker.yaml unless we still had more than one > deployment method for that service. Refactoring the 'services' directories should be safer in the future (if we support softlinking as you suggest below). 
But even now I think we can do some level of organization within the 'services' directories to accommodate as very few users utilize these directly. It is primarily via the t-h-t 'environments' files that users consume our services. So long as we take care to keep our environments in order I think refactoring should be fine right? All my initial ask here was lets not confuse ourselves by puppet services that are now configured entirely by Ansible in the puppet/services directory. We can start shaping things however we'd like now and gradually update the existing environments to consume the new ways, taking care that upgrades are well supported along the way. And optimizations too! One really painful thing to go and do is moving the environments files themselves. We've already done this in Rocky (environments/services-docker is now called environments/services!) so most of the pain is already absorbed here. In hindsight I think we should be more careful how we add/refactor the environments directory in the future.... The rest of the tree is more internal to TripleO development however and we can refactor that more freely I think. > > Finally, we could consider symlinking docker/services to services for a > cycle. I'm not sure how a swift-stored plan would handle this, but this > would be a great reason to land Ian's plan speedup patch[1] which stores > tripleo-heat-templates in a tarball :) Aside from the performance benefits of this patch there are lots of hidden features there. Symlinking is one of them and should work fine as heatclient would send the files to Heat via the local filesystem instead of relying on Swift. This does increase the load on Heat API a bit but we should be able to adjust our heat-api configs if needed to account for this hit. As an aside Swift as a storage backend for the templates was fine back in the days before we had hundreds of .j2 templates. 
Now that things are being rendered from templates all over the place the Swift solution and all the associated get/updates to and from the each Swift object are a really inefficient way of dealing with large sets of templates that need rendering. Local filesystem operations will beat out what we are doing now every time and should provide some scalability into the future as we continue to adapt how we utilize t-h-t in the future. Dan > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-August/132768.html > > > Thanks, > > -Alex > > > > [0] https://review.openstack.org/#/c/586679/ > > [1] https://review.openstack.org/#/c/588111/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ianyrchoi at gmail.com Tue Aug 7 15:20:24 2018 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Wed, 8 Aug 2018 00:20:24 +0900 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <675a5bfb-c4c0-00f4-4b23-08d2f80fb310@gmail.com> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> <5B5F295F.3090608@openstack.org> <1f5afd62cc3a9a8923586a404e707366@arcor.de> <16e69b47c8b71bf6f920ab8f3df61928@arcor.de> <5B63349F.4010204@openstack.org> <5B63565F.1010109@openstack.org> <5B689C6C.2010006@openstack.org> <4ef95237972fb567d9eaebba82bf9066@arcor.de> <675a5bfb-c4c0-00f4-4b23-08d2f80fb310@gmail.com> Message-ID: Oh my mistake - document not version. 
Rewriting: For the perspective on Stackalytics <-> the list of projects, another possible solution would be to create "openstack-org" project in Zanata, migrate edge-computing & container whitepaper from two different projects into "openstack-org" project with different document names, and adding "openstack-org" project in Stackalytics for consistency. With many thanks, /Ian Ian Y. Choi wrote on 8/8/2018 12:06 AM: > Hello Frank, > > I did not notice the list of project but just from the perspective of > translation metrics in Stackalytics and Zanata, > whitepaper translation contribution is retrieved from Zanata to > Stackalytics according to the implementation through > https://review.openstack.org/#/c/288871/ . > > For the perspective on Stackalytics <-> the list of projects, another > possible solution would be to create "openstack-org" project in Zanata, > migrate edge-computing & container whitepaper into "openstack-org" > project with different version names, ane > adding "openstack-org" project in Stackalytics for consistency. > > > With many thanks, > > /Ian > > Frank Kloeker wrote on 8/7/2018 4:49 PM: >> Many thanks, Jimmy! At last I draw your attention to Stackalytics. >> Translation metrics for whitepapers not counted there. Maybe you have >> an advice for https://review.openstack.org/#/c/588965/ >> >> kind regards >> >> Frank >> >> Am 2018-08-06 21:07, schrieb Jimmy McArthur: >>> A heads up that the Translators are now listed at the bottom of the >>> page as well, along with the rest of the paper contributors: >>> >>> https://www.openstack.org/edge-computing/cloud-edge-computing-beyond-the-data-center?lang=ja_JP >>> >>> >>> Cheers! >>> Jimmy >>> >>> Frank Kloeker wrote: >>>> Hi Jimmy, >>>> >>>> thanks for announcement. Great stuff! It looks really great and >>>> it's easy to navigate. I think a special thanks goes to Sebastian >>>> for designing the pages. One small remark: have you tried >>>> text-align: justify? 
I think it would be a little bit more >>>> readable, like a science paper (German word is: Ordnung) >>>> I put the projects again on the frontpage of the translation >>>> platform, so we'll get more translations shortly. >>>> >>>> kind regards >>>> >>>> Frank >>>> >>>> Am 2018-08-02 21:07, schrieb Jimmy McArthur: >>>>> The Edge and Containers translations are now live.  As new >>>>> translations become available, we will add them to the page. >>>>> >>>>> https://www.openstack.org/containers/ >>>>> https://www.openstack.org/edge-computing/ >>>>> >>>>> Note that the Chinese translation has not been added to Zanata at >>>>> this >>>>> time, so I've left the PDF download up on that page. >>>>> >>>>> Thanks everyone and please let me know if you have questions or >>>>> concerns! >>>>> >>>>> Cheers! >>>>> Jimmy >>>>> >>>>> Jimmy McArthur wrote: >>>>>> Frank, >>>>>> >>>>>> We expect to have these papers up this afternoon. I'll update >>>>>> this thread when we do. >>>>>> >>>>>> Thanks! >>>>>> Jimmy >>>>>> >>>>>> Frank Kloeker wrote: >>>>>>> Hi Sebastian, >>>>>>> >>>>>>> okay, it's translated now. In Edge whitepaper is the problem >>>>>>> with XML-Parsing of the term AT&T. Don't know how to escape >>>>>>> this. Maybe you will see the warning during import too. 
>>>>>>> >>>>>>> kind regards >>>>>>> >>>>>>> Frank >>>>>>> >>>>>>> Am 2018-07-30 20:09, schrieb Sebastian Marcet: >>>>>>>> Hi Frank, >>>>>>>> i was double checking pot file and realized that original pot >>>>>>>> missed >>>>>>>> some parts of the original paper (subsections of the paper) >>>>>>>> apologizes >>>>>>>> on that >>>>>>>> i just re uploaded an updated pot file with missing subsections >>>>>>>> >>>>>>>> regards >>>>>>>> >>>>>>>> On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Hi Jimmy, >>>>>>>>> >>>>>>>>> from the GUI I'll get this link: >>>>>>>>> >>>>>>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>>>>>> >>>>>>>>> [1] >>>>>>>>> >>>>>>>>> paper version  are only in container whitepaper: >>>>>>>>> >>>>>>>>> >>>>>>>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>>>>>> >>>>>>>>> [2] >>>>>>>>> >>>>>>>>> In general there is no group named papers >>>>>>>>> >>>>>>>>> kind regards >>>>>>>>> >>>>>>>>> Frank >>>>>>>>> >>>>>>>>> Am 2018-07-30 17:06, schrieb Jimmy McArthur: >>>>>>>>> Frank, >>>>>>>>> >>>>>>>>> We're getting a 404 when looking for the pot file on the >>>>>>>>> Zanata API: >>>>>>>>> >>>>>>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>>>>>> >>>>>>>>> [3] >>>>>>>>> >>>>>>>>> As a result, we can't pull the po files.  Any idea what might be >>>>>>>>> happening? >>>>>>>>> >>>>>>>>> Seeing the same thing with both papers... >>>>>>>>> >>>>>>>>> Thank you, >>>>>>>>> Jimmy >>>>>>>>> >>>>>>>>> Frank Kloeker wrote: >>>>>>>>> Hi Jimmy, >>>>>>>>> >>>>>>>>> Korean and German version are now done on the new format. Can you >>>>>>>>> check publishing? 
>>>>>>>>> >>>>>>>>> thx >>>>>>>>> >>>>>>>>> Frank >>>>>>>>> >>>>>>>>> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >>>>>>>>> Hi all - >>>>>>>>> >>>>>>>>> Follow up on the Edge paper specifically: >>>>>>>>> >>>>>>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>>>>>> >>>>>>>>> [4] This is now available. As I mentioned on IRC this morning, it >>>>>>>>> should >>>>>>>>> be VERY close to the PDF.  Probably just needs a quick review. >>>>>>>>> >>>>>>>>> Let me know if I can assist with anything. >>>>>>>>> >>>>>>>>> Thank you to i18n team for all of your help!!! >>>>>>>>> >>>>>>>>> Cheers, >>>>>>>>> Jimmy >>>>>>>>> >>>>>>>>> Jimmy McArthur wrote: >>>>>>>>> Ian raises some great points :) I'll try to address below... >>>>>>>>> >>>>>>>>> Ian Y. Choi wrote: >>>>>>>>> Hello, >>>>>>>>> >>>>>>>>> When I saw overall translation source strings on container >>>>>>>>> whitepaper, I would infer that new edge computing whitepaper >>>>>>>>> source strings would include HTML markup tags. >>>>>>>>> One of the things I discussed with Ian and Frank in Vancouver is >>>>>>>>> the expense of recreating PDFs with new translations.  It's >>>>>>>>> prohibitively expensive for the Foundation as it requires design >>>>>>>>> resources which we just don't have.  As a result, we created the >>>>>>>>> Containers whitepaper in HTML, so that it could be easily updated >>>>>>>>> w/o working with outside design contractors.  I indicated that we >>>>>>>>> would also be moving the Edge paper to HTML so that we could >>>>>>>>> prevent >>>>>>>>> that additional design resource cost. >>>>>>>>> On the other hand, the source strings of edge computing >>>>>>>>> whitepaper >>>>>>>>> which I18n team previously translated do not include HTML markup >>>>>>>>> tags, since the source strings are based on just text format. 
>>>>>>>>> The version that Akihiro put together was based on the Edge PDF, >>>>>>>>> which we unfortunately didn't have the resources to implement >>>>>>>>> in the >>>>>>>>> same format. >>>>>>>>> >>>>>>>>> I really appreciate Akihiro's work on RST-based support on >>>>>>>>> publishing translated edge computing whitepapers, since >>>>>>>>> translators do not have to re-translate all the strings. >>>>>>>>> I would like to second this. It took a lot of initiative to >>>>>>>>> work on >>>>>>>>> the RST-based translation.  At the moment, it's just not >>>>>>>>> usable for >>>>>>>>> the reasons mentioned above. >>>>>>>>> On the other hand, it seems that I18n team needs to >>>>>>>>> investigate on >>>>>>>>> translating similar strings of HTML-based edge computing >>>>>>>>> whitepaper >>>>>>>>> source strings, which would discourage translators. >>>>>>>>> Can you expand on this? I'm not entirely clear on why the HTML >>>>>>>>> based translation is more difficult. >>>>>>>>> >>>>>>>>> That's my point of view on translating edge computing whitepaper. >>>>>>>>> >>>>>>>>> For translating container whitepaper, I want to further ask the >>>>>>>>> followings since *I18n-based tools* >>>>>>>>> would mean for translators that translators can test and publish >>>>>>>>> translated whitepapers locally: >>>>>>>>> >>>>>>>>> - How to build translated container whitepaper using original >>>>>>>>> Silverstripe-based repository? >>>>>>>>> https://docs.openstack.org/i18n/latest/tools.html [5] describes >>>>>>>>> well how to build translated artifacts for RST-based OpenStack >>>>>>>>> repositories >>>>>>>>> but I could not find the way how to build translated container >>>>>>>>> whitepaper with translated resources on Zanata. >>>>>>>>> This is a little tricky.  It's possible to set up a local version >>>>>>>>> of the OpenStack website >>>>>>>>> >>>>>>>> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>>>>>> >>>>>>>>> [6]).  
However, we have to manually ingest the po files as >>>>>>>>> they are >>>>>>>>> completed and then push them out to production, so that >>>>>>>>> wouldn't do >>>>>>>>> much to help with your local build.  I'm open to suggestions >>>>>>>>> on how >>>>>>>>> we can make this process easier for the i18n team. >>>>>>>>> >>>>>>>>> Thank you, >>>>>>>>> Jimmy >>>>>>>>> >>>>>>>>> With many thanks, >>>>>>>>> >>>>>>>>> /Ian >>>>>>>>> >>>>>>>>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>>>>>>> Frank, >>>>>>>>> >>>>>>>>> I'm sorry to hear about the displeasure around the Edge >>>>>>>>> paper.  As >>>>>>>>> mentioned in a prior thread, the RST format that Akihiro >>>>>>>>> worked did >>>>>>>>> not work with the  Zanata process that we have been using with >>>>>>>>> our >>>>>>>>> CMS.  Additionally, the existing EDGE page is a PDF, so we had to >>>>>>>>> build a new template to work with the new HTML whitepaper >>>>>>>>> layout we >>>>>>>>> created for the Containers paper. I outlined this in the thread " >>>>>>>>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>>>>>>>> Whitepaper Translation" on 6/25/18 and mentioned we would be >>>>>>>>> ready >>>>>>>>> with the template around 7/13. >>>>>>>>> >>>>>>>>> We completed the work on the new whitepaper template and then put >>>>>>>>> out the pot files on Zanata so we can get the po language files >>>>>>>>> back. If this process is too cumbersome for the translation team, >>>>>>>>> I'm open to discussion, but right now our entire translation >>>>>>>>> process >>>>>>>>> is based on the official OpenStack Docs translation process >>>>>>>>> outlined >>>>>>>>> by the i18n team: >>>>>>>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html [7] >>>>>>>>> >>>>>>>>> Again, I realize Akihiro put in some work on his own proposing >>>>>>>>> the >>>>>>>>> new translation type. If the i18n team is moving to this format >>>>>>>>> instead, we can work on redoing our process. 
>>>>>>>>> >>>>>>>>> Please let me know if I can clarify further. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Jimmy >>>>>>>>> >>>>>>>>> Frank Kloeker wrote: >>>>>>>>> Hi Jimmy, >>>>>>>>> >>>>>>>>> permission was added for you and Sebastian. The Container >>>>>>>>> Whitepaper >>>>>>>>> is on the Zanata frontpage now. But we removed Edge Computing >>>>>>>>> whitepaper last week because there is a kind of displeasure in >>>>>>>>> the >>>>>>>>> team since the results of translation are still not published >>>>>>>>> beside >>>>>>>>> Chinese version. It would be nice if we have a commitment from >>>>>>>>> the >>>>>>>>> Foundation that results are published in a specific timeframe. >>>>>>>>> This >>>>>>>>> includes your requirements until the translation should be >>>>>>>>> available. >>>>>>>>> >>>>>>>>> thx Frank >>>>>>>>> >>>>>>>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>>>>>>> Sorry, I should have also added... we additionally need >>>>>>>>> permissions >>>>>>>>> so >>>>>>>>> that we can add the a new version of the pot file to this >>>>>>>>> project: >>>>>>>>> >>>>>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>>>>> >>>>>>>>> [8] Thanks! >>>>>>>>> Jimmy >>>>>>>>> >>>>>>>>> Jimmy McArthur wrote: >>>>>>>>> Hi all - >>>>>>>>> >>>>>>>>> We have both of the current whitepapers up and available for >>>>>>>>> translation.  Can we promote these on the Zanata homepage? >>>>>>>>> >>>>>>>>> >>>>>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>>>>> >>>>>>>>> [9] >>>>>>>>> >>>>>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>>>>> >>>>>>>>> [10] Thanks all! 
>>>>>>>>> Jimmy >>>>>>>>> >>>>>>>>> >>>>>>>> __________________________________________________________________________ >>>>>>>> >>>>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>>>> Unsubscribe: >>>>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>>>>>> [11] >>>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>>>> [12] >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Links: >>>>>>>> ------ >>>>>>>> [1] >>>>>>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>>>>>> [2] >>>>>>>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>>>>>> [3] >>>>>>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>>>>>> [4] >>>>>>>>
https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>>>>>> [5] https://docs.openstack.org/i18n/latest/tools.html >>>>>>>> [6] >>>>>>>> https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>>>>>> [7] https://docs.openstack.org/i18n/latest/en_GB/tools.html >>>>>>>> [8] >>>>>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>>>>> [9] >>>>>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>>>>> [10] >>>>>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>>>>> [11] >>>>>>>> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>>>>> >>>>>>>> [12] >>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>> >>>>>> >>>>>> >>>>>> __________________________________________________________________________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From zbitter at redhat.com Tue Aug 7 15:28:39 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 7 Aug 2018 11:28:39 -0400 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <30fd7e68-3a58-2ab1-bba0-c4c5e0eb2bf5@debian.org> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <20180806190241.GA3368@devvm1> 
<30fd7e68-3a58-2ab1-bba0-c4c5e0eb2bf5@debian.org> Message-ID: <0f2f9e10-4419-8fc0-39a9-737ba2be00f4@redhat.com> Top posting to avoid getting into the weeds. * OpenStack is indeed lagging behind * The road to 3.7 (and eventually 3.8) runs through 3.6 * As part of the project-wide python3-first goal we aim to have everything working on 3.6 for Stein, so we are making some progress at least * As of now we are *not* dropping support for 3.5 in Stein * No matter what we do, the specific issue you're encountering is structural: we don't add support for a Python version in the gate until it is available in an Ubuntu LTS release, and that doesn't happen until after it is available in Debian, so you will always have the problem that new Python versions will be introduced in Debian before we have a gate for them * Structural problems require structural solutions; "everybody work harder/pay more attention/prioritise differently" will not do it * I don't see any evidence that people are refusing to review patches that fix 3.7 issues, and I certainly don't think fixing them is 'controversial' On 07/08/18 10:11, Thomas Goirand wrote: > On 08/07/2018 03:24 PM, Sean Mooney wrote: >> so im not sure pushing for python 3.7 is the right thing to do. also i would not >> assume all distros will ship 3.7 in the near term. i have not check lately but >> i believe cento 7 unless make 3.4 and 3.6 available in the default repos. >> ubuntu 18.04 ships with 3.6 i believe > > The current plan for Debian is that we'll be trying to push for Python > 3.7 for Buster, which freezes in January. This freeze date means that > it's going to be Rocky that will end up in the next Debian release. If > Python 3.7 is a failure, then late November, we will remove Python 3.7 > from Unstable and let Buster release with 3.6. > > As for Ubuntu, it is currently unclear if 18.10 will be released with > Python 3.7 or not, but I believe they are trying to do that. 
If not, > then 19.04 will for sure be released with Python 3.7. > >> im not sure about other linux distros but since most openstack >> deployment are done >> on LTS releases of operating systems i would suspect that python 3.6 >> will be the main >> python 3 versions we see deployed in production for some time. > > In short: that's wrong. > >> having a 3.7 gate is not a bad idea but priority wise have a 3.6 gate >> would be much higher on my list. > > Wrong list. One version behind. > >> i think we as a community will have to decide on the minimum and >> maximum python 3 versions >> we support for each release and adjust as we go forward. > > Whatever the OpenStack community decides is not going to change what > distributions like Debian will do. This type of reasoning lacks a much > needed humility. > >> i would suggst a min of 3.5 and max of 3.6 for rocky. > > My suggestion is that these bugs are of very high importance and that > they should at least deserve attention. That the gate for Python 3.7 > isn't ready, I can understand, as everyone's time is limited. This > doesn't mean that the OpenStack community at large should just dismiss > patches that are important for downstream. > >> for stien perhaps bump that to min of 3.6 max 3.7 but i think this is >> something that needs to be address community wide >> via a governance resolution rather then per project. > > At this point, dropping 3.5 isn't a good idea either, even for Stein. > >> it will also >> impact the external python lib we can depend on too which is >> another reason i think thie need to be a comuntiy wide discussion and >> goal that is informed by what distros are doing but >> not mandated by what any one distro is doing. >> regards >> sean. > > Postponing any attempt to support anything current is always a bad idea. > I don't see why there's even a controversy when one attempts to fix bugs > that will, sooner or later, also hit the gate. 
> > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Tue Aug 7 15:31:33 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 07 Aug 2018 11:31:33 -0400 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <30fd7e68-3a58-2ab1-bba0-c4c5e0eb2bf5@debian.org> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <20180806190241.GA3368@devvm1> <30fd7e68-3a58-2ab1-bba0-c4c5e0eb2bf5@debian.org> Message-ID: <1533655606-sup-9712@lrrr.local> Excerpts from Thomas Goirand's message of 2018-08-07 16:11:43 +0200: > On 08/07/2018 03:24 PM, Sean Mooney wrote: > > > i think we as a community will have to decide on the minimum and > > maximum python 3 versions > > we support for each release and adjust as we go forward. > > Whatever the OpenStack community decides is not going to change what > distributions like Debian will do. This type of reasoning lacks a much > needed humility. That goes both ways, Thomas. We're in the middle of the RC1 deadline week for Rocky right now. This is not a great time to be pushing for new work unrelated to finishing that release. > > it will also > > impact the external python lib we can depend on too which is > > another reason i think thie need to be a comuntiy wide discussion and > > goal that is informed by what distros are doing but > > not mandated by what any one distro is doing. > > regards > > sean. > > Postponing any attempt to support anything current is always a bad idea. 
> I don't see why there's even a controversy when one attempts to fix bugs > that will, sooner or later, also hit the gate. The community is not prepared to support 3.7 today. That doesn't mean we will never support it, just that it is not the most important thing for us to be doing right now. We'll get there. Doug From dprince at redhat.com Tue Aug 7 15:33:23 2018 From: dprince at redhat.com (Dan Prince) Date: Tue, 7 Aug 2018 11:33:23 -0400 Subject: [openstack-dev] [tripleo] Patches to speed up plan operations In-Reply-To: References: Message-ID: Thanks for taking this on Ian! I'm fully on board with the effort. I like the consolidation and performance improvements. Storing t-h-t templates in Swift worked okay 3-4 years ago. Now that we have more templates, many of which need .j2 rendering, the storage there has become quite a bottleneck. Additionally, since we'd be sending commands to Heat via local filesystem template storage, we could consider using softlinks again within t-h-t, which should help with refactoring and deprecation efforts. Dan On Wed, Aug 1, 2018 at 7:35 PM Ian Main wrote: > > > Hey folks! > > So I've been working on some patches to speed up plan operations in TripleO. This was originally driven by the UI needing to be able to perform a 'plan upload' in something less than several minutes. :) > > https://review.openstack.org/#/c/581153/ > https://review.openstack.org/#/c/581141/ > > I have a functioning set of patches, and it actually cuts over 2 minutes off the overcloud deployment time. > > Without patch: > + openstack overcloud plan create --templates /home/stack/tripleo-heat-templates/ overcloud > Creating Swift container to store the plan > Creating plan from template files in: /home/stack/tripleo-heat-templates/ > Plan created.
> real 3m3.415s > > With patch: > + openstack overcloud plan create --templates /home/stack/tripleo-heat-templates/ overcloud > Creating Swift container to store the plan > Creating plan from template files in: /home/stack/tripleo-heat-templates/ > Plan created. > real 0m44.694s > > This is on VMs. On real hardware it now takes something like 15-20 seconds to do the plan upload which is much more manageable from the UI standpoint. > > Some things about what this patch does: > > - It makes use of process-templates.py (written for the undercloud) to process the jinjafied templates. This reduces replication with the existing version in the code base and is very fast as it's all done on local disk. > - It stores the bulk of the templates as a tarball in swift. Any individual files in swift take precedence over the contents of the tarball so it should be backwards compatible. This is a great speed up as we're not accessing a lot of individual files in swift. > > There's still some work to do; cleaning up and fixing the unit tests, testing upgrades etc. I just wanted to get some feedback on the general idea and hopefully some reviews and/or help - especially with the unit test stuff. > > Thanks everyone! > > Ian > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From liu.xuefeng1 at zte.com.cn Tue Aug 7 15:44:04 2018 From: liu.xuefeng1 at zte.com.cn (liu.xuefeng1 at zte.com.cn) Date: Tue, 7 Aug 2018 23:44:04 +0800 (CST) Subject: [openstack-dev] =?utf-8?q?=5Bpython-senlinclient=5D=5Brelease=5DF?= =?utf-8?q?FE_for_python-senlinclient_1=2E8=2E0?= Message-ID: <201808072344040292650@zte.com.cn> hi, all I'd like to request an FFE to release 1.8.0(stable/rocky) for python-senlinclient. 
The CURRENT_API_VERSION has been changed to "1.10", we need this release. BestRegards, XueFeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Tue Aug 7 16:10:52 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Tue, 7 Aug 2018 12:10:52 -0400 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <0f2f9e10-4419-8fc0-39a9-737ba2be00f4@redhat.com> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <20180806190241.GA3368@devvm1> <30fd7e68-3a58-2ab1-bba0-c4c5e0eb2bf5@debian.org> <0f2f9e10-4419-8fc0-39a9-737ba2be00f4@redhat.com> Message-ID: On Tue, Aug 7, 2018 at 11:28 AM, Zane Bitter wrote: > Top posting to avoid getting into the weeds. > > * OpenStack is indeed lagging behind > * The road to 3.7 (and eventually 3.8) runs through 3.6 > * As part of the project-wide python3-first goal we aim to have everything > working on 3.6 for Stein, so we are making some progress at least > * As of now we are *not* dropping support for 3.5 in Stein > * No matter what we do, the specific issue you're encountering is > structural: we don't add support for a Python version in the gate until it > is available in an Ubuntu LTS release, and that doesn't happen until after > it is available in Debian, so you will always have the problem that new > Python versions will be introduced in Debian before we have a gate for them > Thanks for mentioning this. I was concerned that there wouldn't be any gating until Ubuntu 20.04 (April 2020), but Py3.7 is available in bionic today. It's a slightly older version, but I think that's just because we're in the early py3.7 stages, so we'll try to get that updated.
Thanks, Corey * Structural problems require structural solutions; "everybody work > harder/pay more attention/prioritise differently" will not do it > * I don't see any evidence that people are refusing to review patches that > fix 3.7 issues, and I certainly don't think fixing them is 'controversial' > > > On 07/08/18 10:11, Thomas Goirand wrote: > >> On 08/07/2018 03:24 PM, Sean Mooney wrote: >> >>> so im not sure pushing for python 3.7 is the right thing to do. also i >>> would not >>> assume all distros will ship 3.7 in the near term. i have not check >>> lately but >>> i believe cento 7 unless make 3.4 and 3.6 available in the default repos. >>> ubuntu 18.04 ships with 3.6 i believe >>> >> >> The current plan for Debian is that we'll be trying to push for Python >> 3.7 for Buster, which freezes in January. This freeze date means that >> it's going to be Rocky that will end up in the next Debian release. If >> Python 3.7 is a failure, then late November, we will remove Python 3.7 >> from Unstable and let Buster release with 3.6. >> >> As for Ubuntu, it is currently unclear if 18.10 will be released with >> Python 3.7 or not, but I believe they are trying to do that. If not, >> then 19.04 will for sure be released with Python 3.7. >> >> im not sure about other linux distros but since most openstack >>> deployment are done >>> on LTS releases of operating systems i would suspect that python 3.6 >>> will be the main >>> python 3 versions we see deployed in production for some time. >>> >> >> In short: that's wrong. >> >> having a 3.7 gate is not a bad idea but priority wise have a 3.6 gate >>> would be much higher on my list. >>> >> >> Wrong list. One version behind. >> >> i think we as a community will have to decide on the minimum and >>> maximum python 3 versions >>> we support for each release and adjust as we go forward. >>> >> >> Whatever the OpenStack community decides is not going to change what >> distributions like Debian will do. 
This type of reasoning lacks a much >> needed humility. >> >> i would suggst a min of 3.5 and max of 3.6 for rocky. >>> >> >> My suggestion is that these bugs are of very high importance and that >> they should at least deserve attention. That the gate for Python 3.7 >> isn't ready, I can understand, as everyone's time is limited. This >> doesn't mean that the OpenStack community at large should just dismiss >> patches that are important for downstream. >> >> for stien perhaps bump that to min of 3.6 max 3.7 but i think this is >>> something that needs to be address community wide >>> via a governance resolution rather then per project. >>> >> >> At this point, dropping 3.5 isn't a good idea either, even for Stein. >> >> it will also >>> impact the external python lib we can depend on too which is >>> another reason i think thie need to be a comuntiy wide discussion and >>> goal that is informed by what distros are doing but >>> not mandated by what any one distro is doing. >>> regards >>> sean. >>> >> >> Postponing any attempt to support anything current is always a bad idea. >> I don't see why there's even a controversy when one attempts to fix bugs >> that will, sooner or later, also hit the gate. >> >> Cheers, >> >> Thomas Goirand (zigo) >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From no-reply at openstack.org Tue Aug 7 16:28:59 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Tue, 07 Aug 2018 16:28:59 -0000 Subject: [openstack-dev] senlin 6.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for senlin for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/senlin/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/senlin/log/?h=stable/rocky Release notes for senlin can be found at: http://docs.openstack.org/releasenotes/senlin/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/senlin-tempest-plugin and tag it *rocky-rc-potential* to bring it to the senlin release crew's attention. From petebirley at googlemail.com Tue Aug 7 17:08:07 2018 From: petebirley at googlemail.com (Pete Birley) Date: Tue, 7 Aug 2018 12:08:07 -0500 Subject: [openstack-dev] [openstack-helm] [vote] Core Reviewer nomination for Chris Wedgwood In-Reply-To: References: <792545d64dbe6da1a40a7fa5667aa77eb71e0ec1.camel@gmail.com> Message-ID: An emphatic +1. Chris has really done some great work reviewing and contributing code.
On 7 August 2018 at 09:45, Tin Lam wrote: > +1 > > On Tue, Aug 7, 2018 at 9:31 AM Alan Meadows > wrote: > >> +1 >> >> On Fri, 2018-08-03 at 16:52 +0000, Richard Wellum > > wrote: >> >> > >> > +1 >> > >> > On Fri, Aug 3, 2018 at 11:39 AM Steve Wilkerson > > com> >> > wrote: >> > >> > > +1 >> > > >> > > On Fri, Aug 3, 2018 at 10:05 AM, MCEUEN, MATT >> > > wrote: >> > > >> > > > OpenStack-Helm core reviewer team, >> > > > >> > > > I would like to nominate Chris Wedgwood as core review for the >> > > > OpenStack-Helm. >> > > > >> > > > Chris is one of the most prolific reviewers in the OSH community, >> > > > but >> > > > more importantly is a very thorough and helpful reviewer. Many >> > > > of my most >> > > > insightful reviews are thanks to him, and I know the same is true >> > > > for many >> > > > other team members. In addition, he is an accomplished OSH >> > > > engineer and >> > > > has contributed features that run the gamut, including Ceph >> > > > integration, >> > > > Calico support, Neutron configuration, Gating, and core Helm- >> > > > Toolkit >> > > > functionality. >> > > > >> > > > Please consider this email my +1 vote. >> > > > >> > > > A +1 vote indicates that you are in favor of his core reviewer >> > > > candidacy, >> > > > and a -1 is a veto. Voting will be open for the next seven days >> > > > (closing >> > > > 8/10) or until all OpenStack-Helm core reviewers cast their vote. >> > > > >> > > > Thank you, >> > > > Matt McEuen >> > > > >> > > > _________________________________________________________________ >> > > > _________ >> > > > OpenStack Development Mailing List (not for usage questions) >> > > > Unsubscribe: >> > > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Regards, > > Tin Lam > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean.mcginnis at gmx.com Tue Aug 7 20:25:39 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 7 Aug 2018 15:25:39 -0500 Subject: [openstack-dev] [python-senlinclient][release][requirements] FFE for python-senlinclient 1.8.0 In-Reply-To: <201808072344040292650@zte.com.cn> References: <201808072344040292650@zte.com.cn> Message-ID: <20180807202539.GA11009@sm-workstation> Added requirements tag to the subject since this is a requirements FFE. On Tue, Aug 07, 2018 at 11:44:04PM +0800, liu.xuefeng1 at zte.com.cn wrote: > hi, all > > > I'd like to request an FFE to release 1.8.0(stable/rocky) > for python-senlinclient. > > The CURRENT_API_VERSION has been changed to "1.10", we need this release. > > BestRegards, > XueFeng > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Tue Aug 7 20:29:10 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 7 Aug 2018 15:29:10 -0500 Subject: [openstack-dev] [python-senlinclient][release][requirements] FFE for python-senlinclient 1.8.0 In-Reply-To: <20180807202539.GA11009@sm-workstation> References: <201808072344040292650@zte.com.cn> <20180807202539.GA11009@sm-workstation> Message-ID: <20180807202909.GA11176@sm-workstation> On Tue, Aug 07, 2018 at 03:25:39PM -0500, Sean McGinnis wrote: > Added requirements tag to the subject since this is a requirements FFE. > > On Tue, Aug 07, 2018 at 11:44:04PM +0800, liu.xuefeng1 at zte.com.cn wrote: > > hi, all > > > > > > I'd like to request an FFE to release 1.8.0(stable/rocky) > > for python-senlinclient. > > > > The CURRENT_API_VERSION has been changed to "1.10", we need this release. > > XueFeng, do you just need upper-constraints raised for this, or also the minimum version? 
From that last sentence, I'm assuming you need to ensure only 1.8.0 is used for Rocky deployments. From MM9745 at att.com Tue Aug 7 20:44:43 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Tue, 7 Aug 2018 20:44:43 +0000 Subject: [openstack-dev] [openstack-helm] [vote] Core Reviewer nomination for Chris Wedgwood In-Reply-To: References: <792545d64dbe6da1a40a7fa5667aa77eb71e0ec1.camel@gmail.com> Message-ID: <7C64A75C21BB8D43BD75BB18635E4D896C980A85@MOSTLS1MSGUSRFF.ITServices.sbc.com> With this unanimous vote: welcome Mr. Wedgwood to the OpenStack-Helm core reviewer team – thank you for your work to date, and I’m looking forward to paving the way to OSH greatness with you! From: Pete Birley Sent: Tuesday, August 7, 2018 12:08 PM To: tin at irrational.io Cc: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [openstack-helm] [vote] Core Reviewer nomination for Chris Wedgwood An emphatic +1, Chris has really done some great work reviewing, and contributing code. On 7 August 2018 at 09:45, Tin Lam > wrote: +1 On Tue, Aug 7, 2018 at 9:31 AM Alan Meadows > wrote: +1 On Fri, 2018-08-03 at 16:52 +0000, Richard Wellum > wrote: > > +1 > > On Fri, Aug 3, 2018 at 11:39 AM Steve Wilkerson > com> > wrote: > > > +1 > > > > On Fri, Aug 3, 2018 at 10:05 AM, MCEUEN, MATT > > > wrote: > > > > > OpenStack-Helm core reviewer team, > > > > > > I would like to nominate Chris Wedgwood as core review for the > > > OpenStack-Helm. > > > > > > Chris is one of the most prolific reviewers in the OSH community, > > > but > > > more importantly is a very thorough and helpful reviewer. Many > > > of my most > > > insightful reviews are thanks to him, and I know the same is true > > > for many > > > other team members. In addition, he is an accomplished OSH > > > engineer and > > > has contributed features that run the gamut, including Ceph > > > integration, > > > Calico support, Neutron configuration, Gating, and core Helm- > > > Toolkit > > > functionality. 
> > > > > > Please consider this email my +1 vote. > > > > > > A +1 vote indicates that you are in favor of his core reviewer > > > candidacy, > > > and a -1 is a veto. Voting will be open for the next seven days > > > (closing > > > 8/10) or until all OpenStack-Helm core reviewers cast their vote. > > > > > > Thank you, > > > Matt McEuen > > > > > > _________________________________________________________________ > > > _________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Regards, Tin Lam -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Aug 7 21:18:12 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 7 Aug 2018 16:18:12 -0500 Subject: [openstack-dev] [rally] ACTION REQUIRED for projects using readthedocs In-Reply-To: <625a06b2-0a64-6018-8a3b-d2d8df419190@redhat.com> References: <625a06b2-0a64-6018-8a3b-d2d8df419190@redhat.com> Message-ID: <20180807211811.GA17911@sm-workstation> The recent release of rally failed the docs publishing to readthedocs. This appears to be related to the actions required below. 
The failure from the job can be found here: http://logs.openstack.org/0c/0cd69c70492f800e0835da4de006fc292e43a5f1/release/trigger-readthedocs-webhook/e9de48a/job-output.txt.gz#_2018-08-07_21_10_16_898906 On Fri, Aug 03, 2018 at 02:20:40PM +1000, Ian Wienand wrote: > Hello, > > tl;dr : any projects using the "docs-on-readthedocs" job template > to trigger a build of their documentation in readthedocs needs to: > > 1) add the "openstackci" user as a maintainer of the RTD project > 2) generate a webhook integration URL for the project via RTD > 3) provide the unique webhook ID value in the "rtd_webhook_id" project > variable > > See > > https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-docs-on-readthedocs > > -- > > readthedocs has recently updated their API for triggering a > documentation build. In the old API, anyone could POST to a known URL > for the project and it would trigger a build. This end-point has > stopped responding and we now need to use an authenticated webhook to > trigger documentation builds. > > Since this is only done in the post and release pipelines, projects > probably haven't had great feedback that current methods are failing > and this may be a surprise. To check your publishing, you can go to > the zuul builds page [1] and filter by your project and the "post" > pipeline to find recent runs. > > There is now some setup required which can only be undertaken by a > current maintainer of the RTD project. > > In short; add the "openstackci" user as a maintainer, add a "generic > webhook" integration to the project, find the last bit of the URL from > that and put it in the project variable "rtd_webhook_id". > > Luckily OpenStack infra keeps a team of highly skilled digital artists > on retainer and they have produced a handy visual guide available at > > https://imgur.com/a/Pp4LH31 > > Once the RTD project is setup, you must provide the webhook ID value > in your project variables. 
This will look something like: > > - project: > templates: > - docs-on-readthedocs > - publish-to-pypi > vars: > rtd_webhook_id: '12345' > check: > jobs: > ... > > For actual examples, see pbrx [2] which keeps its config in tree, or > gerrit-dash-creator which has its configuration in project-config [3]. > > Happy to help if anyone is having issues, via mail or #openstack-infra > > Thanks! > > -i > > p.s. You don't *have* to use the jobs from the docs-on-readthedocs > templates and hence add infra as a maintainer; you can set up your own > credentials with zuul secrets in tree and write your playbooks and > jobs to use the generic role [4]. We're always happy to discuss any > concerns. > > [1] https://zuul.openstack.org/builds.html > [2] https://git.openstack.org/cgit/openstack/pbrx/tree/.zuul.yaml#n17 > [3] https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml > [4] https://zuul-ci.org/docs/zuul-jobs/roles.html#role-trigger-readthedocs > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amotoki at gmail.com Tue Aug 7 22:03:52 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 8 Aug 2018 07:03:52 +0900 Subject: [openstack-dev] [cinder][api] strict schema validation and microversioning Message-ID: Hi Cinder and API-SIG folks, While reviewing a horizon bug [0], I noticed that the behavior of Cinder API 3.0 had changed. Cinder introduced stricter schema validation for creating/updating volume encryption types during Rocky, and a new microversion, 3.53, was introduced [1]. Previously, older API versions like 3.0 accepted unused fields in POST requests, but after [1] landed, unused fields are now rejected even when Cinder API 3.0 is used.
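To make the compatibility concern concrete, here is a minimal, hypothetical sketch of how strict validation can be gated on the negotiated microversion so that pre-3.53 clients keep the old behavior. The field names follow Cinder's encryption-type API, but the helper and constants are illustrative, not Cinder's actual code:

```python
# Hypothetical sketch (not Cinder's real implementation): gate strict
# schema validation on the requested microversion.

STRICT_SINCE = (3, 53)  # microversion that introduced strict validation

ALLOWED_FIELDS = {"provider", "cipher", "key_size", "control_location"}

def validate_encryption_type(body, microversion):
    """Reject unknown fields only for requests at or above 3.53."""
    unknown = set(body) - ALLOWED_FIELDS
    if unknown and microversion >= STRICT_SINCE:
        raise ValueError("unknown fields: %s" % sorted(unknown))
    # Older microversions keep the historical behavior: unknown
    # fields are silently dropped rather than rejected.
    return {k: v for k, v in body.items() if k in ALLOWED_FIELDS}

# A 3.0 client sending an extra field keeps working as before:
legacy = validate_encryption_type(
    {"provider": "luks", "extra": "ignored"}, (3, 0))
# The same request at microversion 3.53 raises ValueError.
```

Under a scheme like this, a 3.0 request carrying an unknown field is pruned as before, while the same request at 3.53 or later gets a validation error.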
In my understanding on the microversioning, the existing behavior for older versions should be kept. Is it correct? Thanks, Akihiro [0] https://bugs.launchpad.net/horizon/+bug/1783467 [1] https://review.openstack.org/#/c/573093/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Tue Aug 7 22:27:06 2018 From: mordred at inaugust.com (Monty Taylor) Date: Tue, 7 Aug 2018 17:27:06 -0500 Subject: [openstack-dev] [cinder][api] strict schema validation and microversioning In-Reply-To: References: Message-ID: On 08/07/2018 05:03 PM, Akihiro Motoki wrote: > Hi Cinder and API-SIG folks, > > During reviewing a horizon bug [0], I noticed the behavior of Cinder API > 3.0 was changed. > Cinder introduced more strict schema validation for creating/updating > volume encryption type > during Rocky and a new micro version 3.53 was introduced[1]. > > Previously, Cinder API like 3.0 accepts unused fields in POST requests > but after [1] landed unused fields are now rejected even when Cinder API > 3.0 is used. > In my understanding on the microversioning, the existing behavior for > older versions should be kept. > Is it correct? I agree with your assessment that 3.0 was used there - and also that I would expect the api validation to only change if 3.53 microversion was used. From doug at doughellmann.com Tue Aug 7 22:57:58 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 07 Aug 2018 18:57:58 -0400 Subject: [openstack-dev] [goal][python3] more updates to the goal tools Message-ID: <1533682621-sup-2284@lrrr.local> Champions, I have made quite a few changes to the tools for generating the zuul migration patches today. If you have any patches you generated locally for testing, please check out the latest version of the tool (when all of the changes merge) and regenerate them. 
Doug From mordred at inaugust.com Tue Aug 7 23:19:35 2018 From: mordred at inaugust.com (Monty Taylor) Date: Tue, 7 Aug 2018 18:19:35 -0500 Subject: [openstack-dev] [requirements][sdk][release] FFE request for os-service-types Message-ID: <78bc923c-7dbd-0236-e94e-0f6b1414b9f4@inaugust.com> Heya! I'd like to request a FFE for os-service-types to release 1.3.0. The main change is the inclusion of the qinling data from service-types-authority, as well as the addition of an alias for magnum. There are also two minor changes to the python portion - a parameter was added to get_service_type allowing for a more permissive approach to unknown service - and the library now handles life correctly if a service type is requested with the incorrect number of _'s and -'s. Nothing should need a lower bounds bump - only the normal U-C bump. Thanks! Monty From prometheanfire at gentoo.org Tue Aug 7 23:25:55 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 7 Aug 2018 18:25:55 -0500 Subject: [openstack-dev] [requirements][sdk][release] FFE request for os-service-types In-Reply-To: <78bc923c-7dbd-0236-e94e-0f6b1414b9f4@inaugust.com> References: <78bc923c-7dbd-0236-e94e-0f6b1414b9f4@inaugust.com> Message-ID: <20180807232555.nrmgfak2rmqnwvjv@gentoo.org> On 18-08-07 18:19:35, Monty Taylor wrote: > Heya! > > I'd like to request a FFE for os-service-types to release 1.3.0. > > The main change is the inclusion of the qinling data from > service-types-authority, as well as the addition of an alias for magnum. > > There are also two minor changes to the python portion - a parameter was > added to get_service_type allowing for a more permissive approach to unknown > service - and the library now handles life correctly if a service type is > requested with the incorrect number of _'s and -'s. > > Nothing should need a lower bounds bump - only the normal U-C bump. 
> ack'd -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From victoria at vmartinezdelacruz.com Tue Aug 7 23:47:28 2018 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Tue, 7 Aug 2018 20:47:28 -0300 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships Message-ID: Hi all, I'm reaching out to let you know that I'll be stepping down as coordinator for OpenStack next round. I have been contributing to this effort for several rounds now and I believe it is a good moment for somebody else to take the lead. You all know how important Outreachy is to me, and I'm grateful for all the amazing things I've done as part of the Outreachy program and all the great people I've met along the way. I plan to stay involved with the internships but leave the coordination tasks to somebody else. If you are interested in becoming an Outreachy coordinator, let me know and I can share my experience and provide some guidance. Thanks, Victoria -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Wed Aug 8 00:20:15 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 8 Aug 2018 10:20:15 +1000 Subject: [openstack-dev] [all][elections] Project Team Lead Election Conclusion and Results Message-ID: <20180808002015.GJ9540@thor.bakeyournoodle.com> Thank you to the electorate, to all those who voted and to all candidates who put their name forward for Project Team Lead (PTL) in this election. A healthy, open process breeds trust in our decision-making capability; thank you to all those who make this process possible.
Now for the results of the PTL election process, please join me in extending congratulations to the following PTLs: * Adjutant : Adrian Turjak * Barbican : Ade Lee * Blazar : Pierre Riteau * Chef OpenStack : Samuel Cassiba * Cinder : Jay Bryant * Cloudkitty : Luka Peschke * Congress : Eric Kao * Cyborg : Li Liu * Designate : Graham Hayes * Documentation : Petr Kovar * Dragonflow : [1] * Ec2 Api : Andrey Pavlov * Freezer : [1] * Glance : Erno Kuvaja * Heat : Rico Lin * Horizon : Ivan Kolodyazhny * I18n : Frank Kloeker * Infrastructure : Clark Boylan * Ironic : Julia Kreger * Karbor : Pengju Jiao * Keystone : Lance Bragstad * Kolla : Eduardo Gonzalez Gutierrez * Kuryr : Daniel Mellado * Loci : [1] * Magnum : Spyros Trigazis * Manila : Thomas Barron * Masakari : Sampath Priyankara * Mistral : Dougal Matthews * Monasca : Witek Bedyk * Murano : Rong Zhu * Neutron : Miguel Lavalle * Nova : Melanie Witt * Octavia : Michael Johnson * OpenStackAnsible : Mohammed Naser * OpenStackClient : Dean Troyer * OpenStackSDK : Monty Taylor * OpenStack Charms : Frode Nordahl * OpenStack Helm : Pete Birley * Oslo : Ben Nemec * Packaging Rpm : [1] * PowerVMStackers : Matthew Edmonds * Puppet OpenStack : Tobias Urdin * Qinling : Lingxian Kong * Quality Assurance : Ghanshyam Mann * Rally : Andrey Kurilin * RefStack : [1] * Release Management : Sean McGinnis * Requirements : Matthew Thode * Sahara : Telles Nobrega * Searchlight : [1] * Security : [1] * Senlin : Duc Truong * Solum : Rong Zhu * Storlets : Kota Tsuyuzaki * Swift : John Dickinson * Tacker : dharmendra kushwaha * Telemetry : Julien Danjou * Tricircle : baisen song * Tripleo : Juan Osorio Robles * Trove : [1] * Vitrage : Ifat Afek * Watcher : Alexander Chadin * Winstackers : [1] * Zaqar : wang hao * Zun : Wei Ji Elections: * Senlin: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_5655e3b3821ece95 * Tacker: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_fe41cc8acc6ead91 Election process details and results are also 
available here: https://governance.openstack.org/election/ Thank you to all involved in the PTL election process, Yours Tony. [1] The TC is currently evaluating options for these projects. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From gmann at ghanshyammann.com Wed Aug 8 00:55:56 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 08 Aug 2018 09:55:56 +0900 Subject: [openstack-dev] [cinder][api] strict schema validation and microversioning In-Reply-To: References: Message-ID: <16517087c88.bea8030a80762.5381697592028592635@ghanshyammann.com> ---- On Wed, 08 Aug 2018 07:27:06 +0900 Monty Taylor wrote ---- > On 08/07/2018 05:03 PM, Akihiro Motoki wrote: > > Hi Cinder and API-SIG folks, > > > > During reviewing a horizon bug [0], I noticed the behavior of Cinder API > > 3.0 was changed. > > Cinder introduced more strict schema validation for creating/updating > > volume encryption type > > during Rocky and a new micro version 3.53 was introduced[1]. > > > > Previously, Cinder API like 3.0 accepts unused fields in POST requests > > but after [1] landed unused fields are now rejected even when Cinder API > > 3.0 is used. > > In my understanding on the microversioning, the existing behavior for > > older versions should be kept. > > Is it correct? > > I agree with your assessment that 3.0 was used there - and also that I > would expect the api validation to only change if 3.53 microversion was > used. +1. As you know, neutron also implemented strict validation in Rocky but with discovery via config option and extensions mechanism. Same way Cinder should make it with backward compatible way till 3.53 version. 
-gmann > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mnaser at vexxhost.com Wed Aug 8 02:00:11 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 7 Aug 2018 22:00:11 -0400 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Hi Victoria, Thank you so much for all your wonderful work especially around Outreachy! :) Sincerely, Mohammed On Tue, Aug 7, 2018 at 7:47 PM, Victoria Martínez de la Cruz wrote: > Hi all, > > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I had been contributing to this effort > for several rounds now and I believe is a good moment for somebody else to > take the lead. You all know how important is Outreachy to me and I'm > grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met in the way. I plan to keep > involved with the internships but leave the coordination tasks to somebody > else. > > If you are interested in becoming an Outreachy coordinator, let me know and > I can share my experience and provide some guidance. 
> > Thanks, > > Victoria > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zhipengh512 at gmail.com Wed Aug 8 02:08:05 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 8 Aug 2018 10:08:05 +0800 Subject: [openstack-dev] [publiccloud-wg] Asia-EU friendly meeting today Message-ID: Hi team, A kind reminder for the UTC 7:00 meeting today; please do remember to register your IRC nick, due to the new channel policy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wangpeihuixyz at 126.com Wed Aug 8 02:59:21 2018 From: wangpeihuixyz at 126.com (Frank Wang) Date: Wed, 8 Aug 2018 10:59:21 +0800 (CST) Subject: [openstack-dev] [neutron] Does neutron support QinQ(vlan transparent) ? In-Reply-To: References: <6e1ff2b5.671a.16513747f10.Coremail.wangpeihuixyz@126.com> Message-ID: <18634eb2.2b72.16517797ae6.Coremail.wangpeihuixyz@126.com> Thanks for your detailed explanation, Sean. Actually, I'm more concerned with how the ovs l2 agent uses vlans for tenant isolation on the br-int, and I would like to discuss it more deeply here. Please correct me if I have misunderstood something: is there any way to make the ovs l2 agent support QinQ? For example, I believe QinQ is also a kind of tunnel encapsulation, like vxlan or gre, and I think we could implement it using the Hierarchical Port Binding technique. It would need two levels of bindings (and, of course, two mechanism drivers): the top level binding the service vlan, and the lower level binding the customer vlan. The br-int would be responsible for the customer vlan, and the br-tun for the service vlan. Is it feasible? Please feel free to share any ideas. Thanks At 2018-08-07 19:32:44, "Sean Mooney" wrote: >TL;DR >it won't work with the ovs agent but "should" work with linux bridge.
>see full message below for details. >regards >sean. > >the linux bridge agent supports the vlan_transparent option only when >creating networks with an l3 segmentation type e.g. vxlan, gre... > >ovs using the neutron l2 agent does not support vlan_transparent >networks because of how that agent uses vlans for tenant isolation on >the br-int. > >it is possible to achieve vlan transparency with ovs using an sdn >controller such as odl or ovn but that was not what you asked in your >question so i won't expand on that further. > >if you deploy openstack with linux bridge networking and then create a >tenant network of type vxlan with vlan_transparency set to true and >your tenants >generate QinQ traffic with an mtu reduced so that it will fit within >the vxlan tunnel unfragmented then yes it should be possible, however >you may need to disable port_security/security groups on the port as >i'm not sure if the iptables firewall driver will correctly handle >this case. > >an alternative to disabling security groups would be to add an explicit >rule that matched on the ethernet type and allowed QinQ traffic on >ingress and egress from the vm. > >as far as i am aware this is not tested in the gate so while it should >work the lack of documentation and test coverage means you will >likely be one of the first to test it if you >choose to do so and it may fail for many reasons. > > >On 7 August 2018 at 09:15, Frank Wang wrote: >> Hello folks, >> >> I noted that the API already has the vlan_transparent attribute in the >> network. Do neutron-agents (linux-bridge, openvswitch) support QinQ? I >> didn't find any reference materials that could guide me on how to use or >> configure it. >> >> Thanks for your time reading this. Any comments would be appreciated.
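To put numbers on the MTU caveat in the reply above: each 802.1Q/802.1ad VLAN tag adds 4 bytes to the Ethernet header, so QinQ traffic carries 8 bytes of extra header before any vxlan overhead is even counted. A small sketch (the ethertype values are the standard IEEE ones; the helper function is illustrative, not part of neutron or ovs):

```python
import struct

ETH_P_8021Q = 0x8100   # inner C-tag (customer VLAN)
ETH_P_8021AD = 0x88A8  # outer S-tag (service VLAN, QinQ)

def vlan_tag(tpid, vid, pcp=0):
    """Build one 4-byte VLAN tag: 2-byte TPID + 2-byte TCI."""
    tci = (pcp << 13) | (vid & 0x0FFF)
    return struct.pack("!HH", tpid, tci)

# A QinQ frame carries both tags: the outer service tag first,
# then the inner customer tag.
outer = vlan_tag(ETH_P_8021AD, vid=100)
inner = vlan_tag(ETH_P_8021Q, vid=200)

# Each tag costs 4 bytes, which is why a tenant generating QinQ
# traffic inside a vxlan network must reduce its MTU so the doubly
# tagged frame still fits in the tunnel unfragmented.
overhead = len(outer) + len(inner)  # 8 bytes of tag overhead
```

This is also why security-group rules matching on ethertype would need to allow 0x88A8 explicitly: the outer tag changes the ethertype the firewall sees.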
>> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From samuel at cassi.ba Wed Aug 8 03:46:09 2018 From: samuel at cassi.ba (Samuel Cassiba) Date: Tue, 7 Aug 2018 20:46:09 -0700 Subject: [openstack-dev] [chef] State of the Kitchen: 6th Edition Message-ID: HTML: https://samuel.cassi.ba/state-of-the-kitchen-6th-edition This is the sixth installment of what is going on with Chef OpenStack. The goal is to give a quick overview to see our progress and what is on the menu. Feedback is always welcome on the content and of what you would like to see more. ### Notable Changes * In the past month we released Chef OpenStack 17, which aligns with the Queens codename of OpenStack. Stabilization efforts centered largely around Chef major version updates and further leveraging Kitchen for integration testing. At the time of this writing, they are mirrored to GitHub and [Supermarket](https://supermarket.chef.io/users/openstack){:target="_blank"}. * openstack-attic/openstack-chef has been brought back from the aether to [openstack/openstack-chef](https://git.openstack.org/cgit/openstack/openstack-chef){:target="_blank"}. This is now the starting point for Chef OpenStack integration examples and documentation. Many thanks to infra for the smooth de-mothballing. A special thanks to fungi for putting on his decoder ring on a weekend! 
* The openstack-dns (Designate) and overcloud primitives (client) cookbooks have been rehomed to the openstack/ namespace, donated by jklare, calbers and frickler. (thanks!) * Support for aodh has been added to the telemetry cookbook. Thanks to Seb-Solon for the patches! ### Integration * Containerization is progressing, but decisions of old are starting to need to be revisited. Networking is where the main area of focus needs to happen. * In past releases, Chef OpenStack pared down the integration testing to facilitate in landing changes without clogging Zuul. With Zuul v3, that allows some of the older methods to be replaced with lighter weight playbooks. No doubt, as tests become reimplemented, the impact to the build queue times will have to be a consideration again. ### Stabilization * With Rocky stable packages nearing GA, this means that the cookbooks will start focusing on stabilization in earnest. More to come. * The mariadb 2.0 rewrite has not been released upstream in Sous Chefs. We are collaborating to test it in the Chef OpenStack framework and make a decision on when to release to Supermarket. The major change here is making it a pure set of resources, replacing the now-defunct database cookbook. ### On The Menu *Slow Cooker Pulled Pork* * 1 pork butt (shoulder cut) -- size matters not here, the same liquid measurements go for an average size as well as a large size * Cookin' Sause (see below) * 1 cup (240mL) cider vinegar * 1 cup (240mL) beef stock (water works, too, but we like the flavor) * 1-2 tsp (5-10mL) liquid smoke #### Cookin' Sause * 1 cup (340g) yellow mustard * 1/4 cup (57g) salt * 1/4 cup (57g) ground black pepper * 1/4 cup (57g) granulated garlic * 1/4 cup (57g) granulated onion * 1/4 cup (57g) ground cayenne > Combine the spices and the mustard with a whisk. You can use the fancy stuff here, but it's kind of a waste. Ol' Yella works just fine. Your food, your call. #### Dippin' Sause -- not cookin' sause! 
* 1 can tomato paste * Cider vinegar * Red pepper flakes > There are no measurements on this because it's subjective. Trust your senses and err on the side of needing to add more. *to business!* 1. Rub pork butt with cookin' sause. Make that swine sublime. 2. Place that yellow mass of meat in your slow cooker 3. Add cider vinegar, stock, liquid smoke 4. Cook for 7.5-8 hours on low, until fork tender 5. Shred with forks until it doesn't look like mustard 6. Serve with dippin' sause, or use it as drownin' sause 7. Enjoy Your humble line cook, Samuel Cassiba (scas) From openstack at medberry.net Wed Aug 8 04:18:26 2018 From: openstack at medberry.net (David Medberry) Date: Tue, 7 Aug 2018 23:18:26 -0500 Subject: [openstack-dev] PTG Denver Horns Message-ID: Requests have finally been made (today, August 7, 2018) to end the horns on the train from Denver to Denver International airport (within the city limits of Denver.) Prior approval had been given to remove the FLAGGERS that were stationed at each crossing intersection. Of particular note (at the bottom of the article): There’s no estimate for how long it could take the FRA to approve quiet zones. ref: https://www.9news.com/article/news/local/next/denver-officially-asks-fra-for-permission-to-quiet-a-line-horns/73-581499094 I'd recommend bringing your sleeping aids, ear plugs, etc, just in case not approved by next month's PTG. (The Renaissance is within Denver proper as near as I can tell so that nearby intersection should be covered by this ruling/decision if and when it comes down.) See y'all soon. -dave Praemonitus, praemunitus Forewarned is forearmed. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tony at bakeyournoodle.com Wed Aug 8 04:39:30 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 8 Aug 2018 14:39:30 +1000 Subject: [openstack-dev] [all][tc][election] Timing of the Upcoming Stein TC election Message-ID: <20180808043930.GK9540@thor.bakeyournoodle.com> Hello all, With the PTL elections behind us it's time to start looking at the TC election. Our charter[1] says: The election is held no later than 6 weeks prior to each OpenStack Summit (on or before ‘S-6’ week), with elections held open for no less than four business days. Assuming we have the same structure that gives us a timeline of: Summit is at: 2018-11-13 Latest possible completion is at: 2018-10-02 Moving back to Tuesday: 2018-10-02 TC Election from 2018-09-25T23:45 to 2018-10-02T23:45 TC Campaigning from 2018-09-18T23:45 to 2018-09-25T23:45 TC Nominations from 2018-09-11T23:45 to 2018-09-18T23:45 This puts the bulk of the nomination period during the PTG, which is sub-optimal as the nominations cause a distraction from the PTG but more so because the campaigning will coincide with travel home, and some community members take vacation along with the PTG. So I'd like to bring up the idea of moving the election forward a little so that it's actually the campaigning period that overlaps with the PTG: TC Election from 2018-09-18T23:45 to 2018-09-27T23:45 TC Campaigning from 2018-09-06T23:45 to 2018-09-18T23:45 TC Nominations from 2018-08-30T23:45 to 2018-09-06T23:45 This gives us longer campaigning and election periods. There are some advantages to doing this: * A panel style Q&A could be held formally or informally ;P * There's improved scope for for incoming, outgoing and staying put TC members to interact in a high bandwidth way. * In personi/private discussions with TC candidates/members. However it isn't without downsides: * Election fatigue, We've just had the PTL elections and the UC elections are currently running. 
Less break before the TC elections may not be a good thing. * TC candidates that can't travel to the PTG could be disadvantaged * The campaigning would all happen at the PTG and not on the mailing list disadvantaging community members not at the PTG. So thoughts? Yours Tony. [1] https://governance.openstack.org/tc/reference/charter.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From superuser151093 at gmail.com Wed Aug 8 04:54:30 2018 From: superuser151093 at gmail.com (super user) Date: Wed, 8 Aug 2018 13:54:30 +0900 Subject: [openstack-dev] [goal][python3] more updates to the goal tools In-Reply-To: <1533682621-sup-2284@lrrr.local> References: <1533682621-sup-2284@lrrr.local> Message-ID: Got it. Nguyen Hai On Wed, Aug 8, 2018 at 7:58 AM Doug Hellmann wrote: > Champions, > > I have made quite a few changes to the tools for generating the zuul > migration patches today. If you have any patches you generated locally > for testing, please check out the latest version of the tool (when all > of the changes merge) and regenerate them. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From prometheanfire at gentoo.org Wed Aug 8 05:01:17 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 8 Aug 2018 00:01:17 -0500 Subject: [openstack-dev] PTG Denver Horns In-Reply-To: References: Message-ID: <20180808050117.6rmi4k4ubqg4ntem@gentoo.org> On 18-08-07 23:18:26, David Medberry wrote: > Requests have finally been made (today, August 7, 2018) to end the horns on > the train from Denver to Denver International airport (within the city > limits of Denver.) Prior approval had been given to remove the FLAGGERS > that were stationed at each crossing intersection. > > Of particular note (at the bottom of the article): > > There’s no estimate for how long it could take the FRA to approve quiet > zones. > > ref: > https://www.9news.com/article/news/local/next/denver-officially-asks-fra-for-permission-to-quiet-a-line-horns/73-581499094 > > I'd recommend bringing your sleeping aids, ear plugs, etc, just in case it is not > approved by next month's PTG. (The Renaissance is within Denver proper as > near as I can tell so that nearby intersection should be covered by this > ruling/decision if and when it comes down.) > Thanks for the update. If you are up to it, keeping us informed on this would be nice, if only for the hilarity. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From bence.romsics at gmail.com Wed Aug 8 06:59:21 2018 From: bence.romsics at gmail.com (Bence Romsics) Date: Wed, 8 Aug 2018 08:59:21 +0200 Subject: [openstack-dev] [neutron] Does neutron support QinQ(vlan transparent) ?
In-Reply-To: References: <6e1ff2b5.671a.16513747f10.Coremail.wangpeihuixyz@126.com> Message-ID: Hi, Just about a week ago Li Zhouzhou pushed a change for review to support vlan transparency with ovs too (building on the relatively new QinQ support in ovs): https://review.openstack.org/576687 I have not had time to look into the patch in depth yet, but I guess reviews are always welcome. I also cc-ed this mail so he/she can chime in. Cheers, Bence Romsics On Tue, Aug 7, 2018 at 1:32 PM Sean Mooney wrote: > > TL;DR > it won't work with the ovs agent but "should" work with linux bridge. > see full message below for details. > regards > sean. > > the linux bridge agent supports the vlan_transparent option only when > creating networks with an l3 segmentation type, e.g. vxlan, gre... > > ovs using the neutron l2 agent does not support vlan_transparent > networks because of how that agent uses vlans for tenant isolation on > the br-int. > > it is possible to achieve vlan transparency with ovs using an sdn > controller such as odl or ovn, but that was not what you asked in your > question so I won't expand on that further. > > if you deploy openstack with linux bridge networking and then create a > tenant network of type vxlan with vlan_transparency set to true, and > your tenants > generate QinQ traffic with an mtu reduced so that it will fit within > the vxlan tunnel unfragmented, then yes, it should be possible; however, > you may need to disable port_security/security groups on the port, as > I'm not sure if the iptables firewall driver will correctly handle > this case. > > an alternative to disabling security groups would be to add an explicit > rule that matched on the ethernet type and allowed QinQ traffic on > ingress and egress from the vm. > > as far as I am aware this is not tested in the gate, so while it should > work, the lack of documentation and test coverage means you will > likely be one of the first to test it if you > choose to do so, and it may fail for many reasons.
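The MTU reduction Sean mentions can be estimated with simple arithmetic. A minimal sketch, assuming the commonly cited 50-byte VXLAN-over-IPv4 encapsulation overhead and 4 bytes per additional 802.1Q tag (these figures are assumptions for illustration, not values taken from this thread):

```python
# Rough MTU arithmetic for QinQ traffic carried inside a VXLAN tunnel.
# Overhead figures are assumptions (typical IPv4 values), not from the thread.

VXLAN_OVERHEAD = 50  # outer Ethernet (14) + IPv4 (20) + UDP (8) + VXLAN (8)
VLAN_TAG = 4         # each additional 802.1Q tag costs 4 bytes on the wire

def max_inner_mtu(physical_mtu, extra_vlan_tags=1):
    """Largest guest MTU that avoids fragmenting the VXLAN tunnel traffic."""
    return physical_mtu - VXLAN_OVERHEAD - extra_vlan_tags * VLAN_TAG

# Single-tagged guest traffic on a 1500-byte physical network:
print(max_inner_mtu(1500))     # 1446
# Double-tagged traffic on a jumbo-frame (9000-byte) fabric:
print(max_inner_mtu(9000, 2))  # 8942
```

The same arithmetic applies to other tunnel types (GRE, Geneve) with their respective overheads substituted.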
> > > On 7 August 2018 at 09:15, Frank Wang wrote: > > Hello folks, > > > > I noted that the API already has the vlan_transparent attribute in the > > network. Do neutron-agents (linux-bridge, openvswitch) support QinQ? I > > didn't find any reference materials that could guide me on how to use or > > configure it. > > > > Thanks for your time reading this; any comments would be appreciated. > > > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zigo at debian.org Wed Aug 8 07:40:26 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 8 Aug 2018 09:40:26 +0200 Subject: [openstack-dev] Paste unmaintained In-Reply-To: <1533655006-sup-850@lrrr.local> References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> <1533219691-sup-5515@lrrr.local> <1533655006-sup-850@lrrr.local> Message-ID: On 08/07/2018 05:17 PM, Doug Hellmann wrote: > Excerpts from Thomas Goirand's message of 2018-08-07 16:57:59 +0200: >> On 08/02/2018 04:27 PM, Doug Hellmann wrote: >>> >>> The last I heard, a few years ago Ian moved away from Python to >>> JavaScript as part of his work at Mozilla. The support around >>> paste.deploy has been sporadic since then, and was one of the reasons >>> we discussed a goal of dropping paste.ini as a configuration file. >> >> Doug, >> >> It's nice to have the direct dependency, but this doesn't cover >> everything.
>> If using uwsgi, if you want any kind of logging from the >> wsgi application, you need to use pastescript, which itself depends on >> paste at runtime. So, anything which potentially has an API also depends >> indirectly on Paste. > > I'm not sure why that would be the case. Surely *any* middleware could > set up logging? > > Doug Doug, If you don't configure uwsgi to do any special logging, then the only thing you'll see in the log file is client requests, without any kind of logging from the wsgi application. To have proper logging, one needs to add, in the uwsgi config file: paste-logger = true If you do that, then you need the python3-pastescript installed, which itself depends on the python3-paste package. Really, I don't see how an operator could run without the paste-logger option activated. Without it, you see nothing in the logs. I hope this helps, Cheers, Thomas Goirand (zigo) From zigo at debian.org Wed Aug 8 07:43:25 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 8 Aug 2018 09:43:25 +0200 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <20180806190241.GA3368@devvm1> <30fd7e68-3a58-2ab1-bba0-c4c5e0eb2bf5@debian.org> <0f2f9e10-4419-8fc0-39a9-737ba2be00f4@redhat.com> Message-ID: On 08/07/2018 06:10 PM, Corey Bryant wrote: > I was concerned that there wouldn't be any > gating until Ubuntu 20.04 (April 2020) Same over here. I'm concerned that it could take another two years, which we really cannot afford. > but Py3.7 is available in bionic today. Is Bionic going to be released with Py3.7? In Debconf18 in Taiwan, Doko didn't seem completely sure about it.
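For projects that want an early signal on Python 3.7 before it becomes a distro default, one low-cost option is an opt-in tox environment. This is only a sketch; the environment name and test-runner invocation below are assumptions, not anything agreed in this thread:

```ini
# Hypothetical tox.ini fragment: an opt-in py37 environment that runs
# wherever a python3.7 interpreter is installed (e.g. from bionic's
# python3.7 package), without waiting for 3.7 to become the default.
[testenv:py37]
basepython = python3.7
commands = stestr run {posargs}
```

Running `tox -e py37` locally (or wiring the same environment into a non-voting CI job) would surface 3.7 incompatibilities without blocking the existing gates.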
Cheers, Thomas Goirand (zigo) From jpichon at redhat.com Wed Aug 8 08:09:41 2018 From: jpichon at redhat.com (Julie Pichon) Date: Wed, 8 Aug 2018 09:09:41 +0100 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: On 8 August 2018 at 00:47, Victoria Martínez de la Cruz wrote: > I'm reaching out to you to let you know that I'll be stepping down as > coordinator for OpenStack next round. I have been contributing to this effort > for several rounds now and I believe it is a good moment for somebody else to > take the lead. You all know how important Outreachy is to me and I'm > grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met along the way. I plan to stay > involved with the internships but leave the coordination tasks to somebody > else. Thanks for doing such a wonderful job and keeping Outreachy going the last few years! :) Julie > If you are interested in becoming an Outreachy coordinator, let me know and > I can share my experience and provide some guidance. > > Thanks, > > Victoria From cdent+os at anticdent.org Wed Aug 8 08:43:00 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 8 Aug 2018 09:43:00 +0100 (BST) Subject: [openstack-dev] Paste unmaintained In-Reply-To: References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> <1533219691-sup-5515@lrrr.local> <1533655006-sup-850@lrrr.local> Message-ID: On Wed, 8 Aug 2018, Thomas Goirand wrote: > If you don't configure uwsgi to do any special logging, then the only > thing you'll see in the log file is client requests, without any kind of > logging from the wsgi application. To have proper logging, one needs to > add, in the uwsgi config file: > > paste-logger = true > > If you do that, then you need the python3-pastescript installed, which > itself depends on the python3-paste package.
> > Really, I don't see how an operator could run without the paste-logger > option activated. Without it, you see nothing in the logs. I'm pretty sure your statements here are not true. In the uwsgi configs for services in devstack, paste-logger is not used. In the uwsgi set up [1] I use in placedock [2], paste-logger is not used. Yet both have perfectly reasonable logs showing a variety of log levels, including request logs at INFO, and server debugging and warnings where you would expect it to be. Can you please point me to where you are seeing these problems? Clearly something is confused somewhere. Is the difference in our experiences that both of the situations I describe above are happy with logging being on stderr and you're talking about being able to configure logging to files, within the application itself? If that's the case then my response would be: don't do that. Let systemd, or your container, or apache2, or whatever process/service orchestration system you have going manage that. That's what they are there for. [1] https://github.com/cdent/placedock/blob/master/shared/placement-uwsgi.ini [2] https://github.com/cdent/placedock -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From zhipengh512 at gmail.com Wed Aug 8 08:45:22 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 8 Aug 2018 16:45:22 +0800 Subject: [openstack-dev] [cyborg]Team Weekly Meeting 2018.08.08 Message-ID: Hi Team, We are rushing towards the end of the Rocky cycle, so let's use the meeting to sync up on any important features still in flight. Starting UTC1400 at #openstack-cyborg, as usual -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co.,
Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From wangpeihuixyz at 126.com Wed Aug 8 08:51:21 2018 From: wangpeihuixyz at 126.com (Frank Wang) Date: Wed, 8 Aug 2018 16:51:21 +0800 (CST) Subject: [openstack-dev] [neutron] Does neutron support QinQ(vlan transparent) ? In-Reply-To: References: <6e1ff2b5.671a.16513747f10.Coremail.wangpeihuixyz@126.com> Message-ID: <30728dd0.101f6.16518bbbbf5.Coremail.wangpeihuixyz@126.com> Awesome! Thanks, I'll take some time to review this patch. We can discuss it further during the review. At 2018-08-08 14:59:21, "Bence Romsics" wrote: >Hi, > >Just about a week ago Li Zhouzhou pushed a change for review to >support vlan transparency with ovs too (building on the relatively new >QinQ support in ovs): > >https://review.openstack.org/576687 > >I have not had time to look into the patch in depth yet, but I guess >reviews are always welcome. I also cc-ed this mail so he/she can chime >in. > >Cheers, >Bence Romsics >On Tue, Aug 7, 2018 at 1:32 PM Sean Mooney wrote: >> >> TL;DR >> it won't work with the ovs agent but "should" work with linux bridge. >> see full message below for details. >> regards >> sean. >> >> the linux bridge agent supports the vlan_transparent option only when >> creating networks with an l3 segmentation type, e.g. vxlan, gre... >> >> ovs using the neutron l2 agent does not support vlan_transparent >> networks because of how that agent uses vlans for tenant isolation on >> the br-int.
>> >> if you deploy openstack with linux bridge networking and then create a >> tenant network of type vxlan with vlan_transparancy set to true and >> your tenants >> generate QinQ traffic with an mtu reduced so that it will fix within >> the vxlan tunnel unfragmented then yes it should be possibly however >> you may need to disable port_security/security groups on the port as >> im not sure if the ip tables firewall driver will correctly handel >> this case. >> >> an alternive to disabling security groups would be to add an explicit >> rule that matched on the etehrnet type and allowed QinQ traffic on >> ingress and egress from the vm. >> >> as far as i am aware this is not tested in the gate so while it should >> work the lack of documentation and test coverage means you will >> likely be one of the first to test it if you >> choose to do so and it may fail for many reasons. >> >> >> On 7 August 2018 at 09:15, Frank Wang wrote: >> > Hello folks, >> > >> > I noted that the API already has the vlan_transparent attribute in the >> > network, Do neutron-agents(linux-bridge, openvswitch) support QinQ? I >> > didn't find any reference materials that could guide me on how to use or >> > configure it. >> > >> > Thank for your time reading this, Any comments would be appreciated. 
>> > >> > >> > >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Aug 8 09:07:00 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 8 Aug 2018 11:07:00 +0200 Subject: [openstack-dev] [all][elections] Project Team Lead Election Conclusion and Results In-Reply-To: <20180808002015.GJ9540@thor.bakeyournoodle.com> References: <20180808002015.GJ9540@thor.bakeyournoodle.com> Message-ID: <3cd303f6-b457-4b51-595b-55bb074d3bb2@openstack.org> Tony Breeds wrote: > Thank you to the electorate, to all those who voted and to all > candidates who put their name forward for Project Team Lead (PTL) in > this election. A healthy, open process breeds trust in our decision > making capability thank you to all those who make this process possible. > > Now for the results of the PTL election process, please join me in > extending congratulations to the following PTLs: > [...] Congrats to all, and thank you for stepping up and helping stewarding our various project teams ! 
-- Thierry Carrez (ttx) From thierry at openstack.org Wed Aug 8 09:14:16 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 8 Aug 2018 11:14:16 +0200 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: <99813704-07df-29ad-d5b5-04a205a1bb8a@openstack.org> Victoria Martínez de la Cruz wrote: > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I had been contributing to this > effort for several rounds now and I believe is a good moment for > somebody else to take the lead. You all know how important is Outreachy > to me and I'm grateful for all the amazing things I've done as part of > the Outreachy program and all the great people I've met in the way. I > plan to keep involved with the internships but leave the coordination > tasks to somebody else. Thanks for helping with this effort for all this time ! -- Thierry Carrez (ttx) From no-reply at openstack.org Wed Aug 8 10:00:29 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 08 Aug 2018 10:00:29 -0000 Subject: [openstack-dev] murano 6.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for murano for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/murano/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/murano/log/?h=stable/rocky Release notes for murano can be found at: http://docs.openstack.org/releasenotes/murano/ From no-reply at openstack.org Wed Aug 8 10:04:01 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 08 Aug 2018 10:04:01 -0000 Subject: [openstack-dev] murano-dashboard 6.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for murano-dashboard for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/murano-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/murano-dashboard/log/?h=stable/rocky Release notes for murano-dashboard can be found at: http://docs.openstack.org/releasenotes/murano-dashboard/ From aspiers at suse.com Wed Aug 8 10:18:28 2018 From: aspiers at suse.com (Adam Spiers) Date: Wed, 8 Aug 2018 11:18:28 +0100 Subject: [openstack-dev] PTG Denver Horns In-Reply-To: <20180808050117.6rmi4k4ubqg4ntem@gentoo.org> References: <20180808050117.6rmi4k4ubqg4ntem@gentoo.org> Message-ID: <20180808101828.g3luqyef7gy6q5kp@pacific.linksys.moosehall> Matthew Thode wrote: >On 18-08-07 23:18:26, David Medberry wrote: >> Requests have finally been made (today, August 7, 2018) to end the horns on >> the train from Denver to Denver International airport (within the city >> limits of Denver.) Prior approval had been given to remove the FLAGGERS >> that were stationed at each crossing intersection. >> >> Of particular note (at the bottom of the article): >> >> There’s no estimate for how long it could take the FRA to approve quiet >> zones. 
>> >> ref: >> https://www.9news.com/article/news/local/next/denver-officially-asks-fra-for-permission-to-quiet-a-line-horns/73-581499094 >> >> I'd recommend bringing your sleeping aids, ear plugs, etc, just in case not >> approved by next month's PTG. (The Renaissance is within Denver proper as >> near as I can tell so that nearby intersection should be covered by this >> ruling/decision if and when it comes down.) > >Thanks for the update, if you are up to it, keeping us informed on this >would be nice, if only for the hilarity. Thanks indeed for the warning. If the approval doesn't go through, we may need to resume the design work started last year; see lines 187 onwards of https://etherpad.openstack.org/p/queens-PTG-feedback From colleen at gazlene.net Wed Aug 8 10:30:11 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 08 Aug 2018 12:30:11 +0200 Subject: [openstack-dev] PTG Denver Horns In-Reply-To: <20180808101828.g3luqyef7gy6q5kp@pacific.linksys.moosehall> References: <20180808050117.6rmi4k4ubqg4ntem@gentoo.org> <20180808101828.g3luqyef7gy6q5kp@pacific.linksys.moosehall> Message-ID: <1533724211.668141.1467312904.7046A12D@webmail.messagingengine.com> On Wed, Aug 8, 2018, at 12:18 PM, Adam Spiers wrote: > Matthew Thode wrote: > >On 18-08-07 23:18:26, David Medberry wrote: > >> Requests have finally been made (today, August 7, 2018) to end the horns on > >> the train from Denver to Denver International airport (within the city > >> limits of Denver.) Prior approval had been given to remove the FLAGGERS > >> that were stationed at each crossing intersection. > >> > >> Of particular note (at the bottom of the article): > >> > >> There’s no estimate for how long it could take the FRA to approve quiet > >> zones. 
> >> ref: > >> https://www.9news.com/article/news/local/next/denver-officially-asks-fra-for-permission-to-quiet-a-line-horns/73-581499094 > >> > >> I'd recommend bringing your sleeping aids, ear plugs, etc, just in case not > >> approved by next month's PTG. (The Renaissance is within Denver proper as > >> near as I can tell so that nearby intersection should be covered by this > >> ruling/decision if and when it comes down.) > > > >Thanks for the update, if you are up to it, keeping us informed on this > >would be nice, if only for the hilarity. > > Thanks indeed for the warning. > > If the approval doesn't go through, we may need to resume the design > work started last year; see lines 187 onwards of > > https://etherpad.openstack.org/p/queens-PTG-feedback Luckily the client work for this is already started: https://github.com/dtroyer/osc-choochoo From aschadin at sbcloud.ru Wed Aug 8 10:37:05 2018 From: aschadin at sbcloud.ru (Чадин Александр Сергеевич) Date: Wed, 8 Aug 2018 10:37:05 +0000 Subject: [openstack-dev] [watcher] Stein etherpad Message-ID: Greetings Watcher team, I’ve created an etherpad page[1] where we can discuss long-term topics, blueprints for the Stein release, comments on the last release, and whatever you want about Watcher. I’ll fill it with blueprints starting next week (I’m still on vacation till the EOW). Come and share your feedback! [1]: https://etherpad.openstack.org/p/stein-watcher-ptg Best Regards, ____ Alex -------------- next part -------------- An HTML attachment was scrubbed...
URL: From liu.xuefeng1 at zte.com.cn Wed Aug 8 11:05:50 2018 From: liu.xuefeng1 at zte.com.cn (liu.xuefeng1 at zte.com.cn) Date: Wed, 8 Aug 2018 19:05:50 +0800 (CST) Subject: [openstack-dev] Reply: Re: [python-senlinclient][release][requirements] FFE for python-senlinclient 1.8.0 In-Reply-To: <20180807202909.GA11176@sm-workstation> References: 201808072344040292650@zte.com.cn, 20180807202909.GA11176@sm-workstation Message-ID: <201808081905507001038@zte.com.cn> Hi Sean, Yes, we just need upper-constraints raised for this. Thanks, XueFeng Original mail From: Sean McGinnis To: OpenStack Development Mailing List (not for usage questions) Date: 2018-08-08 04:30 Subject: Re: [openstack-dev] [python-senlinclient][release][requirements] FFE for python-senlinclient 1.8.0 On Tue, Aug 07, 2018 at 03:25:39PM -0500, Sean McGinnis wrote: > Added requirements tag to the subject since this is a requirements FFE. > > On Tue, Aug 07, 2018 at 11:44:04PM +0800, liu.xuefeng1 at zte.com.cn wrote: > > hi, all > > > > > > I'd like to request an FFE to release 1.8.0 (stable/rocky) > > for python-senlinclient. > > > > The CURRENT_API_VERSION has been changed to "1.10", we need this release. > > XueFeng, do you just need upper-constraints raised for this, or also the minimum version? From that last sentence, I'm assuming you need to ensure only 1.8.0 is used for Rocky deployments. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From wangzh21 at lenovo.com Wed Aug 8 11:10:56 2018 From: wangzh21 at lenovo.com (Zhenghao ZH21 Wang) Date: Wed, 8 Aug 2018 11:10:56 +0000 Subject: [openstack-dev] [Cyborg] Agent - Conductor update Message-ID: Hi Sundar, All looks good to me, and I agree with the new solution you suggested. But I am still confused about why we would lose some device info if we do the diff on the agent. Could you give me an example explaining how it would be lost and what we would lose? Best regards Zhenghao Wang Cloud Researcher Email: wangzh21 at lenovo.com Tel: (+86) 18519550096 Enterprise & Cloud Research Lab, Lenovo Research No.6 Shangdi West Road, Haidian District, Beijing -----Original Message----- From: openstack-dev-request at lists.openstack.org Sent: Tuesday, August 07, 2018 10:22 PM To: openstack-dev at lists.openstack.org Subject: [External] OpenStack-dev Digest, Vol 76, Issue 7 Send OpenStack-dev mailing list submissions to openstack-dev at lists.openstack.org To subscribe or unsubscribe via the World Wide Web, visit http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev or, via email, send a message with subject or body 'help' to openstack-dev-request at lists.openstack.org You can reach the person managing the list at openstack-dev-owner at lists.openstack.org When replying, please edit your Subject line so it is more specific than "Re: Contents of OpenStack-dev digest..." Today's Topics: 1. Re: Paste unmaintained (Lance Bragstad) 2. Re: [tempest] Small doubt in Tempest setup (Goutham Pratapa) 3. Re: Paste unmaintained (Thomas Herve) 4. Re: [tempest] Small doubt in Tempest setup (Goutham Pratapa) 5. Re: [tripleo] Proposing Lukas Bezdicka core on TripleO (Alex Schultz) 6. Re: [tripleo] Proposing Lukas Bezdicka core on TripleO (Dougal Matthews) 7. Re: [releease][ptl] Missing and forced releases (Spyros Trigazis) 8. UC nomination period is now open! (Ed Leafe) 9. [tripleo] 3rd party ovb jobs are down (Wesley Hayutin) 10.
Re: OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate (Thomas Goirand) 11. Re: [tripleo] EOL process for newton branches (Andreas Jaeger) 12. Re: [release][requirements][python-magnumclient] Magnumclient FFE (Matthew Thode) 13. Denver PTG Registration Price Increases on August 23 (Kendall Waters) 14. zaqar-ui 5.0.0.0rc1 (rocky) (no-reply at openstack.org) 15. zaqar 7.0.0.0rc1 (rocky) (no-reply at openstack.org) 16. The state of the ironic universe - August 6th, 2018 (Julia Kreger) 17. Re: [OpenStack-dev][heat][keystone][security sig][all] SSL option for keystone session (Zane Bitter) 18. Re: OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate (Sean McGinnis) 19. Re: Paste unmaintained (Sean McGinnis) 20. Re: [i18n] Edge and Containers whitepapers ready for translation (Jimmy McArthur) 21. Re: [release][requirements][python-magnumclient] Magnumclient FFE (Spyros Trigazis) 22. Re: [release][requirements][python-magnumclient] Magnumclient FFE (Matthew Thode) 23. [neutron] Bug deputy report week July 30th - August 5th (Miguel Lavalle) 24. Re: OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate (Zane Bitter) 25. [python3] champions, please review the updated process (Doug Hellmann) 26. Re: [tripleo] 3rd party ovb jobs are down (Wesley Hayutin) 27. [nova] StarlingX diff analysis (Matt Riedemann) 28. [Cyborg] Agent - Conductor update (Nadathur, Sundar) 29. [all][election][senlin][tacker] Last chance to vote (Tony Breeds) 30. [Blazar] Stein etherpad (Masahito MUROI) 31. Re: [tripleo] EOL process for newton branches (Tony Breeds) 32. Re: [tripleo] EOL process for newton branches (Tony Breeds) 33. Re: [Openstack-operators] [nova] StarlingX diff analysis (Flint WALRUS) 34. Re: [tripleo] EOL process for newton branches (Andreas Jaeger) 35. Re: [i18n] Edge and Containers whitepapers ready for translation (Frank Kloeker) 36. [neutron] Does neutron support QinQ(vlan transparent) ? (Frank Wang) 37. 
[tc] [all] TC Report 18-32 (Chris Dent) 38. Re: [neutron] Does neutron support QinQ(vlan transparent) ? (Sean Mooney) 39. [nova]Notification update week 32 (Balázs Gibizer) 40. Re: OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate (Thomas Goirand) 41. [requirements][release] FFE for openstacksdk 0.17.2 (Monty Taylor) 42. Re: OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate (Sean Mooney) 43. Re: [Openstack-operators] [nova] StarlingX diff analysis (Matt Riedemann) 44. Re: OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate (Thomas Goirand) 45. Re: [tripleo] 3rd party ovb jobs are down (Wesley Hayutin) 46. Re: [requirements][release] FFE for openstacksdk 0.17.2 (Matthew Thode) ---------------------------------------------------------------------- Message: 1 Date: Mon, 6 Aug 2018 09:53:36 -0500 From: Lance Bragstad To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] Paste unmaintained Message-ID: Content-Type: text/plain; charset="utf-8" On 08/02/2018 09:36 AM, Chris Dent wrote: > On Thu, 2 Aug 2018, Stephen Finucane wrote: > >> Given that multiple projects are using this, we may want to think about >> reaching out to the author and seeing if there's anything we can do to >> at least keep this maintained going forward. I've talked to cdent about >> this already but if anyone else has ideas, please let me know. > > I've sent some exploratory email to Ian, the original author, to get > a sense of where things are and whether there's an option for us (or > if for some reason us wasn't okay, me) to adopt it. If email doesn't > land I'll try again with other media > > I agree with the idea of trying to move away from using it, as > mentioned elsewhere in this thread and in IRC, but it's not a simple > step as at least in some projects we are using paste files as > configuration that people are allowed (and do) change. 
Moving away > from that is the hard part, not figuring out how to load WSGI > middleware in a modern way. ++ Keystone has been battling this specific debate for several releases. The mutable configuration goal in addition to some much needed technical debt cleanup was the final nail. Long story short, moving off of paste eases the implementations for initiatives we've had in the pipe for a long time. We started an effort to move to flask in Rocky. Morgan has been working through the migration since June, and it's been quite involved [0]. At one point he mentioned trying to write-up how he approached the migration for keystone. I understand that not every project structures their APIs the same way, but a high-level guide might be helpful for some if the long-term goal is to eventually move off of paste (e.g. how we approached it, things that tripped us up, how we prepared the code base for flask, et cetera). I'd be happy to help coordinate a session or retrospective at the PTG if other groups find that helpful. [0] https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+branch:master+topic:bug/1776504 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: ------------------------------ Message: 2 Date: Mon, 6 Aug 2018 20:25:09 +0530 From: Goutham Pratapa To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [tempest] Small doubt in Tempest setup Message-ID: Content-Type: text/plain; charset="utf-8" Done, thanks afazekas. Thanks, Goutham On Mon, 6 Aug 2018 at 7:27 PM, Attila Fazekas wrote: > I tried to be quick and got it wrong. ;-) > > Here are the working ways: > > On Mon, Aug 6, 2018 at 3:49 PM, Attila Fazekas > wrote: > >> Please use ostestr or stestr instead of testr. >> >> $ git clone https://github.com/openstack/tempest >> $ cd tempest/ >> $ stestr init >> $ stestr list >> >> $ git clone https://github.com/openstack/tempest >> $ cd tempest/ >> $ ostestr -l # old way, also works, does two steps >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Mon, 6 Aug 2018 17:13:23 +0200 From: Thomas Herve To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] Paste unmaintained Message-ID: Content-Type: text/plain; charset="UTF-8" On Thu, Aug 2, 2018 at 4:27 PM, Doug Hellmann wrote: > Excerpts from Stephen Finucane's message of 2018-08-02 15:11:25 +0100: >> tl;dr: It seems Paste [1] may be entering unmaintained territory and we >> may need to do something about it. >> >> I was cleaning up some warning messages that nova was issuing this >> morning and noticed a few coming from Paste.
I was going to draft a PR >> to fix this, but a quick browse through the Bitbucket project [2] >> suggests there has been little to no activity on that for well over a >> year. One particular open PR - "Python 3.7 support" - is particularly >> concerning, given the recent mailing list threads on the matter. >> >> Given that multiple projects are using this, we may want to think about >> reaching out to the author and seeing if there's anything we can do to >> at least keep this maintained going forward. I've talked to cdent about >> this already but if anyone else has ideas, please let me know. >> >> Stephen >> >> [1] https://pypi.org/project/Paste/ >> [2] https://bitbucket.org/ianb/paste/ >> [3] https://bitbucket.org/ianb/paste/pull-requests/41 >> > > The last I heard, a few years ago Ian moved away from Python to > JavaScript as part of his work at Mozilla. The support around > paste.deploy has been sporadic since then, and was one of the reasons > we discussed a goal of dropping paste.ini as a configuration file. > > Do we have a real sense of how many of the projects below, which > list Paste in requirements.txt, actually use it directly or rely > on it for configuration? > > Doug > > $ beagle search --ignore-case --file requirements.txt 'paste[><=! 
]' > +----------------------------------------+--------------------------------------------------------+------+--------------------+ > | Repository | Filename | Line | Text | > +----------------------------------------+--------------------------------------------------------+------+--------------------+ > | airship-armada | requirements.txt | 8 | Paste>=2.0.3 | > | airship-deckhand | requirements.txt | 12 | Paste # MIT | > | anchor | requirements.txt | 9 | Paste # MIT | > | apmec | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | barbican | requirements.txt | 22 | Paste>=2.0.2 # MIT | > | cinder | requirements.txt | 37 | Paste>=2.0.2 # MIT | > | congress | requirements.txt | 11 | Paste>=2.0.2 # MIT | > | designate | requirements.txt | 25 | Paste>=2.0.2 # MIT | > | ec2-api | requirements.txt | 20 | Paste # MIT | > | freezer-api | requirements.txt | 8 | Paste>=2.0.2 # MIT | > | gce-api | requirements.txt | 16 | Paste>=2.0.2 # MIT | > | glance | requirements.txt | 31 | Paste>=2.0.2 # MIT | > | glare | requirements.txt | 29 | Paste>=2.0.2 # MIT | > | karbor | requirements.txt | 28 | Paste>=2.0.2 # MIT | > | kingbird | requirements.txt | 7 | Paste>=2.0.2 # MIT | > | manila | requirements.txt | 30 | Paste>=2.0.2 # MIT | > | meteos | requirements.txt | 29 | Paste # MIT | > | monasca-events-api | requirements.txt | 6 | Paste # MIT | > | monasca-log-api | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | murano | requirements.txt | 28 | Paste>=2.0.2 # MIT | > | neutron | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | nova | requirements.txt | 19 | Paste>=2.0.2 # MIT | > | novajoin | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | oslo.service | requirements.txt | 17 | Paste>=2.0.2 # MIT | > | requirements | global-requirements.txt | 187 | Paste # MIT | > | searchlight | requirements.txt | 27 | Paste>=2.0.2 # MIT | > | tacker | requirements.txt | 6 | Paste>=2.0.2 # MIT | > | tatu | requirements.txt | 18 | Paste # MIT | > | tricircle | requirements.txt | 7 | Paste>=2.0.2 # MIT 
| > | trio2o | requirements.txt | 7 | Paste # MIT | > | trove | requirements.txt | 11 | Paste>=2.0.2 # MIT | > | upstream-institute-virtual-environment | elements/upstream-training/static/tmp/requirements.txt | 147 | Paste==2.0.3 | If you look for PasteDeploy you'll find quite a few more. I know at least Heat and Swift don't depend on Paste but on PasteDeploy. -- Thomas ------------------------------ Message: 4 Date: Mon, 6 Aug 2018 20:57:18 +0530 From: Goutham Pratapa To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [tempest] Small doubt in Tempest setup Message-ID: Content-Type: text/plain; charset="utf-8" stestr worked, thanks, but I'm getting the same error for ostestr -l. Any idea what to do? On Mon, Aug 6, 2018 at 7:27 PM, Attila Fazekas wrote: > I tried to be quick and got it wrong. ;-) > > Here are the working ways: > > On Mon, Aug 6, 2018 at 3:49 PM, Attila Fazekas > wrote: > >> Please use ostestr or stestr instead of testr. >> >> $ git clone https://github.com/openstack/tempest >> $ cd tempest/ >> $ stestr init >> $ stestr list >> >> $ git clone https://github.com/openstack/tempest >> $ cd tempest/ >> $ ostestr -l # old way, also works, does two steps >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed...
URL: ------------------------------ Message: 5 Date: Mon, 6 Aug 2018 09:28:11 -0600 From: Alex Schultz To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO Message-ID: Content-Type: text/plain; charset="UTF-8" +1 On Mon, Aug 6, 2018 at 7:19 AM, Bogdan Dobrelya wrote: > +1 > > On 8/1/18 1:31 PM, Giulio Fidente wrote: >> >> Hi, >> >> I would like to propose Lukas Bezdicka core on TripleO. >> >> Lukas did a lot work in our tripleoclient, tripleo-common and >> tripleo-heat-templates repos to make FFU possible. >> >> FFU, which is meant to permit upgrades from Newton to Queens, requires >> in depth understanding of many TripleO components (for example Heat, >> Mistral and the TripleO client) but also of specific TripleO features >> which were added during the course of the three releases (for example >> config-download and upgrade tasks). I believe his FFU work to have been >> very challenging. >> >> Given his broad understanding, more recently Lukas started helping doing >> reviews in other areas. 
>> >> I am so sure he'll be a great addition to our group that I am not even >> looking for comments, just votes :D >> > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ------------------------------ Message: 6 Date: Mon, 6 Aug 2018 16:50:12 +0100 From: Dougal Matthews To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO Message-ID: Content-Type: text/plain; charset="utf-8" +1 On 6 August 2018 at 16:28, Alex Schultz wrote: > +1 > > On Mon, Aug 6, 2018 at 7:19 AM, Bogdan Dobrelya > wrote: > > +1 > > > > On 8/1/18 1:31 PM, Giulio Fidente wrote: > >> > >> Hi, > >> > >> I would like to propose Lukas Bezdicka core on TripleO. > >> > >> Lukas did a lot work in our tripleoclient, tripleo-common and > >> tripleo-heat-templates repos to make FFU possible. > >> > >> FFU, which is meant to permit upgrades from Newton to Queens, requires > >> in depth understanding of many TripleO components (for example Heat, > >> Mistral and the TripleO client) but also of specific TripleO features > >> which were added during the course of the three releases (for example > >> config-download and upgrade tasks). I believe his FFU work to have been > >> very challenging. > >> > >> Given his broad understanding, more recently Lukas started helping doing > >> reviews in other areas. 
> >> > >> I am so sure he'll be a great addition to our group that I am not even > >> looking for comments, just votes :D > >> > > > > > > -- > > Best regards, > > Bogdan Dobrelya, > > Irc #bogdando > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 7 Date: Mon, 6 Aug 2018 18:34:42 +0200 From: Spyros Trigazis To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [releease][ptl] Missing and forced releases Message-ID: Content-Type: text/plain; charset="utf-8" Hello, I have requested a release for python-magnumclient [0]. Per Doug Hellmann's comment in [0], I am requesting a FFE for python-magnumclient. Apologies for the inconvenience, Spyros [0] https://review.openstack.org/#/c/589138/ On Fri, 3 Aug 2018 at 18:52, Sean McGinnis wrote: > Today the release team reviewed the rocky deliverables and their releases > done > so far this cycle. There are a few areas of concern right now. > > Unreleased cycle-with-intermediary > ================================== > There is a much longer list than we would like to see of > cycle-with-intermediary deliverables that have not done any releases so > far in > Rocky. 
These deliverables should not wait until the very end of the cycle > to > release so that pending changes can be made available earlier and there > are no > last minute surprises. > > For owners of cycle-with-intermediary deliverables, please take a look at > what > you have merged that has not been released and consider doing a release > ASAP. > We are not far from the final deadline for these projects, but it would > still > be good to do a release ahead of that to be safe. > > Deliverables that miss the final deadline will be at risk of being dropped > from > the Rocky coordinated release. > > Unreleased client libraries > =========================== > The following client libraries have not done a release: > > python-cloudkittyclient > python-designateclient > python-karborclient > python-magnumclient > python-searchlightclient* > python-senlinclient > python-tricircleclient > > The deadline for client library releases was last Thursday, July 26. This > coming Monday the release team will force a release on HEAD for these > clients. > The release I proposed in [0] is the current HEAD of the master branch. > > * python-searchlightclient is currently planned on being dropped due to > searchlight itself not having met the minimum of two milestone releases > during the rocky cycle. > > Missing milestone 3 > =================== > The following projects missed tagging a milestone 3 release: > > cinder > designate > freezer > mistral > searchlight > > Following policy, a milestone 3 tag will be forced on HEAD for these > deliverables on Monday. > > Freezer and searchlight missed previous milestone deadlines and will be > dropped > from > the Rocky coordinated release. > > If there are any questions or concerns, please respond here or get ahold of > someone from the release management team in the #openstack-release channel.
> > -- > Sean McGinnis (smcginnis) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 8 Date: Mon, 6 Aug 2018 11:52:38 -0500 From: Ed Leafe To: openstack-sigs at lists.openstack.org, OpenStack Operators , "OpenStack Development Mailing List (not for usage questions)" , openstack at lists.openstack.org, user-committee Subject: [openstack-dev] UC nomination period is now open! Message-ID: <277DC0C9-C34D-47D9-B14F-81E41F136909 at leafe.com> Content-Type: text/plain; charset=utf-8 As the subject says, the nomination period for the summer[0] User Committee elections is now open. Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election). Self-nomination is common; no third party nomination is required. Nominations are made by sending an email to the user-committee at lists.openstack.org mailing list, with the subject: “UC candidacy” by August 17, 05:59 UTC. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. [0] Sorry, southern hemisphere people! -- Ed Leafe ------------------------------ Message: 9 Date: Mon, 6 Aug 2018 10:56:49 -0600 From: Wesley Hayutin To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] [tripleo] 3rd party ovb jobs are down Message-ID: Content-Type: text/plain; charset="utf-8" Greetings, There is currently an unplanned outage atm for the tripleo 3rd party OVB based jobs.
We will contact the list when there are more details. Thank you! -- Wes Hayutin Associate MANAGER Red Hat w hayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 10 Date: Mon, 6 Aug 2018 19:11:37 +0200 From: Thomas Goirand To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate Message-ID: <57e9dffb-26cd-e96a-cac9-49942f73ab11 at debian.org> Content-Type: text/plain; charset=utf-8 On 08/02/2018 10:43 AM, Andrey Kurilin wrote: > There's also some "raise StopIteration" issues in: > - ceilometer > - cinder > - designate > - glance > - glare > - heat > - karbor > - manila > - murano > - networking-ovn > - neutron-vpnaas > - nova > - rally > > > Can you provide any traceback or steps to reproduce the issue for Rally > project ? I'm not sure there's any. The only thing I know is that it has stop StopIteration stuff, but I'm not sure if they are part of generators, in which case they should simply be replaced by "return" if you want it to be py 3.7 compatible. I didn't have time to investigate these, but at least Glance was affected, and a patch was sent (as well as an async patch). None of them has been merged yet: https://review.openstack.org/#/c/586050/ https://review.openstack.org/#/c/586716/ That'd be ok if at least there was some reviews. It looks like nobody cares but Debian & Ubuntu people... 
:( Cheers, Thomas Goirand (zigo) ------------------------------ Message: 11 Date: Mon, 6 Aug 2018 19:27:37 +0200 From: Andreas Jaeger To: "OpenStack Development Mailing List (not for usage questions)" , "tony at bakeyournoodle.com >> Tony Breeds" Subject: Re: [openstack-dev] [tripleo] EOL process for newton branches Message-ID: <5565c598-7327-b7f3-773b-2cfb26c8326b at suse.com> Content-Type: text/plain; charset="utf-8"; format=flowed Tony, On 2018-07-19 06:59, Tony Breeds wrote: > On Wed, Jul 18, 2018 at 08:08:16PM -0400, Emilien Macchi wrote: >> Option 2, EOL everything. >> Thanks a lot for your help on this one, Tony. > > No problem. > > I've created: > https://review.openstack.org/583856 > to tag final releases for tripleo deliverables and then mark them as > EOL. This one has merged now. > > Once that merges we can arrange for someone, with appropriate > permissions to run: > > # EOL repos belonging to tripleo > eol_branch.sh -- stable/newton newton-eol \ > openstack/instack openstack/instack-undercloud \ > openstack/os-apply-config openstack/os-collect-config \ > openstack/os-net-config openstack/os-refresh-config \ > openstack/puppet-tripleo openstack/python-tripleoclient \ > openstack/tripleo-common openstack/tripleo-heat-templates \ > openstack/tripleo-image-elements \ > openstack/tripleo-puppet-elements openstack/tripleo-ui \ > openstack/tripleo-validations Tony, will you coordinate with infra to run this yourself again - or let them run it for you, please? Note that we removed the script with retiring release-tools repo, I propose to readd with https://review.openstack.org/589236 and https://review.openstack.org/589237 and would love your review on these, please. I want to be sure that we import the right version... thanks, Andreas > > Yours Tony. 
> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 ------------------------------ Message: 12 Date: Mon, 6 Aug 2018 12:36:21 -0500 From: Matthew Thode To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [release][requirements][python-magnumclient] Magnumclient FFE Message-ID: <20180806173621.e7zgkkewmkg6qwkj at gentoo.org> Content-Type: text/plain; charset="utf-8" On 18-08-06 18:34:42, Spyros Trigazis wrote: > Hello, > > I have requested a release for python-magnumclient [0]. > Per Doug Hellmann's comment in [0], I am requesting a FFE for > python-magnumclient. > My question to you is if this needs to be a constraints-only thing or if there is some project that REQUIRES this new version to work (in which case that project needs to update its exclusions or minimum). -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: ------------------------------ Message: 13 Date: Mon, 6 Aug 2018 12:36:26 -0500 From: Kendall Waters To: OpenStack-operators at lists.openstack.org, "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] Denver PTG Registration Price Increases on August 23 Message-ID: <00AB295F-05B2-4DE6-8D56-31BC924D9123 at openstack.org> Content-Type: text/plain; charset="utf-8" Hi everyone, The September 2018 PTG in Denver is right around the corner!
Friendly reminder that ticket prices will increase to USD $599 on August 22 at 11:59pm PT (August 23 at 6:59 UTC). So purchase your tickets before the price increases. Register here: https://denver2018ptg.eventbrite.com Our discounted hotel block is filling up and will sell out. The last date to book in the hotel block is August 20 so book now here: www.openstack.org/ptg If you have any questions, please email ptg at openstack.org . Cheers, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 14 From: no-reply at openstack.org To: openstack-dev at lists.openstack.org Subject: [openstack-dev] zaqar-ui 5.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for zaqar-ui for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/zaqar-ui/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/zaqar-ui/log/?h=stable/rocky Release notes for zaqar-ui can be found at: http://docs.openstack.org/releasenotes/zaqar-ui/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/zaqar-ui and tag it *rocky-rc-potential* to bring it to the zaqar-ui release crew's attention. ------------------------------ Message: 15 From: no-reply at openstack.org To: openstack-dev at lists.openstack.org Subject: [openstack-dev] zaqar 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for zaqar for the end of the Rocky cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/zaqar/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/zaqar/log/?h=stable/rocky Release notes for zaqar can be found at: http://docs.openstack.org/releasenotes/zaqar/ ------------------------------ Message: 16 Date: Mon, 6 Aug 2018 13:53:03 -0400 From: Julia Kreger To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] The state of the ironic universe - August 6th, 2018 Message-ID: Content-Type: text/plain; charset="UTF-8" News! ===== In the past month we released ironic 11.0 and now this week we expect to release ironic 11.1. With 11.1, ironic has: * The ``deploy_steps`` framework in order to give better control over what consists of a deployment. * BIOS settings management interfaces for the ``ilo`` and ``irmc`` hardware types. * Ramdisk deploy interface has merged. We await your bug reports! * Conductors can now be grouped into specific failure domains with specific nodes assigned to those failure domains. This allows for an operator to configure a conductor in data center A to manage only the hardware in data center A, and not data center B. * Capability has been added to the API to allow driver interface values to be reset to the conductor default values when the driver name is being changed. * Support for partition images with ppc64le hardware has merged. Previously operators could only use whole disk images on that architecture. * Out-of-band RAID configuration is now available with the ``irmc`` hardware type. * Several bug fixes related to cleaning, PXE, and UEFI booting. In slightly depressing news the ``xclarity`` hardware type has been deprecated. 
This is due to the fact that the third-party CI for the hardware type has not yet been established. The team working on the hardware type is continuing to work on getting CI up and running, and we expect to rescind the deprecation in the next release of ironic. Stein Planning -------------- Our Stein planning etherpad[0] has had some activity and we have started to place procedural -2s on major changes which will impact the Rocky release. Expect these to be removed once we've released Ironic 11.1. Recent New Specifications ========================= * Support for SmartNICs[1] * Rework inspector boot management[2] Specifications starting to see activity ======================================= * Make IPA to ironic API communication optional[3] * Cleanhold state to enable cleaning steps collection[4] Recently merged specifications ============================== * Owner information storage[5] * Direct Deploy with local HTTP server[6] [0]: https://etherpad.openstack.org/p/ironic-stein-ptg [1]: https://review.openstack.org/582767 [2]: https://review.openstack.org/589230 [3]: https://review.openstack.org/#/c/212206 [4]: https://review.openstack.org/507910 [5]: https://review.openstack.org/560089 [6]: https://review.openstack.org/504039 ------------------------------ Message: 17 Date: Mon, 6 Aug 2018 14:58:37 -0400 From: Zane Bitter To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [OpenStack-dev][heat][keystone][security sig][all] SSL option for keystone session Message-ID: Content-Type: text/plain; charset=utf-8; format=flowed On 06/08/18 00:46, Rico Lin wrote: > Hi all > I would like to trigger a discussion on providing SSL content directly > for the Keystone session. Since all teams use SSL, I believe this may > concern other projects as well. > > As we consider implementing a customized SSL option for Heat remote stack > [3] (and multicloud support [1]), I'm trying to figure out what is the > best solution for this.
The current SSL option in the Keystone session doesn't > allow us to provide the CERT/key content directly; it only allows us to > provide a CERT/key file path. This is actually a limitation of > Python versions below 3.7 ([2]). Since we cannot easily get > rid of earlier Python versions, we are trying to figure out what is the best > solution we can approach here. > > Some approaches we can think about: using a pipe, or creating a file, > encrypting it, and sending the file path to the Keystone session. > > I would like to hear advice or suggestions from all on how > we can approach this. Create a temporary directory using tempfile.mkdtemp() as shown here: https://security.openstack.org/guidelines/dg_using-temporary-files-securely.html#correct This probably only needs to happen once per process. (Also I would pass mode=0o600 when creating the file instead of using umask().) Assuming the data gets read only once, then I'd suggest rather than using a tempfile, create a named pipe using os.mkfifo(), open it, and write the data. Then pass the filename of the FIFO to the SSL lib. Close it again after and remove the pipe.
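Zane's first suggestion (a private tempfile.mkdtemp() directory plus an owner-only file, instead of relying on umask) can be sketched as follows. The function and file names are illustrative, not from heat or keystoneauth.

```python
# Sketch of the tempfile approach suggested above: persist in-memory
# cert/key bytes to a private file so an SSL library that only accepts
# file paths can consume them. Illustrative names, not project code.
import os
import tempfile

def write_secret_to_path(data: bytes, name: str) -> str:
    """Write secret bytes with owner-only permissions; return the path."""
    tmpdir = tempfile.mkdtemp()  # created with mode 0o700
    path = os.path.join(tmpdir, name)
    # O_EXCL guards against a pre-existing file at this path; the explicit
    # 0o600 mode keeps the file private regardless of the process umask
    # for group/other bits.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    return path
```

The FIFO variant Zane describes needs a concurrent reader (the writer blocks until the SSL library opens the pipe), so the plain-file version above is the simpler starting point.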
> [1] https://etherpad.openstack.org/p/ptg-rocky-multi-cloud > [2] https://www.python.org/dev/peps/pep-0543/ > [3] https://review.openstack.org/#/c/480923/ >  -- > May The Force of OpenStack Be With You, > Rico Lin > irc: ricolin > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > ------------------------------ Message: 18 Date: Mon, 6 Aug 2018 19:02:41 +0000 From: Sean McGinnis To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate Message-ID: <20180806190241.GA3368 at devvm1> Content-Type: text/plain; charset=us-ascii > > I didn't have time to investigate these, but at least Glance was > affected, and a patch was sent (as well as an async patch). None of them > has been merged yet: > > https://review.openstack.org/#/c/586050/ > https://review.openstack.org/#/c/586716/ > > That'd be ok if at least there was some reviews. It looks like nobody > cares but Debian & Ubuntu people... :( > Keep in mind that your priorities are different than everyone else's. There are large parts of the community still working on Python 3.5 support (our officially supported Python 3 version), as well as smaller teams overall working on things like critical bugs. Unless and until we declare Python 3.7 as our new target (which I don't think we are ready to do yet), these kinds of patches will be on a best-effort basis. Making sure that duplicate patches are not pushed up will also help increase the chances that they will eventually make it through as well.
Sean ------------------------------ Message: 19 Date: Mon, 6 Aug 2018 19:06:35 +0000 From: Sean McGinnis To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] Paste unmaintained Message-ID: <20180806190634.GB3368 at devvm1> Content-Type: text/plain; charset=us-ascii On Mon, Aug 06, 2018 at 09:53:36AM -0500, Lance Bragstad wrote: > > > > > Morgan has been working through the migration since June, and it's been > quite involved [0]. At one point he mentioned trying to write-up how he > approached the migration for keystone. I understand that not every > project structures their APIs the same way, but a high-level guide might > be helpful for some if the long-term goal is to eventually move off of > paste (e.g. how we approached it, things that tripped us up, how we > prepared the code base for flask, et cetera). > > I'd be happy to help coordinate a session or retrospective at the PTG if > other groups find that helpful. > I would find this very useful. I'm not sure the Cinder team has the resources to tackle something like this immediately, but having a better understanding of what would be involved would really help scope the work. And if we have existing examples to follow and at least an outline of the steps to do the work, it might be a good low-hanging-fruit type of thing for someone to tackle if they are looking to get involved. 
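The "raise StopIteration" failures Thomas lists earlier in this digest (ceilometer, cinder, glance, etc.) come from PEP 479, which is mandatory in Python 3.7: a StopIteration that escapes a generator body is turned into a RuntimeError. A minimal illustration of the breakage and the fix (a plain return):

```python
# Illustration of the PEP 479 change behind the Python 3.7 breakage
# discussed above: in 3.7+, a StopIteration raised inside a generator
# body is converted into RuntimeError instead of silently ending it.

def broken(items):
    for item in items:
        if item is None:
            raise StopIteration  # ended iteration pre-3.7; RuntimeError on 3.7+
        yield item

def fixed(items):
    for item in items:
        if item is None:
            return  # the correct way to end a generator early
        yield item
```

This is why many of those patches are mechanical: each `raise StopIteration` inside a generator becomes a `return`.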
------------------------------ Message: 20 Date: Mon, 06 Aug 2018 14:07:24 -0500 From: Jimmy McArthur To: Frank Kloeker Cc: "OpenStack Development Mailing List \(not for usage questions\)" , Ildiko Vancsa , Sebastian Marcet Subject: Re: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation Message-ID: <5B689C6C.2010006 at openstack.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed A heads up that the Translators are now listed at the bottom of the page as well, along with the rest of the paper contributors: https://www.openstack.org/edge-computing/cloud-edge-computing-beyond-the-data-center?lang=ja_JP Cheers! Jimmy Frank Kloeker wrote: > Hi Jimmy, > > thanks for announcement. Great stuff! It looks really great and it's > easy to navigate. I think a special thanks goes to Sebastian for > designing the pages. One small remark: have you tried text-align: > justify? I think it would be a little bit more readable, like a > science paper (German word is: Ordnung) > I put the projects again on the frontpage of the translation platform, > so we'll get more translations shortly. > > kind regards > > Frank > > Am 2018-08-02 21:07, schrieb Jimmy McArthur: >> The Edge and Containers translations are now live. As new >> translations become available, we will add them to the page. >> >> https://www.openstack.org/containers/ >> https://www.openstack.org/edge-computing/ >> >> Note that the Chinese translation has not been added to Zanata at this >> time, so I've left the PDF download up on that page. >> >> Thanks everyone and please let me know if you have questions or >> concerns! >> >> Cheers! >> Jimmy >> >> Jimmy McArthur wrote: >>> Frank, >>> >>> We expect to have these papers up this afternoon. I'll update this >>> thread when we do. >>> >>> Thanks! >>> Jimmy >>> >>> Frank Kloeker wrote: >>>> Hi Sebastian, >>>> >>>> okay, it's translated now. In Edge whitepaper is the problem with >>>> XML-Parsing of the term AT&T. 
Don't know how to escape this. Maybe >>>> you will see the warning during import too. >>>> >>>> kind regards >>>> >>>> Frank >>>> >>>> Am 2018-07-30 20:09, schrieb Sebastian Marcet: >>>>> Hi Frank, >>>>> i was double checking pot file and realized that original pot missed >>>>> some parts of the original paper (subsections of the paper) >>>>> apologizes >>>>> on that >>>>> i just re uploaded an updated pot file with missing subsections >>>>> >>>>> regards >>>>> >>>>> On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker >>>>> wrote: >>>>> >>>>>> Hi Jimmy, >>>>>> >>>>>> from the GUI I'll get this link: >>>>>> >>>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>>> >>>>>> [1] >>>>>> >>>>>> paper version are only in container whitepaper: >>>>>> >>>>>> >>>>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>>> >>>>>> [2] >>>>>> >>>>>> In general there is no group named papers >>>>>> >>>>>> kind regards >>>>>> >>>>>> Frank >>>>>> >>>>>> Am 2018-07-30 17:06, schrieb Jimmy McArthur: >>>>>> Frank, >>>>>> >>>>>> We're getting a 404 when looking for the pot file on the Zanata API: >>>>>> >>>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>>> >>>>>> [3] >>>>>> >>>>>> As a result, we can't pull the po files. Any idea what might be >>>>>> happening? >>>>>> >>>>>> Seeing the same thing with both papers... >>>>>> >>>>>> Thank you, >>>>>> Jimmy >>>>>> >>>>>> Frank Kloeker wrote: >>>>>> Hi Jimmy, >>>>>> >>>>>> Korean and German version are now done on the new format. Can you >>>>>> check publishing? 
>>>>>> >>>>>> thx >>>>>> >>>>>> Frank >>>>>> >>>>>> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >>>>>> Hi all - >>>>>> >>>>>> Follow up on the Edge paper specifically: >>>>>> >>>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>>> >>>>>> [4] This is now available. As I mentioned on IRC this morning, it >>>>>> should >>>>>> be VERY close to the PDF. Probably just needs a quick review. >>>>>> >>>>>> Let me know if I can assist with anything. >>>>>> >>>>>> Thank you to i18n team for all of your help!!! >>>>>> >>>>>> Cheers, >>>>>> Jimmy >>>>>> >>>>>> Jimmy McArthur wrote: >>>>>> Ian raises some great points :) I'll try to address below... >>>>>> >>>>>> Ian Y. Choi wrote: >>>>>> Hello, >>>>>> >>>>>> When I saw overall translation source strings on container >>>>>> whitepaper, I would infer that new edge computing whitepaper >>>>>> source strings would include HTML markup tags. >>>>>> One of the things I discussed with Ian and Frank in Vancouver is >>>>>> the expense of recreating PDFs with new translations. It's >>>>>> prohibitively expensive for the Foundation as it requires design >>>>>> resources which we just don't have. As a result, we created the >>>>>> Containers whitepaper in HTML, so that it could be easily updated >>>>>> w/o working with outside design contractors. I indicated that we >>>>>> would also be moving the Edge paper to HTML so that we could prevent >>>>>> that additional design resource cost. >>>>>> On the other hand, the source strings of edge computing whitepaper >>>>>> which I18n team previously translated do not include HTML markup >>>>>> tags, since the source strings are based on just text format. >>>>>> The version that Akihiro put together was based on the Edge PDF, >>>>>> which we unfortunately didn't have the resources to implement in the >>>>>> same format. 
>>>>>> >>>>>> I really appreciate Akihiro's work on RST-based support on >>>>>> publishing translated edge computing whitepapers, since >>>>>> translators do not have to re-translate all the strings. >>>>>> I would like to second this. It took a lot of initiative to work on >>>>>> the RST-based translation. At the moment, it's just not usable for >>>>>> the reasons mentioned above. >>>>>> On the other hand, it seems that I18n team needs to investigate on >>>>>> translating similar strings of HTML-based edge computing whitepaper >>>>>> source strings, which would discourage translators. >>>>>> Can you expand on this? I'm not entirely clear on why the HTML >>>>>> based translation is more difficult. >>>>>> >>>>>> That's my point of view on translating edge computing whitepaper. >>>>>> >>>>>> For translating container whitepaper, I want to further ask the >>>>>> followings since *I18n-based tools* >>>>>> would mean for translators that translators can test and publish >>>>>> translated whitepapers locally: >>>>>> >>>>>> - How to build translated container whitepaper using original >>>>>> Silverstripe-based repository? >>>>>> https://docs.openstack.org/i18n/latest/tools.html [5] describes >>>>>> well how to build translated artifacts for RST-based OpenStack >>>>>> repositories >>>>>> but I could not find the way how to build translated container >>>>>> whitepaper with translated resources on Zanata. >>>>>> This is a little tricky. It's possible to set up a local version >>>>>> of the OpenStack website >>>>>> >>>>> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>>> >>>>>> [6]). However, we have to manually ingest the po files as they are >>>>>> completed and then push them out to production, so that wouldn't do >>>>>> much to help with your local build. I'm open to suggestions on how >>>>>> we can make this process easier for the i18n team. 
>>>>>> >>>>>> Thank you, >>>>>> Jimmy >>>>>> >>>>>> With many thanks, >>>>>> >>>>>> /Ian >>>>>> >>>>>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>>>> Frank, >>>>>> >>>>>> I'm sorry to hear about the displeasure around the Edge paper. As >>>>>> mentioned in a prior thread, the RST format that Akihiro worked did >>>>>> not work with the Zanata process that we have been using with our >>>>>> CMS. Additionally, the existing EDGE page is a PDF, so we had to >>>>>> build a new template to work with the new HTML whitepaper layout we >>>>>> created for the Containers paper. I outlined this in the thread " >>>>>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>>>>> Whitepaper Translation" on 6/25/18 and mentioned we would be ready >>>>>> with the template around 7/13. >>>>>> >>>>>> We completed the work on the new whitepaper template and then put >>>>>> out the pot files on Zanata so we can get the po language files >>>>>> back. If this process is too cumbersome for the translation team, >>>>>> I'm open to discussion, but right now our entire translation process >>>>>> is based on the official OpenStack Docs translation process outlined >>>>>> by the i18n team: >>>>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html [7] >>>>>> >>>>>> Again, I realize Akihiro put in some work on his own proposing the >>>>>> new translation type. If the i18n team is moving to this format >>>>>> instead, we can work on redoing our process. >>>>>> >>>>>> Please let me know if I can clarify further. >>>>>> >>>>>> Thanks, >>>>>> Jimmy >>>>>> >>>>>> Frank Kloeker wrote: >>>>>> Hi Jimmy, >>>>>> >>>>>> permission was added for you and Sebastian. The Container Whitepaper >>>>>> is on the Zanata frontpage now. But we removed Edge Computing >>>>>> whitepaper last week because there is a kind of displeasure in the >>>>>> team since the results of translation are still not published beside >>>>>> Chinese version. 
It would be nice if we have a commitment from the >>>>>> Foundation that results are published in a specific timeframe. This >>>>>> includes your requirements until the translation should be >>>>>> available. >>>>>> >>>>>> thx Frank >>>>>> >>>>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>>>> Sorry, I should have also added... we additionally need permissions >>>>>> so >>>>>> that we can add the a new version of the pot file to this project: >>>>>> >>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>> >>>>>> [8] Thanks! >>>>>> Jimmy >>>>>> >>>>>> Jimmy McArthur wrote: >>>>>> Hi all - >>>>>> >>>>>> We have both of the current whitepapers up and available for >>>>>> translation. Can we promote these on the Zanata homepage? >>>>>> >>>>>> >>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>> >>>>>> [9] >>>>>> >>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>> >>>>>> [10] Thanks all! 
>>>>>> Jimmy >>>>>> >>>>>> >>>>> __________________________________________________________________________ >>>>> >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> [12] >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> [12] >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> [12] >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> [12] >>>>> >>>>> >>>>> >>>>> Links: >>>>> ------ >>>>> [1] >>>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>>> [2] >>>>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>>> [3] >>>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>>> [4] >>>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>>> [5] 
https://docs.openstack.org/i18n/latest/tools.html >>>>> [6] >>>>> https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>>> [7] https://docs.openstack.org/i18n/latest/en_GB/tools.html >>>>> [8] >>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>> [9] >>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>> [10] >>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>> [11] >>>>> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> [12] >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > ------------------------------ Message: 21 Date: Mon, 6 Aug 2018 21:37:12 +0200 From: Spyros Trigazis To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [release][requirements][python-magnumclient] Magnumclient FFE Message-ID: Content-Type: text/plain; charset="utf-8" It is constraints only. There is no project that requires the new version. Spyros On Mon, 6 Aug 2018, 19:36 Matthew Thode, wrote: > On 18-08-06 18:34:42, Spyros Trigazis wrote: > > Hello, > > > > I have requested a release for python-magnumclient [0]. > > Per Doug Hellmann's comment in [0], I am requesting a FFE for > > python-magnumclient. > > > > My question to you is if this needs to be a constraints only thing or if > there is some project that REQUIRES this new version to work (in which > case that project needs to update it's exclusions or minumum). 
> > -- > Matthew Thode (prometheanfire) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 22 Date: Mon, 6 Aug 2018 15:06:23 -0500 From: Matthew Thode To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [release][requirements][python-magnumclient] Magnumclient FFE Message-ID: <20180806200623.vwsepip3mh2wpa6i at gentoo.org> Content-Type: text/plain; charset="utf-8" On 18-08-06 21:37:12, Spyros Trigazis wrote: > It is constraints only. There is no project > that requires the new version. > > Spyros > > On Mon, 6 Aug 2018, 19:36 Matthew Thode, wrote: > > > On 18-08-06 18:34:42, Spyros Trigazis wrote: > > > Hello, > > > > > > I have requested a release for python-magnumclient [0]. > > > Per Doug Hellmann's comment in [0], I am requesting a FFE for > > > python-magnumclient. > > > > > > > My question to you is if this needs to be a constraints only thing or if > > there is some project that REQUIRES this new version to work (in which > > case that project needs to update its exclusions or minimum). > > Has my ack then -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: ------------------------------ Message: 23 Date: Mon, 6 Aug 2018 15:44:00 -0500 From: Miguel Lavalle To: OpenStack Development Mailing List Subject: [openstack-dev] [neutron] Bug deputy report week July 30th - August 5th Message-ID: Content-Type: text/plain; charset="utf-8" Dear Neutron Team, I was the bugs deputy for the week of July 30th - August 6th (inclusive, so bcafarel has to start on the 7th). Here's the summary of the bugs that were filed: High: https://bugs.launchpad.net/neutron/+bug/1785656 test_internal_dns.InternalDNSTest fails even though dns-integration extension isn't loaded. Proposed fixes: https://review.openstack.org/#/c/589247, https://review.openstack.org/#/c/589255 Medium: https://bugs.launchpad.net/neutron/+bug/1784837 Test tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_in_tenant_traffic fails in neutron-tempest-dvr-ha-multinode-full job https://bugs.launchpad.net/neutron/+bug/1784836 Functional tests from neutron.tests.functional.db.migrations fail randomly https://bugs.launchpad.net/neutron/+bug/1785582 Connectivity to instance after L3 router migration from Legacy to HA fails. Assigned to Slawek Low: https://bugs.launchpad.net/neutron/+bug/1785025 Install and configure controller node in Neutron https://bugs.launchpad.net/neutron/+bug/1784586 Networking guide doesn't clarify that subnets inherit the RBAC policies of their network. 
Fix: https://review.openstack.org/#/c/588844 In discussion: https://bugs.launchpad.net/neutron/+bug/1784484 intermittent issue getting assigned MACs for SRIOV nics, causes nova timeout https://bugs.launchpad.net/neutron/+bug/1784259 Neutron RBAC not working for multiple extensions https://bugs.launchpad.net/neutron/+bug/1785615 DNS resolution through eventlet contact nameservers if there's an IPv4 or IPv6 entry present in hosts file RFEs: https://bugs.launchpad.net/neutron/+bug/1784879 Neutron doesn't update Designate with some use cases https://bugs.launchpad.net/neutron/+bug/1784590 neutron-dynamic-routing bgp agent should have options for MP-BGP https://bugs.launchpad.net/neutron/+bug/1785608 [RFE] neutron ovs agent support baremetal port using smart nic Invalid: https://bugs.launchpad.net/neutron/+bug/1784950 get_device_details RPC fails if host not specified https://bugs.launchpad.net/neutron/+bug/1785189 Floatingip and router bandwidth speed limit failure Incomplete: https://bugs.launchpad.net/neutron/+bug/1785349 policy.json does not contain rule for auto-allocated-topologies removal https://bugs.launchpad.net/neutron/+bug/1785539 Some notifications related to l3 flavor pass context Best regards -------------- next part -------------- An HTML attachment was scrubbed... 
URL: ------------------------------ Message: 24 Date: Mon, 6 Aug 2018 16:52:04 -0400 From: Zane Bitter To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate Message-ID: <658519a3-e79e-cdac-6f55-7dad77df043b at redhat.com> Content-Type: text/plain; charset=utf-8; format=flowed On 06/08/18 13:11, Thomas Goirand wrote: > On 08/02/2018 10:43 AM, Andrey Kurilin wrote: >> There's also some "raise StopIteration" issues in: >> - ceilometer >> - cinder >> - designate >> - glance >> - glare >> - heat >> - karbor >> - manila >> - murano >> - networking-ovn >> - neutron-vpnaas >> - nova >> - rally >> >> >> Can you provide any traceback or steps to reproduce the issue for Rally >> project ? I assume Thomas is only trying to run the unit tests, since that's what he has to do to verify the package? > I'm not sure there's any. The only thing I know is that it has stop > StopIteration stuff, but I'm not sure if they are part of generators, in > which case they should simply be replaced by "return" if you want it to > be py 3.7 compatible. I was about to say nobody is doing 'raise StopIteration' where they mean 'return' until I saw that the Glance tests apparently were :D The main issue though is when StopIteration is raised by one thing that happens to be called from *another* generator. e.g. many of the Heat tests that are failing are because we supplied a too-short list of side-effects to a mock and calling next() on them raises StopIteration, but because the calls were happening from inside a generator the StopIterations previously just got swallowed. If no generator were involved the test would have failed with the StopIteration exception. (Note: this was a bug - either in the code or more likely the tests. The purpose of the change in py37 was to expose this kind of bug wherever it exists.) 
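Zane's point about leaked StopIteration can be reproduced in a few lines. This is a minimal sketch (hypothetical names; the `iter([1, 2])` stands in for a mock whose side_effect list is one item too short):

```python
def drain(source):
    # Generator that pulls values from an iterator it was handed.
    # When `source` runs dry, next() raises StopIteration from *inside*
    # this generator's body.
    while True:
        yield next(source)

try:
    # Pre-3.7 behavior: the leaked StopIteration silently terminated
    # drain(), so the too-short source went unnoticed and the result
    # was simply [1, 2].
    items = list(drain(iter([1, 2])))
    outcome = "swallowed: %r" % (items,)
except RuntimeError:
    # Python 3.7 (PEP 479 on by default): the leaked StopIteration is
    # converted to RuntimeError, exposing the bug instead of hiding it.
    outcome = "RuntimeError"

print(outcome)
```

On 3.5/3.6 (without `from __future__ import generator_stop`) the same code quietly yields the short list, which is exactly how a too-short mock side_effect list could go unnoticed in tests.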
> I didn't have time to investigate these, but at least Glance was > affected, and a patch was sent (as well as an async patch). None of them > has been merged yet: > > https://review.openstack.org/#/c/586050/ > https://review.openstack.org/#/c/586716/ > > That'd be ok if at least there were some reviews. It looks like nobody > cares but Debian & Ubuntu people... :( > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > ------------------------------ Message: 25 Date: Mon, 06 Aug 2018 16:56:45 -0400 From: Doug Hellmann To: openstack-dev Subject: [openstack-dev] [python3] champions, please review the updated process Message-ID: <1533588883-sup-4209 at lrrr.local> Content-Type: text/plain; charset=UTF-8 I have updated the README.rst in the goal-tools repository with an updated process for preparing, proposing, and tracking the zuul migration patches. I need the other champions to look over the instructions and let me know if any parts are confusing or incomplete. Please do that as soon as you can, so we can be prepared to start generating patches after the final release for Rocky is done. http://git.openstack.org/cgit/openstack/goal-tools/tree/README.rst#n22 Doug ------------------------------ Message: 26 Date: Mon, 6 Aug 2018 15:55:05 -0600 From: Wesley Hayutin To: "OpenStack Development Mailing List (not for usage questions)" Cc: Derek Higgins , Kieran Forde Subject: Re: [openstack-dev] [tripleo] 3rd party ovb jobs are down Message-ID: Content-Type: text/plain; charset="utf-8" On Mon, Aug 6, 2018 at 12:56 PM Wesley Hayutin wrote: > Greetings, > > There is currently an unplanned outage atm for the tripleo 3rd party OVB > based jobs. > We will contact the list when there are more details. 
> > Thank you! > OK, I'm going to call an end to the current outage. We are closely monitoring the ovb 3rd party jobs. I called the outage when we hit [1]. Once I deleted the stack that moved the HA routers to back_up state, the networking came back online. Additionally Kieran and I had to work through a number of instances that required admin access to remove. Once those resources were cleaned up our CI tooling removed the rest of the stacks in delete_failed status. The stacks in delete_failed status were holding IP addresses that were causing new stacks to fail [2]. There are still active issues that could cause OVB jobs to fail. This connection issue [3] was originally thought to be DNS, however that turned out not to be the case. You may also see your job have a "node_failure" status; Paul has sent updates about this issue and is working on a patch and integration into rdo software factory. The CI team is close to including all the console logs into the regular job logs, however if needed atm they can be viewed at [5]. We are also adding the bmc to the list of instances that we collect logs from. *To summarize* the most recent outage was infra related and the errors were swallowed up in the bmc console log that at the time was not available to users. We continue to monitor the ovb jobs at http://cistatus.tripleo.org/ The legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master job is at a 53% pass rate; it needs to move to a > 85% pass rate to match other check jobs. Thanks all! 
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1570136 [2] http://paste.openstack.org/show/727444/ [3] https://bugs.launchpad.net/tripleo/+bug/1785342 [4] https://review.openstack.org/#/c/584488/ [5] http://38.145.34.41/console-logs/?C=M;O=D > > -- > > Wes Hayutin > > Associate MANAGER > > Red Hat > > > > w hayutin at redhat.com T: +1919 <+19197544114> > 4232509 IRC: weshay > > > View my calendar and check my availability for meetings HERE > > -- Wes Hayutin Associate MANAGER Red Hat w hayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 27 Date: Mon, 6 Aug 2018 17:03:28 -0500 From: Matt Riedemann To: "OpenStack Development Mailing List (not for usage questions)" , "openstack-operators at lists.openstack.org" , openstack-sigs at lists.openstack.org Subject: [openstack-dev] [nova] StarlingX diff analysis Message-ID: Content-Type: text/plain; charset=utf-8; format=flowed In case you haven't heard, there was this StarlingX thing announced at the last summit. I have gone through the enormous nova diff in their repo and the results are in a spreadsheet [1]. Given the enormous spreadsheet (see a pattern?), I have further refined that into a set of high-level charts [2]. I suspect there might be some negative reactions to even doing this type of analysis lest it might seem like promoting throwing a huge pile of code over the wall and expecting the OpenStack (or more specifically the nova) community to pick it up. That's not my intention at all, nor do I expect nova maintainers to be responsible for upstreaming any of this. This is all educational to figure out what the major differences and overlaps are and what could be constructively upstreamed from the starlingx staging repo since it's not all NFV and Edge dragons in here, there are some legitimate bug fixes and good ideas. 
I'm sharing it because I want to feel like my time spent on this in the last week wasn't all for nothing. [1] https://docs.google.com/spreadsheets/d/1ugp1FVWMsu4x3KgrmPf7HGX8Mh1n80v-KVzweSDZunU/edit?usp=sharing [2] https://docs.google.com/presentation/d/1P-__JnxCFUbSVlEoPX26Jz6VaOyNg-jZbBsmmKA2f0c/edit?usp=sharing -- Thanks, Matt ------------------------------ Message: 28 Date: Mon, 6 Aug 2018 15:16:31 -0700 From: "Nadathur, Sundar" To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [Cyborg] Agent - Conductor update Message-ID: Content-Type: text/plain; charset="utf-8"; Format="flowed" Hi,    The Cyborg agent in a compute node collects information about devices from the Cyborg drivers on that node. It then needs to push that information to the Cyborg conductor in the controller, which then needs to persist it in the Cyborg db and update Placement. Further, the agent needs to collect and update this information periodically (or possibly in response to notifications) to handle hot add/delete of devices, reprogramming (for FPGAs), health failure of devices, etc. In this morning's call, we discussed how to do this periodic update [1]. In particular, we talked about how to compute the difference between the previous device configuration in a compute node and the current one, whether the agent should do that diff or the controller, etc. Since there are many fields per device, and they are tree-structured, the complexity of doing the diff seemed large. On taking a closer look, however, the amount of computation needed to do the update is not huge. Say, for discussion's sake, that the controller has a snapshot of the entire device config for a specific compute node, i.e. an array of device structures NewConfig[]. It reads the current list of devices for that node from the db, CurrentConfig[]. 
Then the controller's logic is like this: * Determine the list of devices in NewConfig[] but not in CurrentConfig[] (this is a set difference in Python [2]): they are the newly added ones. For each newly added device, do a single transaction to add all the fields to the db together. * Determine the list of devices in CurrentConfig[] but not in NewConfig[]: they are the deleted devices. For each such device, do a single transaction to delete that entry. * For each modified device, compute what has changed, and update that alone. This is the per-field diff. Say each field in the device structure is a string of 100 characters, and it takes 1 nanosecond to add, delete or modify a character. So, each field takes 100 ns to update (add/delete/modify). Say 20 fields per device: so 2 us to add, delete or modify a device. Say 10 devices per compute node: so 20 us per node. 500 nodes will take 10 milliseconds. So, if each node sends a refresh every second, the controller will spend a very small fraction of that time in updating the db, even including transaction costs, set difference computation, etc. This back-of-the-envelope calculation shows that we need not try to optimize too early: the agent should send the entire device config over to the controller, and let it update the db per-device and per-field. [1] https://etherpad.openstack.org/p/cyborg-rocky-development [2] https://docs.python.org/2/library/sets.html Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... 
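The three bullets above amount to two set differences plus a per-field comparison over the intersection. A minimal sketch, assuming devices are keyed by some stable identifier such as a UUID (the field names and values here are made up for illustration):

```python
def diff_configs(current, new):
    """Split a refreshed device snapshot into added/deleted/modified id sets.

    `current` and `new` map a stable device id to a dict of fields,
    standing in for CurrentConfig[] and NewConfig[] in the discussion.
    """
    current_ids, new_ids = set(current), set(new)
    added = new_ids - current_ids                      # hot-added devices
    deleted = current_ids - new_ids                    # hot-removed devices
    modified = {dev_id for dev_id in current_ids & new_ids
                if current[dev_id] != new[dev_id]}     # per-field diff
    return added, deleted, modified

current = {
    "dev-a": {"type": "FPGA", "function": "crypto"},
    "dev-b": {"type": "GPU", "function": "compute"},
}
new = {
    "dev-b": {"type": "GPU", "function": "graphics"},    # reprogrammed
    "dev-c": {"type": "FPGA", "function": "inference"},  # hot-added
}
added, deleted, modified = diff_configs(current, new)
# added == {"dev-c"}, deleted == {"dev-a"}, modified == {"dev-b"}
```

(The [2] link above points at the legacy Python 2 `sets` module; the built-in `set` type used here is its modern equivalent.)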
URL: ------------------------------ Message: 29 Date: Tue, 7 Aug 2018 10:20:45 +1000 From: Tony Breeds To: OpenStack Development Mailing List Subject: [openstack-dev] [all][election][senlin][tacker] Last chance to vote Message-ID: <20180807002010.GB9540 at thor.bakeyournoodle.com> Content-Type: text/plain; charset="utf-8" Hello Senlin and Tacker contributors, Just a quick reminder that elections are closing soon, if you haven't already you should use your right to vote and pick your favourite candidate! You have until Aug 07, 2018 23:45 UTC. Thanks for your time! Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: ------------------------------ Message: 30 Date: Tue, 7 Aug 2018 11:17:14 +0900 From: Masahito MUROI To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] [Blazar] Stein etherpad Message-ID: Content-Type: text/plain; charset=utf-8; format=flowed Hi Blazar folks, I prepared the etherpad page for the Stein PTG. https://etherpad.openstack.org/p/blazar-ptg-stein best regards, Masahito ------------------------------ Message: 31 Date: Tue, 7 Aug 2018 14:34:45 +1000 From: Tony Breeds To: Andreas Jaeger Cc: "OpenStack Development Mailing List \(not for usage questions\)" Subject: Re: [openstack-dev] [tripleo] EOL process for newton branches Message-ID: <20180807043445.GH9540 at thor.bakeyournoodle.com> Content-Type: text/plain; charset="utf-8" On Mon, Aug 06, 2018 at 07:27:37PM +0200, Andreas Jaeger wrote: > Tony, > > On 2018-07-19 06:59, Tony Breeds wrote: > > On Wed, Jul 18, 2018 at 08:08:16PM -0400, Emilien Macchi wrote: > > > Option 2, EOL everything. > > > Thanks a lot for your help on this one, Tony. > > > > No problem. > > > > I've created: > > https://review.openstack.org/583856 > > to tag final releases for tripleo deliverables and then mark them as > > EOL. > > This one has merged now. 
Thanks. > > > > Once that merges we can arrange for someone, with appropriate > > permissions to run: > > > > # EOL repos belonging to tripleo > > eol_branch.sh -- stable/newton newton-eol \ > > openstack/instack openstack/instack-undercloud \ > > openstack/os-apply-config openstack/os-collect-config \ > > openstack/os-net-config openstack/os-refresh-config \ > > openstack/puppet-tripleo openstack/python-tripleoclient \ > > openstack/tripleo-common openstack/tripleo-heat-templates \ > > openstack/tripleo-image-elements \ > > openstack/tripleo-puppet-elements openstack/tripleo-ui \ > > openstack/tripleo-validations > > Tony, will you coordinate with infra to run this yourself again - or let > them run it for you, please? I'm happy with either option. If it hasn't been run when I get online tomorrow I'll ask on #openstack-infra and I'll do it myself. > Note that we removed the script with retiring release-tools repo, I propose > to readd with https://review.openstack.org/589236 and > https://review.openstack.org/589237 and would love your review on these, > please. I want to be sure that we import the right version... Thanks for doing that! LGTM +1 :) Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: ------------------------------ Message: 32 Date: Tue, 7 Aug 2018 15:08:10 +1000 From: Tony Breeds To: Andreas Jaeger Cc: "OpenStack Development Mailing List \(not for usage questions\)" Subject: Re: [openstack-dev] [tripleo] EOL process for newton branches Message-ID: <20180807050810.GI9540 at thor.bakeyournoodle.com> Content-Type: text/plain; charset="utf-8" On Tue, Aug 07, 2018 at 02:34:45PM +1000, Tony Breeds wrote: > On Mon, Aug 06, 2018 at 07:27:37PM +0200, Andreas Jaeger wrote: > > Tony, > > > > On 2018-07-19 06:59, Tony Breeds wrote: > > > On Wed, Jul 18, 2018 at 08:08:16PM -0400, Emilien Macchi wrote: > > > > Option 2, EOL everything. > > > > Thanks a lot for your help on this one, Tony. > > > > > > No problem. > > > > > > I've created: > > > https://review.openstack.org/583856 > > > to tag final releases for tripleo deliverables and then mark them as > > > EOL. > > > > This one has merged now. > > Thanks. > > > > > > > Once that merges we can arrange for someone, with appropriate > > > permissions to run: > > > > > > # EOL repos belonging to tripleo > > > eol_branch.sh -- stable/newton newton-eol \ > > > openstack/instack openstack/instack-undercloud \ > > > openstack/os-apply-config openstack/os-collect-config \ > > > openstack/os-net-config openstack/os-refresh-config \ > > > openstack/puppet-tripleo openstack/python-tripleoclient \ > > > openstack/tripleo-common openstack/tripleo-heat-templates \ > > > openstack/tripleo-image-elements \ > > > openstack/tripleo-puppet-elements openstack/tripleo-ui \ > > > openstack/tripleo-validations > > > > Tony, will you coordinate with infra to run this yourself again - or let > > them run it for you, please? > > I'm happy with either option. If it hasn't been run when I get online > tomorrow I'll ask on #openstack-infra and I'll do it myself. Okay Ian gave me permission to do this. 
Those repos have been tagged newton-eol and had the branches deleted. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 484 bytes Desc: not available URL: ------------------------------ Message: 33 Date: Tue, 7 Aug 2018 08:10:53 +0200 From: Flint WALRUS To: Matt Riedemann Cc: "OpenStack Development Mailing List \(not for usage questions\)" , openstack-sigs at lists.openstack.org, "openstack-operators at lists.openstack.org" Subject: Re: [openstack-dev] [Openstack-operators] [nova] StarlingX diff analysis Message-ID: Content-Type: text/plain; charset="utf-8" Hi Matt, everyone, I just read your analysis and would like to thank you for such thorough work. I really think there are numerous features included/used in this Nova rework that would be highly beneficial for Nova and its users. I hope people will fairly appreciate your work. I didn't have time to check StarlingX code quality, how did you find it while you were doing your analysis? Thanks a lot for sharing this. I'll have a closer look at it this afternoon, as my company may be interested in some features. Kind regards, G. On Tue, 7 Aug 2018 at 00:03, Matt Riedemann wrote: > In case you haven't heard, there was this StarlingX thing announced at > the last summit. I have gone through the enormous nova diff in their > repo and the results are in a spreadsheet [1]. Given the enormous > spreadsheet (see a pattern?), I have further refined that into a set of > high-level charts [2]. > > I suspect there might be some negative reactions to even doing this type > of analysis lest it might seem like promoting throwing a huge pile of > code over the wall and expecting the OpenStack (or more specifically the > nova) community to pick it up. That's not my intention at all, nor do I > expect nova maintainers to be responsible for upstreaming any of this. 
> > This is all educational to figure out what the major differences and > overlaps are and what could be constructively upstreamed from the > starlingx staging repo since it's not all NFV and Edge dragons in here, > there are some legitimate bug fixes and good ideas. I'm sharing it > because I want to feel like my time spent on this in the last week > wasn't all for nothing. > > [1] > > https://docs.google.com/spreadsheets/d/1ugp1FVWMsu4x3KgrmPf7HGX8Mh1n80v-KVzweSDZunU/edit?usp=sharing > [2] > > https://docs.google.com/presentation/d/1P-__JnxCFUbSVlEoPX26Jz6VaOyNg-jZbBsmmKA2f0c/edit?usp=sharing > > -- > > Thanks, > > Matt > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 34 Date: Tue, 7 Aug 2018 08:38:45 +0200 From: Andreas Jaeger To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [tripleo] EOL process for newton branches Message-ID: <441e4cb8-10ee-2797-ee5c-fd6d212d3bc5 at suse.com> Content-Type: text/plain; charset="utf-8" On 2018-08-07 07:08, Tony Breeds wrote: > Okay Ian gave me permission to do this. Those repos have been tagged > newton-eol and had the branches deleted. Thanks, Tony! Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 ------------------------------ Message: 35 Date: Tue, 07 Aug 2018 09:49:17 +0200 From: Frank Kloeker To: Jimmy McArthur Cc: "OpenStack Development Mailing List \(not for usage questions\)" , Ildiko Vancsa , Sebastian Marcet Subject: Re: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation Message-ID: <4ef95237972fb567d9eaebba82bf9066 at arcor.de> Content-Type: text/plain; charset=US-ASCII; format=flowed Many thanks, Jimmy! Lastly, I'd like to draw your attention to Stackalytics. Translation metrics for whitepapers are not counted there. Maybe you have advice for https://review.openstack.org/#/c/588965/ kind regards Frank On 2018-08-06 21:07, Jimmy McArthur wrote: > A heads up that the Translators are now listed at the bottom of the > page as well, along with the rest of the paper contributors: > > https://www.openstack.org/edge-computing/cloud-edge-computing-beyond-the-data-center?lang=ja_JP > > Cheers! > Jimmy > > Frank Kloeker wrote: >> Hi Jimmy, >> >> thanks for announcement. Great stuff! It looks really great and it's >> easy to navigate. I think a special thanks goes to Sebastian for >> designing the pages. One small remark: have you tried text-align: >> justify? I think it would be a little bit more readable, like a >> science paper (German word is: Ordnung) >> I put the projects again on the frontpage of the translation platform, >> so we'll get more translations shortly. >> >> kind regards >> >> Frank >> >> On 2018-08-02 21:07, Jimmy McArthur wrote: >>> The Edge and Containers translations are now live. As new >>> translations become available, we will add them to the page. 
>>> >>> https://www.openstack.org/containers/ >>> https://www.openstack.org/edge-computing/ >>> >>> Note that the Chinese translation has not been added to Zanata at >>> this >>> time, so I've left the PDF download up on that page. >>> >>> Thanks everyone and please let me know if you have questions or >>> concerns! >>> >>> Cheers! >>> Jimmy >>> >>> Jimmy McArthur wrote: >>>> Frank, >>>> >>>> We expect to have these papers up this afternoon. I'll update this >>>> thread when we do. >>>> >>>> Thanks! >>>> Jimmy >>>> >>>> Frank Kloeker wrote: >>>>> Hi Sebastian, >>>>> >>>>> okay, it's translated now. In Edge whitepaper is the problem with >>>>> XML-Parsing of the term AT&T. Don't know how to escape this. Maybe >>>>> you will see the warning during import too. >>>>> >>>>> kind regards >>>>> >>>>> Frank >>>>> >>>>> Am 2018-07-30 20:09, schrieb Sebastian Marcet: >>>>>> Hi Frank, >>>>>> i was double checking pot file and realized that original pot >>>>>> missed >>>>>> some parts of the original paper (subsections of the paper) >>>>>> apologizes >>>>>> on that >>>>>> i just re uploaded an updated pot file with missing subsections >>>>>> >>>>>> regards >>>>>> >>>>>> On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker >>>>>> wrote: >>>>>> >>>>>>> Hi Jimmy, >>>>>>> >>>>>>> from the GUI I'll get this link: >>>>>>> >>>>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>>>>> [1] >>>>>>> >>>>>>> paper version are only in container whitepaper: >>>>>>> >>>>>>> >>>>>> https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>>>>> [2] >>>>>>> >>>>>>> In general there is no group named papers >>>>>>> >>>>>>> kind regards >>>>>>> >>>>>>> Frank >>>>>>> >>>>>>> Am 2018-07-30 17:06, schrieb Jimmy McArthur: >>>>>>> Frank, >>>>>>> >>>>>>> We're getting a 404 when looking for the pot file on the Zanata >>>>>>> API: 
>>>>>>> >>>>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>>>>> [3] >>>>>>> >>>>>>> As a result, we can't pull the po files. Any idea what might be >>>>>>> happening? >>>>>>> >>>>>>> Seeing the same thing with both papers... >>>>>>> >>>>>>> Thank you, >>>>>>> Jimmy >>>>>>> >>>>>>> Frank Kloeker wrote: >>>>>>> Hi Jimmy, >>>>>>> >>>>>>> Korean and German version are now done on the new format. Can you >>>>>>> check publishing? >>>>>>> >>>>>>> thx >>>>>>> >>>>>>> Frank >>>>>>> >>>>>>> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >>>>>>> Hi all - >>>>>>> >>>>>>> Follow up on the Edge paper specifically: >>>>>>> >>>>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>>>>> [4] This is now available. As I mentioned on IRC this morning, it >>>>>>> should >>>>>>> be VERY close to the PDF. Probably just needs a quick review. >>>>>>> >>>>>>> Let me know if I can assist with anything. >>>>>>> >>>>>>> Thank you to i18n team for all of your help!!! >>>>>>> >>>>>>> Cheers, >>>>>>> Jimmy >>>>>>> >>>>>>> Jimmy McArthur wrote: >>>>>>> Ian raises some great points :) I'll try to address below... >>>>>>> >>>>>>> Ian Y. Choi wrote: >>>>>>> Hello, >>>>>>> >>>>>>> When I saw overall translation source strings on container >>>>>>> whitepaper, I would infer that new edge computing whitepaper >>>>>>> source strings would include HTML markup tags. >>>>>>> One of the things I discussed with Ian and Frank in Vancouver is >>>>>>> the expense of recreating PDFs with new translations. It's >>>>>>> prohibitively expensive for the Foundation as it requires design >>>>>>> resources which we just don't have. As a result, we created the >>>>>>> Containers whitepaper in HTML, so that it could be easily updated >>>>>>> w/o working with outside design contractors. 
I indicated that we >>>>>>> would also be moving the Edge paper to HTML so that we could >>>>>>> prevent >>>>>>> that additional design resource cost. >>>>>>> On the other hand, the source strings of edge computing >>>>>>> whitepaper >>>>>>> which I18n team previously translated do not include HTML markup >>>>>>> tags, since the source strings are based on just text format. >>>>>>> The version that Akihiro put together was based on the Edge PDF, >>>>>>> which we unfortunately didn't have the resources to implement in >>>>>>> the >>>>>>> same format. >>>>>>> >>>>>>> I really appreciate Akihiro's work on RST-based support on >>>>>>> publishing translated edge computing whitepapers, since >>>>>>> translators do not have to re-translate all the strings. >>>>>>> I would like to second this. It took a lot of initiative to work >>>>>>> on >>>>>>> the RST-based translation. At the moment, it's just not usable >>>>>>> for >>>>>>> the reasons mentioned above. >>>>>>> On the other hand, it seems that I18n team needs to investigate >>>>>>> on >>>>>>> translating similar strings of HTML-based edge computing >>>>>>> whitepaper >>>>>>> source strings, which would discourage translators. >>>>>>> Can you expand on this? I'm not entirely clear on why the HTML >>>>>>> based translation is more difficult. >>>>>>> >>>>>>> That's my point of view on translating edge computing whitepaper. >>>>>>> >>>>>>> For translating container whitepaper, I want to further ask the >>>>>>> followings since *I18n-based tools* >>>>>>> would mean for translators that translators can test and publish >>>>>>> translated whitepapers locally: >>>>>>> >>>>>>> - How to build translated container whitepaper using original >>>>>>> Silverstripe-based repository? 
>>>>>>> https://docs.openstack.org/i18n/latest/tools.html [5] describes >>>>>>> well how to build translated artifacts for RST-based OpenStack >>>>>>> repositories >>>>>>> but I could not find the way how to build translated container >>>>>>> whitepaper with translated resources on Zanata. >>>>>>> This is a little tricky. It's possible to set up a local version >>>>>>> of the OpenStack website >>>>>>> >>>>>> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>>>>> [6]). However, we have to manually ingest the po files as they >>>>>>> are >>>>>>> completed and then push them out to production, so that wouldn't >>>>>>> do >>>>>>> much to help with your local build. I'm open to suggestions on >>>>>>> how >>>>>>> we can make this process easier for the i18n team. >>>>>>> >>>>>>> Thank you, >>>>>>> Jimmy >>>>>>> >>>>>>> With many thanks, >>>>>>> >>>>>>> /Ian >>>>>>> >>>>>>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>>>>> Frank, >>>>>>> >>>>>>> I'm sorry to hear about the displeasure around the Edge paper. >>>>>>> As >>>>>>> mentioned in a prior thread, the RST format that Akihiro worked >>>>>>> did >>>>>>> not work with the Zanata process that we have been using with >>>>>>> our >>>>>>> CMS. Additionally, the existing EDGE page is a PDF, so we had to >>>>>>> build a new template to work with the new HTML whitepaper layout >>>>>>> we >>>>>>> created for the Containers paper. I outlined this in the thread " >>>>>>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>>>>>> Whitepaper Translation" on 6/25/18 and mentioned we would be >>>>>>> ready >>>>>>> with the template around 7/13. >>>>>>> >>>>>>> We completed the work on the new whitepaper template and then put >>>>>>> out the pot files on Zanata so we can get the po language files >>>>>>> back. 
If this process is too cumbersome for the translation team, >>>>>>> I'm open to discussion, but right now our entire translation >>>>>>> process >>>>>>> is based on the official OpenStack Docs translation process >>>>>>> outlined >>>>>>> by the i18n team: >>>>>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html [7] >>>>>>> >>>>>>> Again, I realize Akihiro put in some work on his own proposing >>>>>>> the >>>>>>> new translation type. If the i18n team is moving to this format >>>>>>> instead, we can work on redoing our process. >>>>>>> >>>>>>> Please let me know if I can clarify further. >>>>>>> >>>>>>> Thanks, >>>>>>> Jimmy >>>>>>> >>>>>>> Frank Kloeker wrote: >>>>>>> Hi Jimmy, >>>>>>> >>>>>>> permission was added for you and Sebastian. The Container >>>>>>> Whitepaper >>>>>>> is on the Zanata frontpage now. But we removed Edge Computing >>>>>>> whitepaper last week because there is a kind of displeasure in >>>>>>> the >>>>>>> team since the results of translation are still not published >>>>>>> beside >>>>>>> Chinese version. It would be nice if we have a commitment from >>>>>>> the >>>>>>> Foundation that results are published in a specific timeframe. >>>>>>> This >>>>>>> includes your requirements until the translation should be >>>>>>> available. >>>>>>> >>>>>>> thx Frank >>>>>>> >>>>>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>>>>> Sorry, I should have also added... we additionally need >>>>>>> permissions >>>>>>> so >>>>>>> that we can add the a new version of the pot file to this >>>>>>> project: >>>>>>> >>>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>>>> [8] Thanks! >>>>>>> Jimmy >>>>>>> >>>>>>> Jimmy McArthur wrote: >>>>>>> Hi all - >>>>>>> >>>>>>> We have both of the current whitepapers up and available for >>>>>>> translation. Can we promote these on the Zanata homepage? 
>>>>>>> >>>>>>> >>>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>>>> [9] >>>>>>> >>>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>>>> [10] Thanks all! >>>>>>> Jimmy >>>>>>> >>>>>>> >>>>>> __________________________________________________________________________ >>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>> Unsubscribe: >>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>>>> [11] >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>> [12] >>>>>> >>>>>> __________________________________________________________________________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> [12] >>>>>> >>>>>> __________________________________________________________________________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> [12] >>>>>> >>>>>> __________________________________________________________________________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> [12] >>>>>> >>>>>> >>>>>> >>>>>> Links: >>>>>> ------ >>>>>> [1] >>>>>> https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >>>>>> [2] >>>>>> 
https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >>>>>> [3] >>>>>> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >>>>>> [4] >>>>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >>>>>> [5] https://docs.openstack.org/i18n/latest/tools.html >>>>>> [6] >>>>>> https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >>>>>> [7] https://docs.openstack.org/i18n/latest/en_GB/tools.html >>>>>> [8] >>>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>>> [9] >>>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>>> [10] >>>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>>> [11] >>>>>> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>>> [12] >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> ------------------------------ Message: 36 Date: Tue, 7 Aug 2018 16:15:26 +0800 (CST) From: "Frank Wang" To: "OpenStack Development Mailing List" Subject: [openstack-dev] [neutron] Does neutron support QinQ(vlan transparent) ? Message-ID: <6e1ff2b5.671a.16513747f10.Coremail.wangpeihuixyz at 126.com> Content-Type: text/plain; charset="gbk" Hello folks, I noted that the API already has the vlan_transparent attribute in the network, Do neutron-agents(linux-bridge, openvswitch) support QinQ? I didn't find any reference materials that could guide me on how to use or configure it. 
Thanks for your time reading this. Any comments would be appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 37 Date: Tue, 7 Aug 2018 12:28:03 +0100 (BST) From: Chris Dent To: OpenStack-dev at lists.openstack.org Subject: [openstack-dev] [tc] [all] TC Report 18-32 Message-ID: Content-Type: text/plain; charset="utf-8"; Format="flowed" HTML: https://anticdent.org/tc-report-18-32.html The TC discussions of interest in the past week have been related to the recent [PTL elections](https://governance.openstack.org/election/) and planning for the [forthcoming PTG](https://www.openstack.org/ptg). ## PTL Election Gaps A few official projects had no nominee for the PTL position. An [etherpad](https://etherpad.openstack.org/p/stein-leaderless) was created to track this, and most of the situations have been resolved. Pointers to some of the discussion: * [Near the end of nomination period](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-31.log.html#t2018-07-31T17:39:28). * [Discussion about Trove](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-02.log.html#t2018-08-02T13:59:11). There's quite a bit here about how we evaluate the health of a project and the value of volunteers, and for how long we are willing to extend grace periods for projects which have a history of imperfect health. * [What to do about RefStack](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-02.log.html#t2018-08-02T16:01:12) which evolved to become a discussion about the role of the QA team. * [Freezer and Searchlight](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-07.log.html#t2018-08-07T09:06:37). 
Where we (the TC) seem to have some minor disagreement is the extent to which we should be extending a lifeline to official projects which are (for whatever reason) struggling to keep up with responsibilities, or we should be using the power to remove official status as a way to highlight need. ## PTG Planning The PTG is a month away, so the TC is doing a bit of planning to prepare. There will be two different days during which the TC will meet: Sunday afternoon before the PTG, and all day Friday. Most planning is happening on [this etherpad](https://etherpad.openstack.org/p/tc-stein-ptg). There is also a specific etherpad about [the relationship between the TC and the Foundation and Foundation corporate members](https://etherpad.openstack.org/p/tc-board-foundation). And one for [post-lunch topics](https://etherpad.openstack.org/p/PTG4-postlunch). IRC links: * [Discussion about limiting the agenda](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-03.log.html#t2018-08-03T12:31:38). If there's any disagreement in this planning process, it is over whether we should focus our time on topics we have some chance of resolving or at least making some concrete progress on, or we should spend the time having open-ended discussions. Ideally there would be time for both, as the latter is required to develop the shared language that is needed to take real action. But, as is rampant in the community, we are constrained by time and other responsibilities. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent ------------------------------ Message: 38 Date: Tue, 7 Aug 2018 12:32:44 +0100 From: Sean Mooney To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [neutron] Does neutron support QinQ(vlan transparent) ? Message-ID: Content-Type: text/plain; charset="UTF-8" TL;DR it won't work with the ovs agent but "should" work with linux bridge. see full message below for details. regards sean. 
the linux bridge agent supports the vlan_transparent option only when creating networks with an l3 segmentation type, e.g. vxlan, gre... ovs using the neutron l2 agent does not support vlan_transparent networks because of how that agent uses vlans for tenant isolation on the br-int. it is possible to achieve vlan transparency with ovs using an sdn controller such as odl or ovn, but that was not what you asked in your question so i won't expand on that further. if you deploy openstack with linux bridge networking and then create a tenant network of type vxlan with vlan_transparent set to true, and your tenants generate QinQ traffic with an mtu reduced so that it will fit within the vxlan tunnel unfragmented, then yes, it should be possible. however, you may need to disable port_security/security groups on the port, as i'm not sure the iptables firewall driver will correctly handle this case. an alternative to disabling security groups would be to add an explicit rule that matched on the ethernet type and allowed QinQ traffic on ingress and egress from the vm. as far as i am aware this is not tested in the gate, so while it should work, the lack of documentation and test coverage means you will likely be one of the first to test it if you choose to do so, and it may fail for many reasons. On 7 August 2018 at 09:15, Frank Wang wrote: > Hello folks, > > I noted that the API already has the vlan_transparent attribute in the > network, Do neutron-agents(linux-bridge, openvswitch) support QinQ? I > didn't find any reference materials that could guide me on how to use or > configure it. > > Thank for your time reading this, Any comments would be appreciated. 
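To make the MTU point above concrete: a QinQ (802.1ad) frame carries two 4-byte VLAN tags, so a vlan-transparent tenant carrying QinQ traffic needs 8 bytes of extra header room on top of the usual vxlan overhead. A small illustrative sketch (plain Python, not neutron code; the function name and field layout are mine, for illustration only):

```python
import struct

def qinq_header(dst, src, s_vid, c_vid, ethertype=0x0800):
    """Build an 802.1ad (QinQ) Ethernet header: outer S-tag + inner C-tag.

    Each tag is 4 bytes (2-byte TPID + 2-byte TCI), so a QinQ frame
    carries 8 bytes more header than an untagged one, which is why the
    tenant MTU has to be reduced to fit inside the vxlan tunnel.
    """
    s_tci = s_vid & 0x0FFF  # priority/DEI bits left at zero
    c_tci = c_vid & 0x0FFF
    return (dst + src
            + struct.pack("!HH", 0x88A8, s_tci)   # outer tag, 802.1ad TPID
            + struct.pack("!HH", 0x8100, c_tci)   # inner tag, 802.1Q TPID
            + struct.pack("!H", ethertype))       # payload ethertype

hdr = qinq_header(b"\xff" * 6, b"\x02" * 6, s_vid=100, c_vid=200)
assert len(hdr) == 22  # 14-byte untagged header + two 4-byte tags
```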
> > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > ------------------------------ Message: 39 Date: Tue, 07 Aug 2018 13:48:36 +0200 From: Balázs Gibizer To: OpenStack-dev Subject: [openstack-dev] [nova]Notification update week 32 Message-ID: <1533642516.26377.2 at smtp.office365.com> Content-Type: text/plain; charset=us-ascii; format=flowed Hi, Here is the latest notification subteam update. Bugs ---- No RC potential notification bug is tracked. No new bug since last week. Weekly meeting -------------- No meeting is planned for this week. Cheers, gibi ------------------------------ Message: 40 Date: Tue, 7 Aug 2018 13:52:48 +0200 From: Thomas Goirand To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate Message-ID: Content-Type: text/plain; charset=utf-8 On 08/06/2018 09:02 PM, Sean McGinnis wrote: >> >> I didn't have time to investigate these, but at least Glance was >> affected, and a patch was sent (as well as an async patch). None of them >> has been merged yet: >> >> https://review.openstack.org/#/c/586050/ >> https://review.openstack.org/#/c/586716/ >> >> That'd be ok if at least there was some reviews. It looks like nobody >> cares but Debian & Ubuntu people... :( >> > > Keep in mind that your priorities are different than everyone elses. There are > large parts of the community still working on Python 3.5 support (our > officially supported Python 3 version), as well as smaller teams overall > working on things like critical bugs. > > Unless and until we declare Python 3.7 as our new target (which I don't think > we are ready to do yet), these kinds of patches will be on a best effort basis. 
This is exactly what I'm complaining about. OpenStack upstream has very wrong priorities. If we really are to switch to Python 3, then we've got to make sure we're current, because that's the version distros end up running. Or maybe we only care if "it works on devstack" (tm)? Cheers, Thomas Goirand (zigo) ------------------------------ Message: 41 Date: Tue, 7 Aug 2018 07:35:55 -0500 From: Monty Taylor To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] [requirements][release] FFE for openstacksdk 0.17.2 Message-ID: <082089a7-124e-2d20-77a5-8b5e9c0a8748 at inaugust.com> Content-Type: text/plain; charset=utf-8; format=flowed Hey all, I'd like to request an FFE to release 0.17.2 of openstacksdk from stable/rocky. Infra discovered an issue that affects the production nodepool related to the multi-threaded TaskManager and exception propagation. When it gets triggered, we lose an entire cloud of capacity (whoops) until we restart the associated nodepool-launcher process. Nothing in OpenStack uses the particular feature in openstacksdk in question (yet), so nobody should need to even bump constraints. Thanks! Monty ------------------------------ Message: 42 Date: Tue, 7 Aug 2018 14:24:44 +0100 From: Sean Mooney To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate Message-ID: Content-Type: text/plain; charset="UTF-8" On 7 August 2018 at 12:52, Thomas Goirand wrote: > On 08/06/2018 09:02 PM, Sean McGinnis wrote: >>> >>> I didn't have time to investigate these, but at least Glance was >>> affected, and a patch was sent (as well as an async patch). None of them >>> has been merged yet: >>> >>> https://review.openstack.org/#/c/586050/ >>> https://review.openstack.org/#/c/586716/ >>> >>> That'd be ok if at least there was some reviews. It looks like nobody >>> cares but Debian & Ubuntu people... 
:( >>> >> >> Keep in mind that your priorities are different than everyone elses. There are >> large parts of the community still working on Python 3.5 support (our >> officially supported Python 3 version), as well as smaller teams overall >> working on things like critical bugs. >> >> Unless and until we declare Python 3.7 as our new target (which I don't think >> we are ready to do yet), these kinds of patches will be on a best effort basis. > > This is exactly what I'm complaining about. OpenStack upstream has very > wrong priorities. If we really are to switch to Python 3, then we got to > make sure we're current, because that's the version distros are end up > running. Or maybe we only care if "it works on devstack" (tm)? python 3.7 has some backward incompatible changes, if i recall correctly, such as forked threads not inheriting open file descriptors from the parent. i don't think that will bite us, but it might mess with the privsep daemon, though i think we fork a full process, not a thread, in that case. the point i'm trying to make here is that following the latest python versions is likely going to require us to either A) use only the backwards-compatible subset or B) make some code test which version of python 3 we are using, the same way the six package does. so i'm not sure pushing for python 3.7 is the right thing to do. also, i would not assume all distros will ship 3.7 in the near term. i haven't checked lately, but i believe centos 7 makes 3.4 and 3.6 available in the default repos, unless that has changed. ubuntu 18.04 ships with 3.6, i believe. i'm not sure about other linux distros, but since most openstack deployments are done on LTS releases of operating systems, i would suspect that python 3.6 will be the main python 3 version we see deployed in production for some time. having a 3.7 gate is not a bad idea, but priority-wise a 3.6 gate would be much higher on my list. 
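The six-style version gating described above (option B) can be sketched as follows; the flag names are illustrative and not taken from any existing project:

```python
import sys

# six-style feature flags, computed once at import time
PY3 = sys.version_info[0] == 3
PY36_PLUS = sys.version_info >= (3, 6)
PY37_PLUS = sys.version_info >= (3, 7)

def dict_order_is_guaranteed():
    """True when dict insertion order is part of the language spec.

    CPython 3.6 preserved insertion order as an implementation detail;
    only from 3.7 is it a documented language guarantee.
    """
    return PY37_PLUS

if PY37_PLUS:
    pass  # a 3.7-only code path would live here
else:
    pass  # fallback for 3.5/3.6
```

Callers then branch on the flags rather than sprinkling `sys.version_info` comparisons through the codebase, which keeps the per-version behaviour auditable in one place.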
i think we as a community will have to decide on the minimum and maximum python 3 versions we support for each release and adjust as we go forward. i would suggest a min of 3.5 and max of 3.6 for rocky. for stein, perhaps bump that to a min of 3.6 and max of 3.7, but i think this is something that needs to be addressed community-wide via a governance resolution rather than per project. it will also impact the external python libs we can depend on, which is another reason i think this needs to be a community-wide discussion and goal that is informed by what distros are doing but not mandated by what any one distro is doing. regards sean. > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ------------------------------ Message: 43 Date: Tue, 7 Aug 2018 08:29:04 -0500 From: Matt Riedemann To: Flint WALRUS , Matt Riedemann Cc: "OpenStack Development Mailing List \(not for usage questions\)" , openstack-sigs at lists.openstack.org, "openstack-operators at lists.openstack.org" Subject: Re: [openstack-dev] [Openstack-operators] [nova] StarlingX diff analysis Message-ID: <45bd7236-b9f8-026d-620b-7356d4effa49 at gmail.com> Content-Type: text/plain; charset=utf-8; format=flowed On 8/7/2018 1:10 AM, Flint WALRUS wrote: > I didn’t had time to check StarlingX code quality, how did you feel it > while you were doing your analysis? I didn't dig into the test diffs themselves, but it was my impression, from poking around in the local git repo, that there were several changes which didn't have any test coverage. 
For the really big full stack changes (L3 CAT, CPU scaling and shared/pinned CPUs on same host), toward the end I just started glossing over a lot of that because it's so much code in so many places, so I can't really speak very well to how it was written or how well it is tested (maybe WindRiver had a more robust CI system running integration tests, I don't know). There were also some things which would have been caught in code review upstream. For example, they ignore the "force" parameter for live migration so that live migration requests always go through the scheduler. However, the "force" parameter is only on newer microversions. Before that, if you specified a host at all it would bypass the scheduler, but the change didn't take that into account, so they still have gaps in some of the things they were trying to essentially disable in the API. On the whole I think the quality is OK. It's not really possible to accurately judge that when looking at a single diff this large. -- Thanks, Matt ------------------------------ Message: 44 Date: Tue, 7 Aug 2018 16:11:43 +0200 From: Thomas Goirand To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate Message-ID: <30fd7e68-3a58-2ab1-bba0-c4c5e0eb2bf5 at debian.org> Content-Type: text/plain; charset=utf-8 On 08/07/2018 03:24 PM, Sean Mooney wrote: > so im not sure pushing for python 3.7 is the right thing to do. also i would not > assume all distros will ship 3.7 in the near term. i have not check lately but > i believe cento 7 unless make 3.4 and 3.6 available in the default repos. > ubuntu 18.04 ships with 3.6 i believe The current plan for Debian is that we'll be trying to push for Python 3.7 for Buster, which freezes in January. This freeze date means that it's going to be Rocky that will end up in the next Debian release. 
If Python 3.7 is a failure, then in late November we will remove Python 3.7 from Unstable and let Buster release with 3.6. As for Ubuntu, it is currently unclear if 18.10 will be released with Python 3.7 or not, but I believe they are trying to do that. If not, then 19.04 will for sure be released with Python 3.7. > I'm not sure about other Linux distros, but since most OpenStack > deployments are done > on LTS releases of operating systems, I would suspect that Python 3.6 > will be the main > Python 3 version we see deployed in production for some time. In short: that's wrong. > Having a 3.7 gate is not a bad idea, but priority-wise a 3.6 gate > would be much higher on my list. Wrong list. One version behind. > I think we as a community will have to decide on the minimum and > maximum Python 3 versions > we support for each release and adjust as we go forward. Whatever the OpenStack community decides is not going to change what distributions like Debian will do. This type of reasoning lacks a much-needed humility. > I would suggest a min of 3.5 and max of 3.6 for Rocky. My suggestion is that these bugs are of very high importance and that they at least deserve attention. That the gate for Python 3.7 isn't ready, I can understand, as everyone's time is limited. This doesn't mean that the OpenStack community at large should just dismiss patches that are important for downstream. > For Stein, perhaps bump that to a min of 3.6 and max of 3.7, but I think this is > something that needs to be addressed community-wide > via a governance resolution rather than per project. At this point, dropping 3.5 isn't a good idea either, even for Stein. > It will also > impact the external Python libs we can depend on too, which is > another reason I think this needs to be a community-wide discussion and > goal that is informed by what distros are doing but > not mandated by what any one distro is doing. > Regards, > Sean. Postponing any attempt to support anything current is always a bad idea. 
I don't see why there's even a controversy when one attempts to fix bugs that will, sooner or later, also hit the gate. Cheers, Thomas Goirand (zigo) ------------------------------ Message: 45 Date: Tue, 7 Aug 2018 08:14:22 -0600 From: Wesley Hayutin To: "OpenStack Development Mailing List (not for usage questions)" Cc: Derek Higgins , Kieran Forde Subject: Re: [openstack-dev] [tripleo] 3rd party ovb jobs are down Message-ID: Content-Type: text/plain; charset="utf-8" On Mon, Aug 6, 2018 at 5:55 PM Wesley Hayutin wrote: > On Mon, Aug 6, 2018 at 12:56 PM Wesley Hayutin > wrote: > >> Greetings, >> >> There is currently an unplanned outage atm for the tripleo 3rd party OVB >> based jobs. >> We will contact the list when there are more details. >> >> Thank you! >> > > OK, > I'm going to call an end to the current outage. We are closely monitoring > the ovb 3rd party jobs. > I called for the outage when we hit [1]. Once I deleted the stack > that moved the HA routers to back_up state, the networking came back online. > > Additionally, Kieran and I had to work through a number of instances that > required admin access to remove. > Once those resources were cleaned up, our CI tooling removed the rest of > the stacks in delete_failed status. The stacks in delete_failed status > were holding IP addresses that were causing new stacks to fail [2] > > There are still active issues that could cause OVB jobs to fail. > This connection issue [3] was originally thought to be DNS, however that > turned out not to be the case. > You may also see your job have a "node_failure" status; Paul has sent > updates about this issue and is working on a patch and integration into RDO > Software Factory. > > The CI team is close to including all the console logs in the regular > job logs, however if needed atm they can be viewed at [5]. > We are also adding the bmc to the list of instances that we collect logs > from. 
> > *To summarize*, the most recent outage was infra-related and the errors > were swallowed up in the bmc console log that at the time was not available > to users. > > We continue to monitor the ovb jobs at http://cistatus.tripleo.org/ > The legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master job > is at a 53% pass rate; it needs to move to a > 85% pass rate to match other > check jobs. > > Thanks all! > Following up, the legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master job is at a 78.6% pass rate today. Certainly an improvement. We had a quick sync meeting this morning w/ RDO-Cloud admins, tripleo and infra folks. There are two remaining issues. There is an active issue w/ network connections, and an issue w/ instances booting into node_failure status. New issues creep up all the time and we're actively monitoring those as well. Still shooting for an 85% pass rate. Thanks all > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1570136 > [2] http://paste.openstack.org/show/727444/ > [3] https://bugs.launchpad.net/tripleo/+bug/1785342 > [4] https://review.openstack.org/#/c/584488/ > [5] http://38.145.34.41/console-logs/?C=M;O=D -- Wes Hayutin Associate Manager Red Hat w hayutin at redhat.com T: +1919 <+19197544114> 4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... 
URL: ------------------------------ Message: 46 Date: Tue, 7 Aug 2018 09:21:39 -0500 From: Matthew Thode To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [requirements][release] FFE for openstacksdk 0.17.2 Message-ID: <20180807142139.kk2jmbokrhkkzprk at gentoo.org> Content-Type: text/plain; charset="utf-8" On 18-08-07 07:35:55, Monty Taylor wrote: > Hey all, > > I'd like to request an FFE to release 0.17.2 of openstacksdk from > stable/rocky. > > Infra discovered an issue that affects the production nodepool related to > the multi-threaded TaskManager and exception propagation. When it gets > triggered, we lose an entire cloud of capacity (whoops) until we restart the > associated nodepool-launcher process. > > Nothing in OpenStack uses the particular feature in openstacksdk in question > (yet), so nobody should need to even bump constraints. > Well, considering constraints is the minimum you can ask for an FFE for, we'll go with that :P FFE approved -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: ------------------------------ Subject: Digest Footer _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ------------------------------ End of OpenStack-dev Digest, Vol 76, Issue 7 ******************************************** From dabarren at gmail.com Wed Aug 8 11:23:09 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Wed, 8 Aug 2018 13:23:09 +0200 Subject: [openstack-dev] [kolla] Dropping core reviewer In-Reply-To: <1533652097121.31214@cisco.com> References: <1533652097121.31214@cisco.com> Message-ID: Steve, It's sad to see you leaving the kolla core team; I hope to still see you around on IRC and at Summits/PTGs. 
I truly appreciate your leadership, guidance and commitment to making kolla the great project it is now. Best of luck with your new projects and your board of directors service. Regards 2018-08-07 16:28 GMT+02:00 Steven Dake (stdake) : > Kollians, > > > Many of you that know me well know my feelings towards participating as a > core reviewer in a project. Folks with the ability to +2/+W gerrit changes > can sometimes unintentionally harm a codebase if they are not consistently > reviewing and maintaining codebase context. I also believe in leading an > exception-free life, and I'm no exception to my own rules. As I am not > reviewing Kolla actively given my OpenStack individually elected board of > directors service and other responsibilities, I am dropping core reviewer > ability for the Kolla repositories. > > > I want to take a moment to thank the thousands of people that have > contributed and shaped Kolla into the modern deployment system for > OpenStack that it is today. I personally find Kolla to be my finest body > of work as a leader. Kolla would not have been possible without the > involvement of the OpenStack global community working together to resolve > the operational pain points of OpenStack. Thank you for your contributions. > > > Finally, quoting Thierry [1] from our initial application to OpenStack, > " ... Long live Kolla!" > > > Cheers! > > -steve > > > [1] https://review.openstack.org/#/c/206789/ > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at medberry.net Wed Aug 8 11:51:27 2018 From: openstack at medberry.net (David Medberry) Date: Wed, 8 Aug 2018 06:51:27 -0500 Subject: [openstack-dev] PTG Denver Horns In-Reply-To: <1533724211.668141.1467312904.7046A12D@webmail.messagingengine.com> References: <20180808050117.6rmi4k4ubqg4ntem@gentoo.org> <20180808101828.g3luqyef7gy6q5kp@pacific.linksys.moosehall> <1533724211.668141.1467312904.7046A12D@webmail.messagingengine.com> Message-ID: So basically, we have added "sl" to osc. Duly noted. (FWIW, I frequently use "sl" as a demo of how "live" a VM is during live migration. The train "stutters" a bit during the cutover.) Now I can base it on PTG design work in a backronym fashion. On Wed, Aug 8, 2018 at 5:30 AM, Colleen Murphy wrote: > On Wed, Aug 8, 2018, at 12:18 PM, Adam Spiers wrote: > > Matthew Thode wrote: > > >On 18-08-07 23:18:26, David Medberry wrote: > > >> Requests have finally been made (today, August 7, 2018) to end the horns on > > >> the train from Denver to Denver International airport (within the city > > >> limits of Denver.) Prior approval had been given to remove the FLAGGERS > > >> that were stationed at each crossing intersection. > > >> > > >> Of particular note (at the bottom of the article): > > >> > > >> There's no estimate for how long it could take the FRA to approve quiet > > >> zones. > > >> > > >> ref: > > >> https://www.9news.com/article/news/local/next/denver-officially-asks-fra-for-permission-to-quiet-a-line-horns/73-581499094 > > >> > > >> I'd recommend bringing your sleeping aids, ear plugs, etc, just in case not > > >> approved by next month's PTG. (The Renaissance is within Denver proper as > > >> near as I can tell so that nearby intersection should be covered by this > > >> ruling/decision if and when it comes down.) > > > > > >Thanks for the update, if you are up to it, keeping us informed on this > > >would be nice, if only for the hilarity. > > > > Thanks indeed for the warning. 
> > > > If the approval doesn't go through, we may need to resume the design > > work started last year; see lines 187 onwards of > > > > https://etherpad.openstack.org/p/queens-PTG-feedback > > Luckily the client work for this is already started: > https://github.com/dtroyer/osc-choochoo > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Aug 8 12:14:57 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 8 Aug 2018 07:14:57 -0500 Subject: [openstack-dev] [cinder][api] strict schema validation and microversioning In-Reply-To: <16517087c88.bea8030a80762.5381697592028592635@ghanshyammann.com> References: <16517087c88.bea8030a80762.5381697592028592635@ghanshyammann.com> Message-ID: <20180808121456.GA10886@sm-workstation> > > > > > > Previously, Cinder API like 3.0 accepts unused fields in POST requests > > > but after [1] landed unused fields are now rejected even when Cinder API > > > 3.0 is used. > > > In my understanding on the microversioning, the existing behavior for > > > older versions should be kept. > > > Is it correct? > > > > I agree with your assessment that 3.0 was used there - and also that I > > would expect the api validation to only change if 3.53 microversion was > > used. > > +1. As you know, neutron also implemented strict validation in Rocky but with discovery via config option and extensions mechanism. Same way Cinder should make it with backward compatible way till 3.53 version. > I agree. I _thought_ that was the way it was implemented, but apparently something was missed. I will try to look at this soon and see what would need to be changed to get this behaving correctly. 
Unless someone else has the time and can beat me to it, which would be very much appreciated. From andr.kurilin at gmail.com Wed Aug 8 12:25:01 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Wed, 8 Aug 2018 15:25:01 +0300 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> Message-ID: Thanks, Thomas, for pointing out the issue. I checked it locally and here is an update for the openstack/rally (rally framework without in-tree OpenStack plugins) project: - added a unittest job with a py37 env - fixed all issues - released a new version (1.1.0) As for openstack/rally-openstack (rally plugins for the OpenStack platform), I'm planning to do the same this week. On Mon, 6 Aug 2018 at 20:11, Thomas Goirand : > On 08/02/2018 10:43 AM, Andrey Kurilin wrote: > > There's also some "raise StopIteration" issues in: > > - ceilometer > > - cinder > > - designate > > - glance > > - glare > > - heat > > - karbor > > - manila > > - murano > > - networking-ovn > > - neutron-vpnaas > > - nova > > - rally > > > > > > Can you provide any traceback or steps to reproduce the issue for the Rally > > project? > > I'm not sure there's any. The only thing I know is that it has this > StopIteration stuff, but I'm not sure if they are part of generators, in > which case they should simply be replaced by "return" if you want it to > be py 3.7 compatible. > > I didn't have time to investigate these, but at least Glance was > affected, and a patch was sent (as well as an async patch). None of them > have been merged yet: > > https://review.openstack.org/#/c/586050/ > https://review.openstack.org/#/c/586716/ > > That'd be OK if at least there were some reviews. It looks like nobody > cares but Debian & Ubuntu people... 
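[Editor's note] The "raise StopIteration" issue being discussed is PEP 479: starting with Python 3.7, a StopIteration that escapes a generator body is turned into a RuntimeError instead of silently ending iteration. The fix Thomas describes, replacing the raise with a bare return, looks like this (a generic sketch, not code from any of the projects listed):

```python
# PEP 479 in a nutshell: inside a generator, "raise StopIteration" is no
# longer a way to stop iterating once Python 3.7 is in use.

def take_until_negative_broken(numbers):
    for n in numbers:
        if n < 0:
            # Pre-3.7 idiom: this used to end the generator quietly.
            # On Python 3.7+ it surfaces as RuntimeError at the call site.
            raise StopIteration
        yield n

def take_until_negative(numbers):
    for n in numbers:
        if n < 0:
            return  # PEP 479-safe: a bare return ends the generator cleanly
        yield n

print(list(take_until_negative([1, 2, -1, 3])))  # [1, 2]
```

The safe variant behaves identically on every Python 3 version, which is why the `return` rewrite is the recommended mechanical fix.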
:( > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Wed Aug 8 12:35:20 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 8 Aug 2018 08:35:20 -0400 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <20180806190241.GA3368@devvm1> <30fd7e68-3a58-2ab1-bba0-c4c5e0eb2bf5@debian.org> <0f2f9e10-4419-8fc0-39a9-737ba2be00f4@redhat.com> Message-ID: On Wed, Aug 8, 2018 at 3:43 AM, Thomas Goirand wrote: > On 08/07/2018 06:10 PM, Corey Bryant wrote: > > I was concerned that there wouldn't be any > > gating until Ubuntu 20.04 (April 2020) > Same over here. I'm concerned that it takes another 2 years, which > really, we cannot afford. > > > but Py3.7 is available in bionic today. > > Is Bionic going to be released with Py3.7? In Debconf18 in Taiwan, Doko > didn't seem completely sure about it. > > Bionic was released with py3.7 in April 2018. Py3.6 is the default for Bionic but Py3.7 is available. 
https://launchpad.net/ubuntu/+source/python3.7 Corey Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Wed Aug 8 12:43:10 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 8 Aug 2018 08:43:10 -0400 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: <99813704-07df-29ad-d5b5-04a205a1bb8a@openstack.org> References: <99813704-07df-29ad-d5b5-04a205a1bb8a@openstack.org> Message-ID: --- Emilien Macchi On Wed, Aug 8, 2018, 5:14 AM Thierry Carrez, wrote: > Victoria Martínez de la Cruz wrote: > > I'm reaching you out to let you know that I'll be stepping down as > > coordinator for OpenStack next round. I had been contributing to this > > effort for several rounds now and I believe is a good moment for > > somebody else to take the lead. You all know how important is Outreachy > > to me and I'm grateful for all the amazing things I've done as part of > > the Outreachy program and all the great people I've met in the way. I > > plan to keep involved with the internships but leave the coordination > > tasks to somebody else. > > Thanks for helping with this effort for all this time ! > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emilien at redhat.com Wed Aug 8 12:43:40 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 8 Aug 2018 08:43:40 -0400 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Thanks Victoria for all your efforts, highly recognized! --- Emilien Macchi On Tue, Aug 7, 2018, 7:48 PM Victoria Martínez de la Cruz, < victoria at vmartinezdelacruz.com> wrote: > Hi all, > > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I had been contributing to this > effort for several rounds now and I believe is a good moment for somebody > else to take the lead. You all know how important is Outreachy to me and > I'm grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met in the way. I plan to keep > involved with the internships but leave the coordination tasks to somebody > else. > > If you are interested in becoming an Outreachy coordinator, let me know > and I can share my experience and provide some guidance. > > Thanks, > > Victoria > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.bourke at oracle.com Wed Aug 8 12:44:16 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Wed, 8 Aug 2018 13:44:16 +0100 Subject: [openstack-dev] [kolla] Dropping core reviewer In-Reply-To: References: <1533652097121.31214@cisco.com> Message-ID: +1. Will always have good memories of when Steve was getting the project off the ground. Thanks Steve for doing a great job of building the community around Kolla, and for all your help in general! 
Best of luck, -Paul On 08/08/18 12:23, Eduardo Gonzalez wrote: > Steve, > > Is sad to see you leaving kolla core team, hope to still see you around > IRC and Summit/PTGs. > > I truly appreciate your leadership, guidance and commitment to make > kolla the great project it is now. > > Best luck on your new projects and board of directors. > > Regards > > > > > > 2018-08-07 16:28 GMT+02:00 Steven Dake (stdake) >: > > Kollians, > > > Many of you that know me well know my feelings towards participating > as a core reviewer in a project.  Folks with the ability to +2/+W > gerrit changes can sometimes unintentionally harm a codebase if they > are not consistently reviewing and maintaining codebase context.  I > also believe in leading an exception-free life, and I'm no exception > to my own rules.  As I am not reviewing Kolla actively given my > OpenStack individually elected board of directors service and other > responsibilities, I am dropping core reviewer ability for the Kolla > repositories. > > > I want to take a moment to thank the thousands of people that have > contributed and shaped Kolla into the modern deployment system for > OpenStack that it is today.  I personally find Kolla to be my finest > body of work as a leader.  Kolla would not have been possible > without the involvement of the OpenStack global community working > together to resolve the operational pain points of OpenStack.  Thank > you for your contributions. > > > Finally, quoting Thierry [1] from our initial application to > OpenStack, " ... Long live Kolla!" > > > Cheers! 
> > -steve > > > [1] https://review.openstack.org/#/c/206789/ > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tenobreg at redhat.com Wed Aug 8 12:50:13 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Wed, 8 Aug 2018 09:50:13 -0300 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Thanks Victoria, you did a great job!!! Thanks for the effort On Wed, Aug 8, 2018 at 9:44 AM Emilien Macchi wrote: > Thanks Victoria for all your efforts, highly recognized! > > --- > Emilien Macchi > > On Tue, Aug 7, 2018, 7:48 PM Victoria Martínez de la Cruz, < > victoria at vmartinezdelacruz.com> wrote: > >> Hi all, >> >> I'm reaching you out to let you know that I'll be stepping down as >> coordinator for OpenStack next round. I had been contributing to this >> effort for several rounds now and I believe is a good moment for somebody >> else to take the lead. You all know how important is Outreachy to me and >> I'm grateful for all the amazing things I've done as part of the Outreachy >> program and all the great people I've met in the way. I plan to keep >> involved with the internships but leave the coordination tasks to somebody >> else. >> >> If you are interested in becoming an Outreachy coordinator, let me know >> and I can share my experience and provide some guidance. 
>> >> Thanks, >> >> Victoria >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mahati.chamarthy at gmail.com Wed Aug 8 12:59:04 2018 From: mahati.chamarthy at gmail.com (Mahati C) Date: Wed, 8 Aug 2018 18:29:04 +0530 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Thank you Victoria for the initiative and the effort all these years! On a related note, I will continue to coordinate OpenStack Outreachy for the next round and if anyone else would like to join the effort, please feel free to contact me or Victoria. Best, Mahati On Wed, Aug 8, 2018 at 5:17 AM, Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com> wrote: > Hi all, > > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I had been contributing to this > effort for several rounds now and I believe is a good moment for somebody > else to take the lead. 
You all know how important is Outreachy to me and > I'm grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met in the way. I plan to keep > involved with the internships but leave the coordination tasks to somebody > else. > > If you are interested in becoming an Outreachy coordinator, let me know > and I can share my experience and provide some guidance. > > Thanks, > > Victoria > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Aug 8 13:18:44 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 08 Aug 2018 09:18:44 -0400 Subject: [openstack-dev] =?utf-8?q?=5Boslo=5D_proposing_Mois=C3=A9s_Guimar?= =?utf-8?q?=C3=A3es_for_oslo=2Econfig_core?= In-Reply-To: <1533129742-sup-2007@lrrr.local> References: <1533129742-sup-2007@lrrr.local> Message-ID: <1533733971-sup-7865@lrrr.local> Excerpts from Doug Hellmann's message of 2018-08-01 09:27:09 -0400: > Moisés Guimarães (moguimar) did quite a bit of work on oslo.config > during the Rocky cycle to add driver support. Based on that work, > and a discussion we have had since then about general cleanup needed > in oslo.config, I think he would make a good addition to the > oslo.config review team. > > Please indicate your approval or concerns with +1/-1. > > Doug Normally I would have added moguimar to the oslo-config-core team today, after a week's wait. Funny story, though. There is no oslo-config-core team. oslo.config is one of a few of our libraries that we never set up with a separate review team. It is managed by oslo-core. 
We could set up a new review team for that library, but after giving it some thought I realized that *most* of the libraries are fairly stable, our team is pretty small, and Moisés is a good guy so maybe we don't need to worry about that. I spoke with Moisés, and he agreed to be part of the larger core team. He pointed out that the next phase of the driver work is going to happen in castellan, so it would be useful to have another reviewer there. And I'm sure we can trust him to be careful with reviews in other repos until he learns his way around. So, I would like to amend my original proposal and suggest that we add Moisés to the oslo-core team. Please indicate support with +1 or present any concerns you have. I apologize for the confusion on my part. Doug From kgiusti at gmail.com Wed Aug 8 13:31:07 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Wed, 8 Aug 2018 09:31:07 -0400 Subject: [openstack-dev] =?utf-8?q?=5Boslo=5D_proposing_Mois=C3=A9s_Guimar?= =?utf-8?q?=C3=A3es_for_oslo=2Econfig_core?= In-Reply-To: <1533733971-sup-7865@lrrr.local> References: <1533129742-sup-2007@lrrr.local> <1533733971-sup-7865@lrrr.local> Message-ID: On Wed, Aug 8, 2018 at 9:19 AM Doug Hellmann wrote: > > Excerpts from Doug Hellmann's message of 2018-08-01 09:27:09 -0400: > > Moisés Guimarães (moguimar) did quite a bit of work on oslo.config > > during the Rocky cycle to add driver support. Based on that work, > > and a discussion we have had since then about general cleanup needed > > in oslo.config, I think he would make a good addition to the > > oslo.config review team. > > > > Please indicate your approval or concerns with +1/-1. > > > > Doug > > Normally I would have added moguimar to the oslo-config-core team > today, after a week's wait. Funny story, though. There is no > oslo-config-core team. > > oslo.config is one of a few of our libraries that we never set up with a > separate review team. It is managed by oslo-core. 
We could set up a new > review team for that library, but after giving it some thought I > realized that *most* of the libraries are fairly stable, our team is > pretty small, and Moisés is a good guy so maybe we don't need to worry > about that. > > I spoke with Moisés, and he agreed to be part of the larger core team. > He pointed out that the next phase of the driver work is going to happen > in castellan, so it would be useful to have another reviewer there. And > I'm sure we can trust him to be careful with reviews in other repos > until he learns his way around. > > So, I would like to amend my original proposal and suggest that we add > Moisés to the oslo-core team. > > Please indicate support with +1 or present any concerns you have. I > apologize for the confusion on my part. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev +1 -- Ken Giusti (kgiusti at gmail.com) From doug at doughellmann.com Wed Aug 8 13:32:12 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 08 Aug 2018 09:32:12 -0400 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: <1533735015-sup-282@lrrr.local> Excerpts from Victoria Martínez de la Cruz's message of 2018-08-07 20:47:28 -0300: > Hi all, > > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I had been contributing to this > effort for several rounds now and I believe is a good moment for somebody > else to take the lead. You all know how important is Outreachy to me and > I'm grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met in the way. 
I plan to keep > involved with the internships but leave the coordination tasks to somebody > else. > > If you are interested in becoming an Outreachy coordinator, let me know and > I can share my experience and provide some guidance. > > Thanks, > > Victoria Thank you, Victoria. Mentoring new developers is an important responsibility, and your patient service in working with the Outreachy program has set a good example. Doug From doug at doughellmann.com Wed Aug 8 13:35:04 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 08 Aug 2018 09:35:04 -0400 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> Message-ID: <1533735233-sup-6263@lrrr.local> Excerpts from Andrey Kurilin's message of 2018-08-08 15:25:01 +0300: > Thanks Thomas for pointing to the issue, I checked it locally and here is > an update for openstack/rally (rally framework without in-tree OpenStack > plugins) project: > > - added unittest job with py37 env It would be really useful if you could help set up a job definition in openstack-zuul-jobs like we have for openstack-tox-py36 [1], so that other projects can easily add the job, too. Do you have time to do that? 
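(For illustration, a job definition of the kind Doug is asking for would presumably mirror the existing openstack-tox-py36 entry in openstack-zuul-jobs [1]. The sketch below is an assumption about its shape — the name, description text, and variables are modeled on the py36 job, not taken from a merged definition:)

```yaml
# Hypothetical sketch of an openstack-tox-py37 job, modeled on the
# openstack-tox-py36 definition; not the actual upstream entry.
- job:
    name: openstack-tox-py37
    parent: openstack-tox
    description: |
      Run unit tests for an OpenStack Python project under
      cPython version 3.7.
    vars:
      tox_envlist: py37
```

A project could then reference the job from its own .zuul.yaml check and gate pipelines in the usual way.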
Doug [1] http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/jobs.yaml#n354 From doug at doughellmann.com Wed Aug 8 13:43:07 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 08 Aug 2018 09:43:07 -0400 Subject: [openstack-dev] [all][elections] Project Team Lead Election Conclusion and Results In-Reply-To: <20180808002015.GJ9540@thor.bakeyournoodle.com> References: <20180808002015.GJ9540@thor.bakeyournoodle.com> Message-ID: <1533735684-sup-9934@lrrr.local> Excerpts from Tony Breeds's message of 2018-08-08 10:20:15 +1000: > Thank you to the electorate, to all those who voted and to all > candidates who put their name forward for Project Team Lead (PTL) in > this election. A healthy, open process breeds trust in our decision > making capability thank you to all those who make this process possible. > > Now for the results of the PTL election process, please join me in > extending congratulations to the following PTLs: > > * Adjutant : Adrian Turjak > * Barbican : Ade Lee > * Blazar : Pierre Riteau > * Chef OpenStack : Samuel Cassiba > * Cinder : Jay Bryant > * Cloudkitty : Luka Peschke > * Congress : Eric Kao > * Cyborg : Li Liu > * Designate : Graham Hayes > * Documentation : Petr Kovar > * Dragonflow : [1] > * Ec2 Api : Andrey Pavlov > * Freezer : [1] > * Glance : Erno Kuvaja > * Heat : Rico Lin > * Horizon : Ivan Kolodyazhny > * I18n : Frank Kloeker > * Infrastructure : Clark Boylan > * Ironic : Julia Kreger > * Karbor : Pengju Jiao > * Keystone : Lance Bragstad > * Kolla : Eduardo Gonzalez Gutierrez > * Kuryr : Daniel Mellado > * Loci : [1] > * Magnum : Spyros Trigazis > * Manila : Thomas Barron > * Masakari : Sampath Priyankara > * Mistral : Dougal Matthews > * Monasca : Witek Bedyk > * Murano : Rong Zhu > * Neutron : Miguel Lavalle > * Nova : Melanie Witt > * Octavia : Michael Johnson > * OpenStackAnsible : Mohammed Naser > * OpenStackClient : Dean Troyer > * OpenStackSDK : Monty Taylor > * OpenStack Charms : Frode Nordahl 
> * OpenStack Helm : Pete Birley > * Oslo : Ben Nemec > * Packaging Rpm : [1] > * PowerVMStackers : Matthew Edmonds > * Puppet OpenStack : Tobias Urdin > * Qinling : Lingxian Kong > * Quality Assurance : Ghanshyam Mann > * Rally : Andrey Kurilin > * RefStack : [1] > * Release Management : Sean McGinnis > * Requirements : Matthew Thode > * Sahara : Telles Nobrega > * Searchlight : [1] > * Security : [1] > * Senlin : Duc Truong > * Solum : Rong Zhu > * Storlets : Kota Tsuyuzaki > * Swift : John Dickinson > * Tacker : dharmendra kushwaha > * Telemetry : Julien Danjou > * Tricircle : baisen song > * Tripleo : Juan Osorio Robles > * Trove : [1] > * Vitrage : Ifat Afek > * Watcher : Alexander Chadin > * Winstackers : [1] > * Zaqar : wang hao > * Zun : Wei Ji > > Elections: > * Senlin: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_5655e3b3821ece95 > * Tacker: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_fe41cc8acc6ead91 > > Election process details and results are also available here: https://governance.openstack.org/election/ > > Thank you to all involved in the PTL election process, > > Yours Tony. > > [1] The TC is currently evaluating options for these projects. Thank you all for agreeing to take on this additional responsibility to keep OpenStack humming along smoothly. And thank you to our election officials for conducting another election. Doug From jaypipes at gmail.com Wed Aug 8 13:46:56 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 8 Aug 2018 09:46:56 -0400 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: <1533735015-sup-282@lrrr.local> References: <1533735015-sup-282@lrrr.local> Message-ID: <87a1b8c3-374b-a02b-a2ee-204e495ba815@gmail.com> On 08/08/2018 09:32 AM, Doug Hellmann wrote: > Excerpts from Victoria Martínez de la Cruz's message of 2018-08-07 20:47:28 -0300: >> Hi all, >> >> I'm reaching you out to let you know that I'll be stepping down as >> coordinator for OpenStack next round. 
I had been contributing to this >> effort for several rounds now and I believe is a good moment for somebody >> else to take the lead. You all know how important is Outreachy to me and >> I'm grateful for all the amazing things I've done as part of the Outreachy >> program and all the great people I've met in the way. I plan to keep >> involved with the internships but leave the coordination tasks to somebody >> else. >> >> If you are interested in becoming an Outreachy coordinator, let me know and >> I can share my experience and provide some guidance. >> >> Thanks, >> >> Victoria > > Thank you, Victoria. Mentoring new developers is an important > responsibility, and your patient service in working with the Outreachy > program has set a good example. Big +1. -jay From fungi at yuggoth.org Wed Aug 8 14:12:43 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 8 Aug 2018 14:12:43 +0000 Subject: [openstack-dev] PTG Denver Horns In-Reply-To: References: <20180808050117.6rmi4k4ubqg4ntem@gentoo.org> <20180808101828.g3luqyef7gy6q5kp@pacific.linksys.moosehall> <1533724211.668141.1467312904.7046A12D@webmail.messagingengine.com> Message-ID: <20180808141243.w4hw7zcptrahqovm@yuggoth.org> On 2018-08-08 06:51:27 -0500 (-0500), David Medberry wrote: > So basically, we have added "sl" to osc. Duly noted. > > (FWIW, I frequently use "sl" as a demo of how "live" a VM is during live > migration. The train "stutters" a bit during the cutover.) > > Now I can base it on PTG design work in a backronym fashion. [...] Speaking of which, is it too soon to put in bids to name the Denver summit and associated release in 2019 "OpenStack Train"? I feel like we're all honorary railroad engineers by now. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zigo at debian.org Wed Aug 8 14:18:07 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 8 Aug 2018 16:18:07 +0200 Subject: [openstack-dev] Paste unmaintained In-Reply-To: References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> <1533219691-sup-5515@lrrr.local> <1533655006-sup-850@lrrr.local> Message-ID: On 08/08/2018 10:43 AM, Chris Dent wrote: > On Wed, 8 Aug 2018, Thomas Goirand wrote: > >> If you don't configure uwsgi to do any special logging, then the only >> thing you'll see in the log file is client requests, without any kind of >> logging from the wsgi application. To have proper logging, one needs to >> add, in the uwsgi config file: >> >> paste-logger = true >> >> If you do that, then you need the python3-pastescript installed, which >> itself depends on the python3-paste package. >> >> Really, I don't see how an operator could run without the paste-logger >> option activated. Without it, you see nothing in the logs. > > I'm pretty sure your statements here are not true. In the uwsgi > configs for services in devstack, paste-logger is not used. I have never mentioned devstack ! :) > Can you please point me to where you are seeing these problems? In the Debian packages, if I don't do paste-logger = true, I will not see any debug output. > Clearly something is confused somewhere. Is the difference in our > experiences that both of the situations I describe above are happy > with logging being on stderr and you're talking about being able to > config logging to files, within the application itself? If there's no paste-logger, what the application prints on stderr doesn't appear in the log file that uwsgi logs into. That's precisely what paste-logger fixes. > If that's > the case then my response would be: don't do that. 
Let systemd, or > your container, or apache2, or whatever process/service orchestration > system you have going manage that. That's what they are there for. I'd be more than happy to have a better logging without the need of paste/pastescript, but so far, that's the only way I found that worked with uwsgi. Do you know any other way? Cheers, Thomas Goirand (zigo) From mordred at inaugust.com Wed Aug 8 14:24:56 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 8 Aug 2018 09:24:56 -0500 Subject: [openstack-dev] PTG Denver Horns In-Reply-To: <20180808141243.w4hw7zcptrahqovm@yuggoth.org> References: <20180808050117.6rmi4k4ubqg4ntem@gentoo.org> <20180808101828.g3luqyef7gy6q5kp@pacific.linksys.moosehall> <1533724211.668141.1467312904.7046A12D@webmail.messagingengine.com> <20180808141243.w4hw7zcptrahqovm@yuggoth.org> Message-ID: <22883a13-da31-b029-ca5f-104e1f247673@inaugust.com> On 08/08/2018 09:12 AM, Jeremy Stanley wrote: > On 2018-08-08 06:51:27 -0500 (-0500), David Medberry wrote: >> So basically, we have added "sl" to osc. Duly noted. >> >> (FWIW, I frequently use "sl" as a demo of how "live" a VM is during live >> migration. The train "stutters" a bit during the cutover.) >> >> Now I can base it on PTG design work in a backronym fashion. > [...] > > Speaking of which, is it too soon to put in bids to name the Denver > summit and associated release in 2019 "OpenStack Train"? I feel like > we're all honorary railroad engineers by now. It seems like a good opportunity to apply the Brian Waldon exception. 
From jean-philippe at evrard.me Wed Aug 8 14:25:24 2018 From: jean-philippe at evrard.me (jean-philippe@evrard.me) Date: Wed, 08 Aug 2018 16:25:24 +0200 Subject: [openstack-dev] PTG Denver Horns In-Reply-To: <20180808141243.w4hw7zcptrahqovm@yuggoth.org> Message-ID: <346c-5b6afd80-3-3cb29180@195442028> On Wednesday, August 08, 2018 16:12 CEST, Jeremy Stanley wrote: > On 2018-08-08 06:51:27 -0500 (-0500), David Medberry wrote: > > So basically, we have added "sl" to osc. Duly noted. > > > > (FWIW, I frequently use "sl" as a demo of how "live" a VM is during live > > migration. The train "stutters" a bit during the cutover.) > > > > Now I can base it on PTG design work in a backronym fashion. > [...] > > Speaking of which, is it too soon to put in bids to name the Denver > summit and associated release in 2019 "OpenStack Train"? I feel like > we're all honorary railroad engineers by now. > -- > Jeremy Stanley +1 From fungi at yuggoth.org Wed Aug 8 14:26:19 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 8 Aug 2018 14:26:19 +0000 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: Message-ID: <20180808142619.nnurgmi4l54xtqvo@yuggoth.org> On 2018-08-07 20:47:28 (-0300), Victoria Martínez de la Cruz wrote: > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. [...] You've done a great job, and mentoring a new coordinator is just another great example of that; thanks for all you've done and all you have yet to do, it's important! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sileht at sileht.net Wed Aug 8 14:28:10 2018 From: sileht at sileht.net (Mehdi Abaakouk) Date: Wed, 8 Aug 2018 16:28:10 +0200 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <1533735233-sup-6263@lrrr.local> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <1533735233-sup-6263@lrrr.local> Message-ID: <20180808142810.dsrrzyufok23hr4i@sileht.net> On Wed, Aug 08, 2018 at 09:35:04AM -0400, Doug Hellmann wrote: >Excerpts from Andrey Kurilin's message of 2018-08-08 15:25:01 +0300: >> Thanks Thomas for pointing to the issue, I checked it locally and here is >> an update for openstack/rally (rally framework without in-tree OpenStack >> plugins) project: >> >> - added unittest job with py37 env > >It would be really useful if you could help set up a job definition in >openstack-zuul-jobs like we have for openstack-tox-py36 [1], so that other >projects can easily add the job, too. Do you have time to do that? I have already done this kind of stuff for Telemetry project. And our project already gate on py37. The only restriction is that we have to use a fedora-28 instance with the python-3.7 package installed manually (via bindep.txt). Ubuntu 18.04 LTS only a beta version of python 3.7 in universe repo. So I'm guessing we have to wait next Ubuntu LTS to add this job everywhere. 
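(To sketch the approach Mehdi describes — running the py37 tox env on a Fedora node with the 3.7 interpreter pulled in through bindep — something like the following could work; the job name, nodeset label, and package name here are assumptions, and the ceilometer links just below show the real configuration:)

```yaml
# Hypothetical sketch, not ceilometer's actual configuration: run the
# py37 unit test env on a Fedora 28 node, installing the python 3.7
# interpreter via bindep.
- job:
    name: telemetry-tox-py37
    parent: openstack-tox
    nodeset: fedora-latest          # assumed nodeset label
    vars:
      tox_envlist: py37

# companion bindep.txt line (assumed package name on Fedora 28):
#   python3.7 [platform:rpm test]
```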
https://github.com/openstack/ceilometer/blob/master/.zuul.yaml#L12 https://github.com/openstack/ceilometer/blob/master/bindep.txt#L7 Cheers, -- Mehdi Abaakouk mail: sileht at sileht.net irc: sileht From sileht at sileht.net Wed Aug 8 14:29:33 2018 From: sileht at sileht.net (Mehdi Abaakouk) Date: Wed, 8 Aug 2018 16:29:33 +0200 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: References: <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <20180806190241.GA3368@devvm1> <30fd7e68-3a58-2ab1-bba0-c4c5e0eb2bf5@debian.org> <0f2f9e10-4419-8fc0-39a9-737ba2be00f4@redhat.com> Message-ID: <20180808142933.nzul7maewha4ptol@sileht.net> On Wed, Aug 08, 2018 at 08:35:20AM -0400, Corey Bryant wrote: >On Wed, Aug 8, 2018 at 3:43 AM, Thomas Goirand wrote: > >> On 08/07/2018 06:10 PM, Corey Bryant wrote: >> > I was concerned that there wouldn't be any >> > gating until Ubuntu 20.04 (April 2020) >> Same over here. I'm concerned that it takes another 2 years, which >> really, we cannot afford. >> >> > but Py3.7 is available in bionic today. Yeah but it's the beta3 version. -- Mehdi Abaakouk mail: sileht at sileht.net irc: sileht From fungi at yuggoth.org Wed Aug 8 14:31:35 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 8 Aug 2018 14:31:35 +0000 Subject: [openstack-dev] [cyborg]Team Weekly Meeting 2018.08.08 In-Reply-To: References: Message-ID: <20180808143135.jvgacs4vqzuptxk5@yuggoth.org> On 2018-08-08 16:45:22 +0800 (+0800), Zhipeng Huang wrote: > We are rushing towards the end of Rocky cycle and let's use the meeting to > sync up with any important features still on the fly. > > starting UTC1400 at #openstack-cyborg, as usual Please consider adding your meeting to the IRC meetings schedule at http://eavesdrop.openstack.org/ (the first sentence in the section contains instructions for doing so), that way people in the community can find out when it is held without having to rely solely on announcements. Thanks! 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From no-reply at openstack.org Wed Aug 8 14:33:48 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 08 Aug 2018 14:33:48 -0000 Subject: [openstack-dev] storlets 2.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for storlets for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/storlets/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/storlets/log/?h=stable/rocky Release notes for storlets can be found at: http://docs.openstack.org/releasenotes/storlets/ From cdent+os at anticdent.org Wed Aug 8 14:38:45 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 8 Aug 2018 15:38:45 +0100 (BST) Subject: [openstack-dev] Paste unmaintained In-Reply-To: References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> <1533219691-sup-5515@lrrr.local> <1533655006-sup-850@lrrr.local> Message-ID: On Wed, 8 Aug 2018, Thomas Goirand wrote: > I'd be more than happy to have a better logging without the need of > paste/pastescript, but so far, that's the only way I found that worked > with uwsgi. Do you know any other way? Yes, use systemd or some other supervisor which is responsible for catching stderr. That's why I pointed to devstack and my container thing. Not because I think devstack is glorious or anything, but because the logging works and presumably something can be learned from that. 
Apparently what you're doing in the debian packages doesn't work (without logging middleware), which isn't surprising because that's exactly how uwsgi and WSGI is supposed to work. What I've been trying to suggest throughout this subthread is that it sounds like however things are being packaged in debian is not right, and that something needs to be changed. Also that your bold assertion that uwsgi doesn't work without paste is only true in the narrow way in which you are using it (which is the wrong way to use it). -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From amy at demarco.com Wed Aug 8 14:48:30 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 8 Aug 2018 09:48:30 -0500 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Victoria, Thank you for everything you've done with the Outreachy program! Amy (spotz) On Tue, Aug 7, 2018 at 6:47 PM, Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com> wrote: > Hi all, > > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I had been contributing to this > effort for several rounds now and I believe is a good moment for somebody > else to take the lead. You all know how important is Outreachy to me and > I'm grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met in the way. I plan to keep > involved with the internships but leave the coordination tasks to somebody > else. > > If you are interested in becoming an Outreachy coordinator, let me know > and I can share my experience and provide some guidance. 
> > Thanks, > > Victoria > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christophe.sauthier at objectif-libre.com Wed Aug 8 14:49:51 2018 From: christophe.sauthier at objectif-libre.com (Christophe Sauthier) Date: Wed, 08 Aug 2018 16:49:51 +0200 Subject: [openstack-dev] [release] FFE for python-cloudkittyclient 2.0.0 Message-ID: <66c100cb9c561f67f95f3d773d7bf1a1@objectif-libre.com> Hello all I'd like to ask for a FFE to release for python-cloudkittyclient 2.0.0 The review is located here : Since it is the first time we are asking for such thing so please do not hesitate to point me if I am not doing things right. Cheers Christophe ---- Christophe Sauthier CEO Objectif Libre : Au service de votre Cloud +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com www.objectif-libre.com | @objectiflibre | www.linkedin.com/company/objectif-libre Recevez la Pause Cloud Et DevOps : olib.re/abo-pause From sundar.nadathur at intel.com Wed Aug 8 15:06:27 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 8 Aug 2018 08:06:27 -0700 Subject: [openstack-dev] [Cyborg] Agent - Conductor update In-Reply-To: References: Message-ID: <32594261-16f2-541a-fa05-114e03e1dba1@intel.com> Hi Zhenghao, On 8/8/2018 4:10 AM, Zhenghao ZH21 Wang wrote: > Hi Sundar, > All look good to me. And I agreed with the new solution as your suggestion. But I still confused why we will lost some device info if we do diff on agent? > Could u give me an example to explain how to lost and what we will lost? To do the diff, the agent would need the previous configuration of devices on the host. 
If it keeps that previous config in its process memory, it will lose it if it dies and restarts for any reason. So, it should persist it. The ideal place to persist that is the Cyborg db. So, let us say the agent writes the config to the db each time via the conductor. However, consider the scenario where the agent pushes an update to the conductor, and restarts before the conductor has written it to the db. This can result in a race condition. If we don't address that properly, the agent may get the copy in the db and not the latest update. That is the loss we were talking about. To prevent that race, the restarted agent should ask the conductor to get the latest, and the conductor must be smart enough to 'synchronize' with the previous unfinished update. This seems like unnecessary complication. I think this is what you are asking about. If not, please let me know what you meant. > Best regards > Zhenghao Wang > Cloud Researcher > > Email: wangzh21 at lenovo.com > Tel: (+86) 18519550096 > Enterprise & Cloud Research Lab, Lenovo Research > No.6 Shangdi West Road, Haidian District, Beijing Regards, Sundar From jungleboyj at gmail.com Wed Aug 8 15:29:00 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 8 Aug 2018 10:29:00 -0500 Subject: [openstack-dev] PTG Denver Horns In-Reply-To: <20180808141243.w4hw7zcptrahqovm@yuggoth.org> References: <20180808050117.6rmi4k4ubqg4ntem@gentoo.org> <20180808101828.g3luqyef7gy6q5kp@pacific.linksys.moosehall> <1533724211.668141.1467312904.7046A12D@webmail.messagingengine.com> <20180808141243.w4hw7zcptrahqovm@yuggoth.org> Message-ID: On 8/8/2018 9:12 AM, Jeremy Stanley wrote: > On 2018-08-08 06:51:27 -0500 (-0500), David Medberry wrote: >> So basically, we have added "sl" to osc. Duly noted. >> >> (FWIW, I frequently use "sl" as a demo of how "live" a VM is during live >> migration. The train "stutters" a bit during the cutover.) >> >> Now I can base it on PTG design work in a backronym fashion. > [...] 
> > Speaking of which, is it too soon to put in bids to name the Denver > summit and associated release in 2019 "OpenStack Train"? I feel like > we're all honorary railroad engineers by now. > +1 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Wed Aug 8 15:39:55 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 8 Aug 2018 10:39:55 -0500 Subject: [openstack-dev] Re: [python-senlinclient][release][requirements] FFE for python-senlinclient 1.8.0 In-Reply-To: <201808081905507001038@zte.com.cn> References: <20180807202909.GA11176@sm-workstation> <201808081905507001038@zte.com.cn> Message-ID: <20180808153955.lk25opskd2as3esf@gentoo.org> On 18-08-08 19:05:50, liu.xuefeng1 at zte.com.cn wrote: > Yes, just need upper-constraints raised for this. > > On Tue, Aug 07, 2018 at 03:25:39PM -0500, Sean McGinnis wrote: > > Added requirements tag to the subject since this is a requirements FFE. > > > > On Tue, Aug 07, 2018 at 11:44:04PM +0800, liu.xuefeng1 at zte.com.cn wrote: > > > hi, all > > > > > > > > > I'd like to request an FFE to release 1.8.0(stable/rocky) > > > for python-senlinclient. > > > > > > The CURRENT_API_VERSION has been changed to "1.10", we need this release. > > > > > XueFeng, do you just need upper-constraints raised for this, or also the > minimum version? From that last sentence, I'm assuming you need to ensure only > 1.8.0 is used for Rocky deployments. > OK, if it's JUST upper-constraints that needs to change then FFE approved by requirements. 
-- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From prometheanfire at gentoo.org Wed Aug 8 15:44:07 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 8 Aug 2018 10:44:07 -0500 Subject: [openstack-dev] [release] FFE for python-cloudkittyclient 2.0.0 In-Reply-To: <66c100cb9c561f67f95f3d773d7bf1a1@objectif-libre.com> References: <66c100cb9c561f67f95f3d773d7bf1a1@objectif-libre.com> Message-ID: <20180808154407.6gtjsm4bqqy2jkfb@gentoo.org> On 18-08-08 16:49:51, Christophe Sauthier wrote: > Hello all > > I'd like to ask for a FFE to release for python-cloudkittyclient 2.0.0 > > The review is located here : > > Since it is the first time we are asking for such thing so please do not > hesitate to point me if I am not doing things right. Will you require a bump to the minimum version required in requirements files? -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zigo at debian.org Wed Aug 8 15:54:27 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 8 Aug 2018 17:54:27 +0200 Subject: [openstack-dev] Paste unmaintained In-Reply-To: References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> <1533219691-sup-5515@lrrr.local> <1533655006-sup-850@lrrr.local> Message-ID: <27e5d479-0901-64c4-29f7-07414a24b49c@debian.org> On 08/08/2018 04:38 PM, Chris Dent wrote: > On Wed, 8 Aug 2018, Thomas Goirand wrote: > >> I'd be more than happy to have a better logging without the need of >> paste/pastescript, but so far, that's the only way I found that worked >> with uwsgi. Do you know any other way? > > Yes, use systemd or some other supervisor which is responsible for > catching stderr. 
That's why I pointed to devstack and my container > thing. Not because I think devstack is glorious or anything, but > because the logging works and presumably something can be learned > from that. > > Apparently what you're doing in the debian packages doesn't work > (without logging middleware), which isn't surprising because that's > exactly how uwsgi and WSGI is supposed to work. > > What I've been trying to suggest throughout this subthread is that > it sounds like however things are being packaged in debian is not > right, and that something needs to be changed. Also that your bold > assertion that uwsgi doesn't work without paste is only true in the > narrow way in which you are using it (which is the wrong way to use > it). Thanks. I'll try to investigate then. However, the way you're suggesting mandates systemd which is probably not desirable. Cheers, Thomas Goirand (zigo) From cdent+os at anticdent.org Wed Aug 8 15:57:00 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 8 Aug 2018 16:57:00 +0100 (BST) Subject: [openstack-dev] Paste unmaintained In-Reply-To: <27e5d479-0901-64c4-29f7-07414a24b49c@debian.org> References: <687c3ce92c327a15b6a37930e49ebc97f4d6a95f.camel@redhat.com> <1533219691-sup-5515@lrrr.local> <1533655006-sup-850@lrrr.local> <27e5d479-0901-64c4-29f7-07414a24b49c@debian.org> Message-ID: On Wed, 8 Aug 2018, Thomas Goirand wrote: > I'll try to investigate then. However, the way you're suggesting > mandates systemd which is probably not desirable. 
"or some other supervisor" -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From christophe.sauthier at objectif-libre.com Wed Aug 8 15:57:32 2018 From: christophe.sauthier at objectif-libre.com (Christophe Sauthier) Date: Wed, 08 Aug 2018 17:57:32 +0200 Subject: [openstack-dev] [release] FFE for python-cloudkittyclient 2.0.0 In-Reply-To: <20180808154407.6gtjsm4bqqy2jkfb@gentoo.org> References: <66c100cb9c561f67f95f3d773d7bf1a1@objectif-libre.com> <20180808154407.6gtjsm4bqqy2jkfb@gentoo.org> Message-ID: <3e0ace58d380fa3628939257cc70d1ad@objectif-libre.com> Le 2018-08-08 17:44, Matthew Thode a écrit : > On 18-08-08 16:49:51, Christophe Sauthier wrote: >> Hello all >> >> I'd like to ask for a FFE to release for python-cloudkittyclient >> 2.0.0 >> >> The review is located here : >> >> Since it is the first time we are asking for such thing so please do >> not >> hesitate to point me if I am not doing things right. > > Will you require a bump to the minimum version required in > requirements > files? We can do a bump since only the cloudkitty-dashboard depends on the client. Thanks for your help ! 
Christophe ---- Christophe Sauthier CEO Objectif Libre : Au service de votre Cloud +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com www.objectif-libre.com | @objectiflibre | www.linkedin.com/company/objectif-libre Recevez la Pause Cloud Et DevOps : olib.re/abo-pause From mriedemos at gmail.com Wed Aug 8 16:08:25 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 8 Aug 2018 11:08:25 -0500 Subject: [openstack-dev] [nova]Notification update week 30 In-Reply-To: <1532354290.11749.1@smtp.office365.com> References: <1532354290.11749.1@smtp.office365.com> Message-ID: On 7/23/2018 8:58 AM, Balázs Gibizer wrote: > Versioned notification transformation > ------------------------------------- > We have only a handfull of patches left before we can finally finish the > multi year effort of transforming every legacy notifiaction to the > versioned format. 3 of those patches already have a +2: > https://review.openstack.org/#/q/status:open+topic:bp/versioned-notification-transformation-rocky Since we're past feature freeze for Rocky I had assumed this blueprint was going to be closed and we'd wrap up early in Stein, then start talking about communicating deprecation of the legacy notifications at the PTG. -- Thanks, Matt From ifatafekn at gmail.com Wed Aug 8 16:13:07 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Wed, 8 Aug 2018 19:13:07 +0300 Subject: [openstack-dev] [vitrage][ptl] PTL on vacation Message-ID: Hi all, I'll be on vacation between 12th and 31st of August. Anna Reznikov ( anna.reznikov at nokia.com) will replace me during this time and will handle the Vitrage release. Thanks, Ifat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From prometheanfire at gentoo.org Wed Aug 8 16:20:53 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 8 Aug 2018 11:20:53 -0500 Subject: [openstack-dev] [release] FFE for python-cloudkittyclient 2.0.0 In-Reply-To: <3e0ace58d380fa3628939257cc70d1ad@objectif-libre.com> References: <66c100cb9c561f67f95f3d773d7bf1a1@objectif-libre.com> <20180808154407.6gtjsm4bqqy2jkfb@gentoo.org> <3e0ace58d380fa3628939257cc70d1ad@objectif-libre.com> Message-ID: <20180808162053.2no7mse47iystjf3@gentoo.org> On 18-08-08 17:57:32, Christophe Sauthier wrote: > > > Le 2018-08-08 17:44, Matthew Thode a écrit : > > On 18-08-08 16:49:51, Christophe Sauthier wrote: > > > Hello all > > > > > > I'd like to ask for a FFE to release for python-cloudkittyclient > > > 2.0.0 > > > > > > The review is located here : > > > > > > Since it is the first time we are asking for such thing so please do > > > not > > > hesitate to point me if I am not doing things right. > > > > Will you require a bump to the minimum version required in requirements > > files? > > We can do a bump since only the cloudkitty-dashboard depends on the client. > SGTM then, ack from reqs -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From whayutin at redhat.com Wed Aug 8 16:42:07 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 8 Aug 2018 10:42:07 -0600 Subject: [openstack-dev] [tripleo] 3rd party ovb jobs are down In-Reply-To: References: Message-ID: [image: rdo-ci-3rd-party.png] RDO 3rd party jobs at 85% :) If you round up. Nice work all! On Tue, Aug 7, 2018 at 10:14 AM Wesley Hayutin wrote: > On Mon, Aug 6, 2018 at 5:55 PM Wesley Hayutin wrote: > >> On Mon, Aug 6, 2018 at 12:56 PM Wesley Hayutin >> wrote: >> >>> Greetings, >>> >>> There is currently an unplanned outtage atm for the tripleo 3rd party >>> OVB based jobs. 
>>> We will contact the list when there are more details. >>> >>> Thank you! >>> >> >> OK, >> I'm going to call an end to the current outage. We are closely >> monitoring the ovb 3rd party jobs. >> I called the outage when we hit [1]. Once I deleted the stack >> that moved the HA routers to back_up state, the networking came back online. >> >> Additionally Kieran and I had to work through a number of instances that >> required admin access to remove. >> Once those resources were cleaned up, our CI tooling removed the rest of >> the stacks in delete_failed status. The stacks in delete_failed status >> were holding IP addresses that were causing new stacks to fail [2] >> >> There are still active issues that could cause OVB jobs to fail. >> This connection issue [3] was originally thought to be DNS, however that >> turned out to not be the case. >> You may also see your job have a "node_failure" status; Paul has sent >> updates about this issue and is working on a patch and integration into rdo >> software factory. >> >> The CI team is close to including all the console logs into the regular >> job logs, however if needed atm they can be viewed at [5]. >> We are also adding the bmc to the list of instances that we collect logs >> from. >> >> *To summarize* the most recent outage was infra related and the errors >> were swallowed up in the bmc console log that at the time was not available >> to users. >> >> We continue to monitor the ovb jobs at http://cistatus.tripleo.org/ >> The legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master job >> is at a 53% pass rate; it needs to move to a > 85% pass rate to match other >> check jobs. >> >> Thanks all! >> > > Following up, > legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master job is at > a 78.6% pass rate today. Certainly an improvement. > > We had a quick sync meeting this morning w/ RDO-Cloud admins, tripleo and > infra folks. There are two remaining issues. 
> There is an active issue w/ network connections, and an issue w/ instances > booting into node_failure status. New issues > creep up all the time and we're actively monitoring those as well. Still > shooting for 85% pass rate. > > Thanks all > > > >> >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1570136 >> [2] http://paste.openstack.org/show/727444/ >> [3] https://bugs.launchpad.net/tripleo/+bug/1785342 >> [4] https://review.openstack.org/#/c/584488/ >> [5] http://38.145.34.41/console-logs/?C=M;O=D >> >> >> >> >> >> >>> >>> -- >>> >>> Wes Hayutin >>> >>> Associate MANAGER >>> >>> Red Hat >>> >>> >>> >>> w hayutin at redhat.com T: +1919 <+19197544114> >>> 4232509 IRC: weshay >>> >>> >>> View my calendar and check my availability for meetings HERE >>> >>> >> -- >> >> Wes Hayutin >> >> Associate MANAGER >> >> Red Hat >> >> >> >> w hayutin at redhat.com T: +1919 <+19197544114> >> 4232509 IRC: weshay >> >> >> View my calendar and check my availability for meetings HERE >> >> > -- > > Wes Hayutin > > Associate MANAGER > > Red Hat > > > > w hayutin at redhat.com T: +1919 <+19197544114> > 4232509 IRC: weshay > > > View my calendar and check my availability for meetings HERE > > -- Wes Hayutin Associate MANAGER Red Hat w hayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: rdo-ci-3rd-party.png Type: image/png Size: 161915 bytes Desc: not available URL: From jungleboyj at gmail.com Wed Aug 8 17:04:14 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 8 Aug 2018 12:04:14 -0500 Subject: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ... Message-ID: <9b0850aa-2c4d-57c6-5a65-746c28607122@gmail.com> Team, A reminder that we have our weekly Cinder meeting on Wednesdays at 16:00 UTC.  
I bring this up as I can no longer send the courtesy pings without being kicked from IRC.  So, if you wish to join the meeting please add a reminder to your calendar of choice. Thank you! Jay From thierry at openstack.org Wed Aug 8 17:14:44 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 8 Aug 2018 19:14:44 +0200 Subject: [openstack-dev] PTG Denver Horns In-Reply-To: <22883a13-da31-b029-ca5f-104e1f247673@inaugust.com> References: <20180808050117.6rmi4k4ubqg4ntem@gentoo.org> <20180808101828.g3luqyef7gy6q5kp@pacific.linksys.moosehall> <1533724211.668141.1467312904.7046A12D@webmail.messagingengine.com> <20180808141243.w4hw7zcptrahqovm@yuggoth.org> <22883a13-da31-b029-ca5f-104e1f247673@inaugust.com> Message-ID: <1a9ab68d-464b-8807-36ff-85b87f09a777@openstack.org> Monty Taylor wrote: > On 08/08/2018 09:12 AM, Jeremy Stanley wrote: >> Speaking of which, is it too soon to put in bids to name the Denver >> summit and associated release in 2019 "OpenStack Train"? I feel like >> we're all honorary railroad engineers by now. > > It seems like a good opportunity to apply the Brian Waldon exception. I'm not even sure we need to apply the Brian Waldon exception, as noisy trains seem to be a permanent geographic feature of Denver. -- Thierry Carrez (ttx) From sean.mcginnis at gmx.com Wed Aug 8 17:15:26 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 8 Aug 2018 17:15:26 +0000 Subject: [openstack-dev] [cinder][api] strict schema validation and microversioning In-Reply-To: References: Message-ID: <20180808171525.GA3281@devvm1> On Tue, Aug 07, 2018 at 05:27:06PM -0500, Monty Taylor wrote: > On 08/07/2018 05:03 PM, Akihiro Motoki wrote: > >Hi Cinder and API-SIG folks, > > > >During reviewing a horizon bug [0], I noticed the behavior of Cinder API > >3.0 was changed. > >Cinder introduced more strict schema validation for creating/updating > >volume encryption type > >during Rocky and a new micro version 3.53 was introduced[1]. 
> > > >Previously, Cinder API like 3.0 accepts unused fields in POST requests > >but after [1] landed unused fields are now rejected even when Cinder API > >3.0 is used. > >In my understanding on the microversioning, the existing behavior for > >older versions should be kept. > >Is it correct? > > I agree with your assessment that 3.0 was used there - and also that I would > expect the api validation to only change if 3.53 microversion was used. > I filed a bug to track this: https://bugs.launchpad.net/cinder/+bug/1786054 But something doesn't seem right from what I've seen. I've put up a patch to add some extra unit testing around this. I expected some of those unit tests to fail, but everything seemed happy and working the way it is supposed to, with versions prior to 3.53 accepting anything and 3.53 or later rejecting extra parameters. Since that didn't work, I tried reproducing this against a running system using curl. With no version specified (defaulting to the base 3.0 microversion) creation succeeded: curl -g -i -X POST http://192.168.1.234/volume/v3/95ae21ce92a34b3c92601f3304ea0a46/volumes -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: $OS_TOKEN" -d '{"volume": {"backup_id": null, "description": null, "multiattach": false, "source_volid": null, "consistencygroup_id": null, "snapshot_id": null, "size": 1, "name": "New", "imageRef": null, "availability_zone": null, "volume_type": null, "metadata": {}, "project_id": "testing", "junk": "garbage"}}' I then tried specifying the microversion that introduces the strict schema checking to make sure I was able to get the appropriate failure, which worked as expected: curl -g -i -X POST http://192.168.1.234/volume/v3/95ae21ce92a34b3c92601f3304ea0a46/volumes -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: $OS_TOKEN" -d '{"volume": {"backup_id": null, "description": null, "multiattach": false, "source_volid": null, 
"consistencygroup_id": null, "snapshot_id": null, "size": 1, "name": "New-mv353", "imageRef": null, "availability_zone": null, "volume_type": null, "metadata": {}, "project_id": "testing", "junk": "garbage"}}' -H "OpenStack-API-Version: volume 3.53" HTTP/1.1 400 Bad Request ... And to test boundary conditions, I then specified the microversion just prior to the one that enabled strict checking: curl -g -i -X POST http://192.168.1.234/volume/v3/95ae21ce92a34b3c92601f3304ea0a46/volumes -H "Ac "Content-Type: application/json" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: $OS_TOKEN" -d '{"volume": {"backup_id": null, "description": null, "multiattach": false, "source_volid": null, "consistencygroup_id": null, "snapshot_id": null, "size": 1, "name": "New-mv352", "imageRef": null, "availability_zone": null, "volume_type": null, "metadata": {}, "project_id": "testing", "junk": "garbage"}}' -H "OpenStack-API-Version: volume 3.52" HTTP/1.1 202 Accepted In all cases except the strict checking one, the volume was created successfully even though the junk extra parameters ("project_id": "testing", "junk": "garbage") were provided. So I'm missing something here. Is it possible horizon is requesting the latest API version and not defaulting to 3.0? Sean From jtomasek at redhat.com Wed Aug 8 17:45:44 2018 From: jtomasek at redhat.com (Jiri Tomasek) Date: Wed, 8 Aug 2018 19:45:44 +0200 Subject: [openstack-dev] [tripleo] Patches to speed up plan operations In-Reply-To: References: Message-ID: Hello, thanks for bringing this up. I am going to try to test this patch with TripleO UI tomorrow. Without properly looking at the patch, questions I would like to get answers for are: How is this going to affect ways to create/update deployment plan? 
Currently a user is able to create a deployment plan by: - not providing any files - creating a deployment plan from the default files in /usr/share/openstack-tripleo-heat-templates - providing a tarball - providing a local directory of files to create the plan from - providing a git repository link These changes will have an impact on certain TripleO UI operations where (in rare cases) we reach directly for a swift object. IIUC it seems we are deciding to consider the deployment plan as a black box packed in a tarball, which I quite like; we'll need to provide a standard way to provide custom files to the plan. How is this going to affect the CLI vs GUI workflow? Currently the CLI creates the plan as part of the deploy command, whereas the GUI starts its workflow by selecting/creating a deployment plan, the whole configuration is performed on the deployment plan, and then the deployment plan gets deployed. We are aiming to introduce CLI commands to consolidate the behaviour of both clients to what the GUI workflow is currently. I am going to try to find answers to these questions and identify potential problems in the next couple of days. -- Jirka On Tue, Aug 7, 2018 at 5:34 PM Dan Prince wrote: > Thanks for taking this on Ian! I'm fully on board with the effort. I > like the consolidation and performance improvements. Storing t-h-t > templates in Swift worked okay 3-4 years ago. Now that we have more > templates, many of which need .j2 rendering, the storage there has > become quite a bottleneck. > > Additionally, since we'd be sending commands to Heat via local > filesystem template storage we could consider using softlinks again > within t-h-t which should help with refactoring and deprecation > efforts. > > Dan > On Wed, Aug 1, 2018 at 7:35 PM Ian Main wrote: > > > > > > Hey folks! > > > > So I've been working on some patches to speed up plan operations in > TripleO. 
This was originally driven by the UI needing to be able to > perform a 'plan upload' in something less than several minutes. :) > > > > https://review.openstack.org/#/c/581153/ > > https://review.openstack.org/#/c/581141/ > > > > I have a functioning set of patches, and it actually cuts over 2 minutes > off the overcloud deployment time. > > > > Without patch: > > + openstack overcloud plan create --templates > /home/stack/tripleo-heat-templates/ overcloud > > Creating Swift container to store the plan > > Creating plan from template files in: /home/stack/tripleo-heat-templates/ > > Plan created. > > real 3m3.415s > > > > With patch: > > + openstack overcloud plan create --templates > /home/stack/tripleo-heat-templates/ overcloud > > Creating Swift container to store the plan > > Creating plan from template files in: /home/stack/tripleo-heat-templates/ > > Plan created. > > real 0m44.694s > > > > This is on VMs. On real hardware it now takes something like 15-20 > seconds to do the plan upload, which is much more manageable from the UI > standpoint. > > > > Some things about what this patch does: > > > > - It makes use of process-templates.py (written for the undercloud) to > process the jinjafied templates. This reduces replication with the > existing version in the code base and is very fast as it's all done on > local disk. > > - It stores the bulk of the templates as a tarball in swift. Any > individual files in swift take precedence over the contents of the tarball > so it should be backwards compatible. This is a great speed-up as we're > not accessing a lot of individual files in swift. > > > > There's still some work to do: cleaning up and fixing the unit tests, > testing upgrades, etc. I just wanted to get some feedback on the general > idea and hopefully some reviews and/or help - especially with the unit test > stuff. > > > > Thanks everyone! 
> > > > Ian > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Wed Aug 8 18:38:19 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 8 Aug 2018 14:38:19 -0400 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <20180808142933.nzul7maewha4ptol@sileht.net> References: <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <20180806190241.GA3368@devvm1> <30fd7e68-3a58-2ab1-bba0-c4c5e0eb2bf5@debian.org> <0f2f9e10-4419-8fc0-39a9-737ba2be00f4@redhat.com> <20180808142933.nzul7maewha4ptol@sileht.net> Message-ID: On Wed, Aug 8, 2018 at 10:29 AM, Mehdi Abaakouk wrote: > On Wed, Aug 08, 2018 at 08:35:20AM -0400, Corey Bryant wrote: > >> On Wed, Aug 8, 2018 at 3:43 AM, Thomas Goirand wrote: >> >> On 08/07/2018 06:10 PM, Corey Bryant wrote: >>> > I was concerned that there wouldn't be any >>> > gating until Ubuntu 20.04 (April 2020) >>> Same over here. I'm concerned that it takes another 2 years, which >>> really, we cannot afford. >>> >>> > but Py3.7 is available in bionic today. >>> >> > Yeah but it's the beta3 version. > > Yes, that's something I mentioned before but it was snipped from the conversation. We're going to try and get that updated. 
Corey -- > Mehdi Abaakouk > mail: sileht at sileht.net > irc: sileht > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Aug 8 19:44:00 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 8 Aug 2018 19:44:00 +0000 Subject: [openstack-dev] [cinder][api] strict schema validation and microversioning In-Reply-To: <20180808171525.GA3281@devvm1> References: <20180808171525.GA3281@devvm1> Message-ID: <20180808194400.GA15936@devvm1> On Wed, Aug 08, 2018 at 05:15:26PM +0000, Sean McGinnis wrote: > On Tue, Aug 07, 2018 at 05:27:06PM -0500, Monty Taylor wrote: > > On 08/07/2018 05:03 PM, Akihiro Motoki wrote: > > >Hi Cinder and API-SIG folks, > > > > > >During reviewing a horizon bug [0], I noticed the behavior of Cinder API > > >3.0 was changed. > > >Cinder introduced more strict schema validation for creating/updating > > >volume encryption type > > >during Rocky and a new micro version 3.53 was introduced[1]. > > > > > >Previously, Cinder API like 3.0 accepts unused fields in POST requests > > >but after [1] landed unused fields are now rejected even when Cinder API > > >3.0 is used. > > >In my understanding on the microversioning, the existing behavior for > > >older versions should be kept. > > >Is it correct? > > > > I agree with your assessment that 3.0 was used there - and also that I would > > expect the api validation to only change if 3.53 microversion was used. > > > > I filed a bug to track this: > > https://bugs.launchpad.net/cinder/+bug/1786054 > Sorry, between lack of attention to detail (lack of coffee?) and an incorrect link, I think I went down the wrong rabbit hole. 
The change was actually introduced in [0]. I have submitted [1] to allow the additional parameters in the volume type encryption API. This was definitely an oversight when we allowed that one through. Apologies for the hassle this has caused. [0] https://review.openstack.org/#/c/561140/ [1] https://review.openstack.org/#/c/590014/ From ekcs.openstack at gmail.com Wed Aug 8 20:14:08 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 8 Aug 2018 13:14:08 -0700 Subject: [openstack-dev] [requirements][release][congress] FFE request to bump python-monascaclient to 1.12.1 Message-ID: python-monascaclient 1.12.0 paired with osc-lib 1.11.0 seems to experience a problem around Session. python-monascaclient 1.12.1 fixes the issue [1]. So I'd like to bump congress requirements to python-monascaclient>=1.12.1 if it is not disruptive to packaging [2]. If it is disruptive, we can just note it as a known issue. Thanks! [1] https://review.openstack.org/#/c/579139/ [2] https://review.openstack.org/#/c/590021/ From ekcs.openstack at gmail.com Wed Aug 8 20:20:47 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 8 Aug 2018 13:20:47 -0700 Subject: [openstack-dev] [requirements][release][congress] FFE request to bump doc req sphinx to 1.7.3 Message-ID: Lower versions of sphinx seem to experience a problem where the exclude_patterns option is not in effect. I'd like to bump the docs/requirements to sphinx>=1.7.3 if it is not disruptive to packaging. If it is disruptive to packaging we can leave it as is. Thanks! 
https://review.openstack.org/#/c/589995/ From prometheanfire at gentoo.org Wed Aug 8 21:06:50 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 8 Aug 2018 16:06:50 -0500 Subject: [openstack-dev] [requirements][release][congress] FFE request to bump python-monascaclient to 1.12.1 In-Reply-To: References: Message-ID: <20180808210650.xajr5o56ujjkeaos@gentoo.org> On 18-08-08 13:14:08, Eric K wrote: > python-monascaclient 1.12.0 paired with osc-lib 1.11.0 seems to > experience a problem around Session. > > python-monascaclient 1.12.1 fixes the issue [1]. So I'd like to bump > congress requirements to python-monascaclient>=1.12.1 if it is not > disruptive to packaging [2]. If it is disruptive, we can just note it > as a known issue. Which project(s) will need the new minimum? Those projects would need re-releases. Then my question becomes if those projects need a raised minimum too, and for which project(s). And so on. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From prometheanfire at gentoo.org Wed Aug 8 21:07:35 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 8 Aug 2018 16:07:35 -0500 Subject: [openstack-dev] [requirements][release][congress] FFE request to bump doc req sphinx to 1.7.3 In-Reply-To: References: Message-ID: <20180808210735.upu25sleged6yses@gentoo.org> On 18-08-08 13:20:47, Eric K wrote: > Lower versions of sphinx seem to experience a problem where the > exclude_patterns option is not in effect. I'd like to bump the > docs/requirements to sphinx>=1.7.3 if it is not disruptive to > packaging. If it is disruptive to packaging we can leave it as is. > > Thanks! > > https://review.openstack.org/#/c/589995/ > Which project(s) will need the new minimum? Those projects would need re-releases. 
Then my question becomes if those projects need a raised minimum too, and for which project(s). And so on. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kennelson11 at gmail.com Wed Aug 8 23:24:23 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 8 Aug 2018 16:24:23 -0700 Subject: [openstack-dev] [all][tc][election] Timing of the Upcoming Stein TC election In-Reply-To: <20180808043930.GK9540@thor.bakeyournoodle.com> References: <20180808043930.GK9540@thor.bakeyournoodle.com> Message-ID: Hello! On Tue, Aug 7, 2018 at 9:39 PM Tony Breeds wrote: > Hello all, > With the PTL elections behind us it's time to start looking at the > TC election. Our charter[1] says: > > The election is held no later than 6 weeks prior to each OpenStack > Summit (on or before ‘S-6’ week), with elections held open for no less > than four business days. > > Assuming we have the same structure, that gives us a timeline of: > > Summit is at: 2018-11-13 > Latest possible completion is at: 2018-10-02 > Moving back to Tuesday: 2018-10-02 > TC Election from 2018-09-25T23:45 to 2018-10-02T23:45 > TC Campaigning from 2018-09-18T23:45 to 2018-09-25T23:45 > TC Nominations from 2018-09-11T23:45 to 2018-09-18T23:45 > > This puts the bulk of the nomination period during the PTG, which is > sub-optimal as the nominations cause a distraction from the PTG but more > so because the campaigning will coincide with travel home, and some > community members take vacation along with the PTG. > > So I'd like to bring up the idea of moving the election forward a > little so that it's actually the campaigning period that overlaps with > the PTG: > > TC Election from 2018-09-18T23:45 to 2018-09-27T23:45 > TC Campaigning from 2018-09-06T23:45 to 2018-09-18T23:45 > TC Nominations from 2018-08-30T23:45 to 2018-09-06T23:45 > +2! 
> This gives us longer campaigning and election periods. > > There are some advantages to doing this: > > * A panel style Q&A could be held formally or informally ;P > * There's improved scope for incoming, outgoing and staying put TC > members to interact in a high bandwidth way. > * In person/private discussions with TC candidates/members. > > However it isn't without downsides: > > * Election fatigue. We've just had the PTL elections and the UC > elections are currently running. Less break before the TC elections > may not be a good thing. > Simultaneously, would be nice to get it done with. > * TC candidates that can't travel to the PTG could be disadvantaged > We also can and should post things on the ML for longer discussion as the 'debate' likely wouldn't be much longer than over lunch one day. > * The campaigning would all happen at the PTG and not on the mailing > list disadvantaging community members not at the PTG. > > So thoughts? > I think this is a good plan :) Ready to +2 the config change as soon as I see it. > > Yours Tony. > > [1] https://governance.openstack.org/tc/reference/charter.html > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Thu Aug 9 01:00:14 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 08 Aug 2018 18:00:14 -0700 Subject: [openstack-dev] [requirements][release][congress] FFE request to bump doc req sphinx to 1.7.3 In-Reply-To: <20180808210735.upu25sleged6yses@gentoo.org> References: <20180808210735.upu25sleged6yses@gentoo.org> Message-ID: Requesting the raised minimum just for openstack/congress. 
https://review.openstack.org/#/c/589995/ No re-release required; it'll just take effect in RC1. On 8/8/18, 2:07 PM, "Matthew Thode" wrote: >On 18-08-08 13:20:47, Eric K wrote: >> Lower versions of sphinx seem to experience a problem where the >> exclude_patterns option is not in effect. I'd like to bump the >> docs/requirements to sphinx>=1.7.3 if it is not disruptive to >> packaging. If it is disruptive to packaging we can leave it as is. >> >> Thanks! >> >> https://review.openstack.org/#/c/589995/ >> > >Which project(s) will need the new minimum? Those projects would need >re-releases. Then my question becomes if those projects need a >raised minimum too, and for which project(s). And so on. > >-- >Matthew Thode (prometheanfire) >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ekcs.openstack at gmail.com Thu Aug 9 01:00:35 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 08 Aug 2018 18:00:35 -0700 Subject: [openstack-dev] [requirements][release][congress] FFE request to bump python-monascaclient to 1.12.1 In-Reply-To: <20180808210650.xajr5o56ujjkeaos@gentoo.org> References: <20180808210650.xajr5o56ujjkeaos@gentoo.org> Message-ID: Requesting the raised minimum just for openstack/congress. https://review.openstack.org/#/c/590021/ No re-release required; it'll just take effect in RC1. On 8/8/18, 2:06 PM, "Matthew Thode" wrote: >On 18-08-08 13:14:08, Eric K wrote: >> python-monascaclient 1.12.0 paired with osc-lib 1.11.0 seems to >> experience a problem around Session. >> >> python-monascaclient 1.12.1 fixes the issue [1]. So I'd like to bump >> congress requirements to python-monascaclient>=1.12.1 if it is not >> disruptive to packaging [2]. If it is disruptive, we can just note it >> as a known issue. 
> >Which project(s) will need the new minimum? Those projects would need >re-releases. Then my question becomes if those projects need a >raised minimum too, and for which project(s). And so on. > >-- >Matthew Thode (prometheanfire) >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gmann at ghanshyammann.com Thu Aug 9 02:28:51 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 09 Aug 2018 11:28:51 +0900 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: <1651c83e77b.11b6c98f4105864.6317928039666867682@ghanshyammann.com> Thanks Victoria for such great work and coordination. You have done remarkable work in the internships program. -gmann ---- On Wed, 08 Aug 2018 08:47:28 +0900 Victoria Martínez de la Cruz wrote ---- > Hi all, > I'm reaching you out to let you know that I'll be stepping down as coordinator for OpenStack next round. I had been contributing to this effort for several rounds now and I believe is a good moment for somebody else to take the lead. You all know how important is Outreachy to me and I'm grateful for all the amazing things I've done as part of the Outreachy program and all the great people I've met in the way. I plan to keep involved with the internships but leave the coordination tasks to somebody else. > If you are interested in becoming an Outreachy coordinator, let me know and I can share my experience and provide some guidance. 
> Thanks, > Victoria __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From remo at rm.ht Thu Aug 9 02:53:29 2018 From: remo at rm.ht (Remo Mattei) Date: Wed, 8 Aug 2018 19:53:29 -0700 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: <1651c83e77b.11b6c98f4105864.6317928039666867682@ghanshyammann.com> References: <1651c83e77b.11b6c98f4105864.6317928039666867682@ghanshyammann.com> Message-ID: <4167B95D-5499-479B-9A09-A822BEEFD802@rm.ht> Great Job!! Victoria. Ciao > On Aug 8, 2018, at 19:28, Ghanshyam Mann wrote: > > Thanks Victoria for such a great work and well coordinated. You have done remarkable work in internships program. > > -gmann > > ---- On Wed, 08 Aug 2018 08:47:28 +0900 Victoria Martínez de la Cruz wrote ---- >> Hi all, >> I'm reaching you out to let you know that I'll be stepping down as coordinator for OpenStack next round. I had been contributing to this effort for several rounds now and I believe is a good moment for somebody else to take the lead. You all know how important is Outreachy to me and I'm grateful for all the amazing things I've done as part of the Outreachy program and all the great people I've met in the way. I plan to keep involved with the internships but leave the coordination tasks to somebody else. >> If you are interested in becoming an Outreachy coordinator, let me know and I can share my experience and provide some guidance. 
>> Thanks, >> Victoria __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Thu Aug 9 02:53:54 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 8 Aug 2018 22:53:54 -0400 Subject: [openstack-dev] [nova][placement] Excessive WARNING level log messages in placement-api Message-ID: <4432b5e3-1f42-20e0-968a-5a4e7636d60a@gmail.com> For evidence, see: http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING thousands of these are filling the logs with WARNING-level log messages, making it difficult to find anything: Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060 devstack at placement-api.service[14403]: WARNING py.warnings [req-a809b022-59af-4628-be73-488cfec3187d req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement] /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896: UserWarning: Policy placement:resource_providers:list failed scope check. The token used to make the request was project scoped but the policy requires ['system'] scope. This behavior may change in the future where using the intended scope is required Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060 devstack at placement-api.service[14403]: warnings.warn(msg) Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060 devstack at placement-api.service[14403]: Is there any way we can get rid of these? 
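The flood described above comes from oslo.policy emitting the same UserWarning over and over via warnings.warn(msg). One generic way to quiet such repeated warnings, sketched here with only Python's stdlib warnings filters and a hypothetical stand-in function (this is not oslo.policy's actual code path, and whether suppressing the deprecation signal is appropriate is a separate question), is an explicit ignore filter scoped to the message:

```python
import warnings


def scope_check():
    # Hypothetical stand-in for oslo.policy's enforcement path, which
    # calls warnings.warn() when a token fails the policy's scope check.
    # The message text is illustrative, copied from the log excerpt above.
    warnings.warn("Policy placement:resource_providers:list failed scope "
                  "check.", UserWarning)


# Without any filter, every occurrence is emitted/recorded.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # disable per-location de-duplication
    for _ in range(3):
        scope_check()
    unfiltered = len(caught)

# An "ignore" filter matching just this message drops the noisy warning
# while unrelated UserWarnings still get through. filterwarnings() inserts
# the rule at the front of the filter list, so it wins over "always".
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.filterwarnings("ignore",
                            message=r".*failed scope check",
                            category=UserWarning)
    scope_check()
    warnings.warn("something else entirely", UserWarning)
    filtered = [str(w.message) for w in caught]

print(unfiltered)  # 3
print(filtered)    # ['something else entirely']
```

The same effect is available without code changes through the environment, e.g. PYTHONWARNINGS="ignore::UserWarning", at the heavier cost of hiding every UserWarning in the process.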
Thanks, -jay From prometheanfire at gentoo.org Thu Aug 9 03:03:34 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 8 Aug 2018 22:03:34 -0500 Subject: [openstack-dev] [requirements][release][congress] FFE request to bump python-monascaclient to 1.12.1 In-Reply-To: References: <20180808210650.xajr5o56ujjkeaos@gentoo.org> Message-ID: <20180809030334.nu2r5tcqn56ty4vt@gentoo.org> On 18-08-08 18:00:35, Eric K wrote: > Requesting the raised minimum just for openstack/congress. > https://review.openstack.org/#/c/590021/ > > No re-release required; it'll just take effect in RC1. > > > > On 8/8/18, 2:06 PM, "Matthew Thode" wrote: > > >On 18-08-08 13:14:08, Eric K wrote: > >> python-monascaclient 1.12.0 paired with osc-lib 1.11.0 seems to > >> experience a problem around Session. > >> > >> python-monascaclient 1.12.1 fixes the issue [1]. So I'd like to bump > >> congress requirements to python-monascaclient>=1.12.1 if it is not > >> disruptive to packaging [2]. If it is disruptive, we can just note it > >> as a known issue. > > > >Which project(s) will need the new minimum? Those projects would need > >re-releases. My question then becomes whether those projects need a > >raised minimum too, and for which project(s). And so on. > > SGTM then, ack by requirements -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From prometheanfire at gentoo.org Thu Aug 9 03:04:01 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 8 Aug 2018 22:04:01 -0500 Subject: [openstack-dev] [requirements][release][congress] FFE request to bump doc req sphinx to 1.7.3 In-Reply-To: References: <20180808210735.upu25sleged6yses@gentoo.org> Message-ID: <20180809030401.ecwgvi7uzuy2jbm6@gentoo.org> On 18-08-08 18:00:14, Eric K wrote: > Requesting the raised minimum just for openstack/congress.
> https://review.openstack.org/#/c/589995/ > > No re-release required; it'll just take effect in RC1. > > On 8/8/18, 2:07 PM, "Matthew Thode" wrote: > > >On 18-08-08 13:20:47, Eric K wrote: > >> Lower versions of sphinx seem to experience a problem where the > >> exclude_patterns option is not in effect. I'd like to bump the > >> docs/requirements to sphinx>=1.7.3 if it is not disruptive to > >> packaging. If it is disruptive to packaging we can leave it as is. > >> > >> Thanks! > >> > >> https://review.openstack.org/#/c/589995/ > >> > > > >Which project(s) will need the new minimum? Those projects would need > >re-releases. My question then becomes whether those projects need a > >raised minimum too, and for which project(s). And so on. > > ack from requirements then -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From singh.surya64mnnit at gmail.com Thu Aug 9 05:56:46 2018 From: singh.surya64mnnit at gmail.com (Surya Singh) Date: Thu, 9 Aug 2018 11:26:46 +0530 Subject: [openstack-dev] [kolla] Dropping core reviewer In-Reply-To: References: <1533652097121.31214@cisco.com> Message-ID: Words are not strong enough to appreciate your immense contribution and help in the OpenStack community. Projects like Kolla, Heat and Magnum are still rocking, and many more are to come from you in the future. Hope to see you around. Wish you all the luck !! -- Surya On Wed, Aug 8, 2018 at 6:15 PM Paul Bourke wrote: > +1. Will always have good memories of when Steve was getting the project > off the ground. Thanks Steve for doing a great job of building the > community around Kolla, and for all your help in general! > > Best of luck, > -Paul > > On 08/08/18 12:23, Eduardo Gonzalez wrote: > > Steve, > > > > It is sad to see you leaving the kolla core team; hope to still see you around > > IRC and Summit/PTGs.
> > > > I truly appreciate your leadership, guidance and commitment to make > > kolla the great project it is now. > > > > Best luck on your new projects and board of directors. > > > > Regards > > > > > > > > > > > > 2018-08-07 16:28 GMT+02:00 Steven Dake (stdake) > >: > > > > Kollians, > > > > > > Many of you that know me well know my feelings towards participating > > as a core reviewer in a project. Folks with the ability to +2/+W > > gerrit changes can sometimes unintentionally harm a codebase if they > > are not consistently reviewing and maintaining codebase context. I > > also believe in leading an exception-free life, and I'm no exception > > to my own rules. As I am not reviewing Kolla actively given my > > OpenStack individually elected board of directors service and other > > responsibilities, I am dropping core reviewer ability for the Kolla > > repositories. > > > > > > I want to take a moment to thank the thousands of people that have > > contributed and shaped Kolla into the modern deployment system for > > OpenStack that it is today. I personally find Kolla to be my finest > > body of work as a leader. Kolla would not have been possible > > without the involvement of the OpenStack global community working > > together to resolve the operational pain points of OpenStack. Thank > > you for your contributions. > > > > > > Finally, quoting Thierry [1] from our initial application to > > OpenStack, " ... Long live Kolla!" > > > > > > Cheers! 
> > > > -steve > > > > > > [1] https://review.openstack.org/#/c/206789/ > > > > > > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pratapagoutham at gmail.com Thu Aug 9 07:32:12 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Thu, 9 Aug 2018 13:02:12 +0530 Subject: [openstack-dev] [rally] There is no Platform plugin with name: 'existing@openstack'" Message-ID: Hi Rally Team, I have been trying to setup rally version v1.1.0 I could successfully install rally but when i try to create the deployment i am getting this error *ubuntu at ubuntu:~$ rally deployment create --file=existing.json --name=existing* *Env manager got invalid spec:* * ["There is no Platform plugin with name: 'existing at openstack'"]* *ubuntu at ubuntu:~$ rally version * *1.1.0* can any one help me the issue and the fix ? Thanks in advance. -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eumel at arcor.de Thu Aug 9 08:10:01 2018 From: eumel at arcor.de (Frank Kloeker) Date: Thu, 09 Aug 2018 10:10:01 +0200 Subject: [openstack-dev] [I18n] Translation Imports Message-ID: <8ca784f1bcf266d5305cd8f03498058f@arcor.de> Hello, in case you missed it: translation import jobs are back today. Please merge "Imported Translations from Zanata" as soon as possible so we can move forward quickly with the Rocky translation. If you have already branched to stable/rocky, please be aware that we manage release notes translations only in the master branch. Special thanks to openstack-infra and the zuul team for the fast error handling. kind regards Frank From aj at suse.com Thu Aug 9 09:04:55 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 9 Aug 2018 11:04:55 +0200 Subject: [openstack-dev] [I18n][all] Translation Imports In-Reply-To: <8ca784f1bcf266d5305cd8f03498058f@arcor.de> References: <8ca784f1bcf266d5305cd8f03498058f@arcor.de> Message-ID: On 2018-08-09 10:10, Frank Kloeker wrote: > Hello, > > in case you missed it: translation import jobs are back today. Please > merge "Imported Translations from Zanata" as soon as possible so we can > move forward quickly with the Rocky translation. > If you have already branched to stable/rocky, please be aware that we > manage release notes translations only in the master branch. This means that the import automatically deletes the releasenotes translations on any stable branch. The file removals you see are fine. > Special thanks to openstack-infra and the zuul team for the fast error > handling. Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr.
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From andr.kurilin at gmail.com Thu Aug 9 09:40:54 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Thu, 9 Aug 2018 12:40:54 +0300 Subject: [openstack-dev] [rally] There is no Platform plugin with name: 'existing@openstack'" In-Reply-To: References: Message-ID: Hi Goutham! There are 2 issues which can result in such an error: 1) You did not read the change log for Rally (see https://github.com/openstack/rally/blob/master/CHANGELOG.rst; all versions are included there). We no longer provide in-tree OpenStack plugins starting from Rally 1.0.0. You need to install the rally-openstack package (https://pypi.python.org/pypi/rally-openstack). It has Rally as a dependency, so if you are preparing the environment from scratch -> just install the rally-openstack package. 2) There are one or more conflicts in the package requirements. Run `rally plugin show Dummy.openstack` and check the logging messages. They should point out any errors encountered while loading plugins. On Thu, Aug 9, 2018 at 10:32, Goutham Pratapa wrote: > Hi Rally Team, > > I have been trying to setup rally version v1.1.0 > > I could successfully install rally but when i try to create the deployment > i am getting this error > > > > *ubuntu at ubuntu:~$ rally deployment create --file=existing.json > --name=existing* > > *Env manager got invalid spec:* > > > * ["There is no Platform plugin with name: 'existing at openstack'"]* > > > *ubuntu at ubuntu:~$ rally version * > > *1.1.0* > > can any one help me the issue and the fix ? > > Thanks in advance. > > -- > Cheers !!!
> Goutham Pratapa > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Thu Aug 9 09:41:38 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Thu, 09 Aug 2018 11:41:38 +0200 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari] Possible deprecation of Nova's legacy notification interface Message-ID: <1533807698.26377.7@smtp.office365.com> Dear Nova notification consumers! The Nova team has made progress with the new versioned notification interface [1], and it has almost reached feature parity [2] with the legacy, unversioned one. The Nova team will therefore discuss the deprecation of the legacy interface at the upcoming PTG. Below is a list of projects (that we know of) consuming the legacy interface, and we would like to know whether any of these projects plan to switch over to the new interface in the foreseeable future, so we can make a well-informed decision about the deprecation.
* Searchlight [3] - it is in maintenance mode so I guess the answer is no * Designate [4] * Telemetry [5] * Mistral [6] * Blazar [7] * Watcher [8] - it seems Watcher uses both legacy and versioned nova notifications * Masakari - I'm not sure Masakari depends on nova notifications or not Cheers, gibi [1] https://docs.openstack.org/nova/latest/reference/notifications.html [2] http://burndown.peermore.com/nova-notification/ [3] https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py [4] https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py [5] https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2 [6] https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2 [7] https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst [8] https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335 From pratapagoutham at gmail.com Thu Aug 9 09:49:33 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Thu, 9 Aug 2018 15:19:33 +0530 Subject: [openstack-dev] [rally] There is no Platform plugin with name: 'existing@openstack'" In-Reply-To: References: Message-ID: Hi Andrey, Thanks it worked. On Thu, Aug 9, 2018 at 3:10 PM, Andrey Kurilin wrote: > Hi Goutham! > > There are 2 issues which can result in such error: > > 1) You did not read change log for Rally (see > https://github.com/openstack/rally/blob/master/CHANGELOG.rst, all > versions are included there). We do not provide in-tree OpenStack plugins > started from Rally 1.0.0 .You need to install rally-openstack package ( > https://pypi.python.org/pypi/rally-openstack) . It has Rally as a > dependency, so if you are preparing the environment from the scratch -> > just install rally-openstack package. 
> > 2) There are one or many conflicts in package requirements. Run `rally > plugin show Dummy.openstack` and see the logging messages. It should point > out the errors of loading plugins if there is something. > > чт, 9 авг. 2018 г. в 10:32, Goutham Pratapa : > >> Hi Rally Team, >> >> I have been trying to setup rally version v1.1.0 >> >> I could successfully install rally but when i try to create the >> deployment i am getting this error >> >> >> >> *ubuntu at ubuntu:~$ rally deployment create --file=existing.json >> --name=existing* >> >> *Env manager got invalid spec:* >> >> >> * ["There is no Platform plugin with name: 'existing at openstack'"]* >> >> >> *ubuntu at ubuntu:~$ rally version * >> >> *1.1.0* >> >> can any one help me the issue and the fix ? >> >> Thanks in advance. >> >> -- >> Cheers !!! >> Goutham Pratapa >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Best regards, > Andrey Kurilin. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Aug 9 09:53:33 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 09 Aug 2018 18:53:33 +0900 Subject: [openstack-dev] [nova] API updates week 02-08 Message-ID: <1651e1b0978.b6c86607112483.1442753467964748354@ghanshyammann.com> Hi All, Please find the Nova API highlights of this week. 
Weekly Office Hour: =============== What we discussed this week: - Discussed the granular policy spec, to be updated now that the default roles are present. - Discussed the keypair quota usage bug; only a doc update can be done for now. The patch is up at https://review.openstack.org/#/c/590081/ - Discussed the simple-tenant-usage bug about a value error. We need to handle the 500 error for non-iso8601 time format input. The bug was reported on Pike, but that was due to an env issue, as the author confirmed. I also tried this on master and it is not reproducible. Anyway, we need to handle the 500 in this API. I will push a patch for that. : https://bugs.launchpad.net/nova/+bug/1783338 Planned Features : ============== Below are the API related features which did not make it into Rocky and need to be proposed for Stein. Not much progress to share on these as of now. 1. Servers Ips non-unique network names : - https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names - Spec Merged - https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged) - Weekly Progress: No progress. Needs to be reopened for Stein 2. Volume multiattach enhancements: - https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements - https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) - Weekly Progress: No progress. 3. API Extensions merge work - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-stein - https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-stein - Weekly Progress: Done for Rocky, and a new BP is open for the remaining work. I will remove the deprecated extensions policy first, which will be cleaner. 4. Handling a down cell - https://blueprints.launchpad.net/nova/+spec/handling-down-cell - https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged) - Weekly Progress: No progress.
Need to open for stein Bugs: ==== This week Bug Progress: https://etherpad.openstack.org/p/nova-api-weekly-bug-report Critical: 0->0 High importance: 2->1 By Status: New: 1->0 Confirmed/Triage: 30-> 32 In-progress: 34->32 Incomplete: 4->4 ===== Total: 68->68 NOTE- There might be some bug which are not tagged as 'api' or 'api-ref', those are not in above list. Tag such bugs so that we can keep our eyes. -gmann From andr.kurilin at gmail.com Thu Aug 9 10:35:53 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Thu, 9 Aug 2018 13:35:53 +0300 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <1533735233-sup-6263@lrrr.local> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <1533735233-sup-6263@lrrr.local> Message-ID: Hi Doug! I'm ready to port our job to openstack-zuul-jobs repo, but I expect that Community will not accept it. The result of rally unittests is different between environments with python 3.7 final release and python 3.7.0~b3 . There is at least one failed test at python 3.7.0~b3 which is not reproducible at py27,py34,py35,py36,py37-final , so I'm not sure that it is a good decision to add py37 job based on ubuntu-bionic. As for Rally, I applied the easiest thing which occurred to me - just use external python ppa (deadsnakes) to install the final release of Python 3.7. Such a way is satisfying for Rally community and it cannot be used as the main one for the whole OpenStack. ср, 8 авг. 2018 г. 
в 16:35, Doug Hellmann : > Excerpts from Andrey Kurilin's message of 2018-08-08 15:25:01 +0300: > > Thanks Thomas for pointing to the issue, I checked it locally and here is > > an update for openstack/rally (rally framework without in-tree OpenStack > > plugins) project: > > > > - added unittest job with py37 env > > It would be really useful if you could help set up a job definition in > openstack-zuul-jobs like we have for openstack-tox-py36 [1], so that other > projects can easily add the job, too. Do you have time to do that? > > Doug > > [1] > http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/jobs.yaml#n354 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Aug 9 10:42:52 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 10:42:52 -0000 Subject: [openstack-dev] trove-dashboard 11.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for trove-dashboard for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/trove-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/trove-dashboard/log/?h=stable/rocky Release notes for trove-dashboard can be found at: http://docs.openstack.org/releasenotes/trove-dashboard/ From jaosorior at gmail.com Thu Aug 9 10:44:41 2018 From: jaosorior at gmail.com (Juan Antonio Osorio) Date: Thu, 9 Aug 2018 13:44:41 +0300 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface In-Reply-To: <1533807698.26377.7@smtp.office365.com> References: <1533807698.26377.7@smtp.office365.com> Message-ID: We have a small project (novajoin) that still relies on unversioned notifications. Thanks for the notification, I hope we can migrate to versioned notifications by Stein. On Thu, Aug 9, 2018 at 12:41 PM, Balázs Gibizer wrote: > Dear Nova notification consumers! > > > The Nova team made progress with the new versioned notification interface > [1] and it is almost reached feature parity [2] with the legacy, > unversioned one. So Nova team will discuss on the upcoming PTG the > deprecation of the legacy interface. There is a list of projects (we know > of) consuming the legacy interface and we would like to know if any of > these projects plan to switch over to the new interface in the foreseeable > future so we can make a well informed decision about the deprecation. 
> > > * Searchlight [3] - it is in maintenance mode so I guess the answer is no > * Designate [4] > * Telemetry [5] > * Mistral [6] > * Blazar [7] > * Watcher [8] - it seems Watcher uses both legacy and versioned nova > notifications > * Masakari - I'm not sure Masakari depends on nova notifications or not > > Cheers, > gibi > > [1] https://docs.openstack.org/nova/latest/reference/notifications.html > [2] http://burndown.peermore.com/nova-notification/ > > [3] https://github.com/openstack/searchlight/blob/master/searchl > ight/elasticsearch/plugins/nova/notification_handler.py > [4] https://github.com/openstack/designate/blob/master/designate > /notification_handler/nova.py > [5] https://github.com/openstack/ceilometer/blob/master/ceilomet > er/pipeline/data/event_definitions.yaml#L2 > [6] https://github.com/openstack/mistral/blob/master/etc/event_d > efinitions.yml.sample#L2 > [7] https://github.com/openstack/blazar/blob/5526ed1f9b74d23b588 > 1a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst > [8] https://github.com/openstack/watcher/blob/master/watcher/dec > ision_engine/model/notification/nova.py#L335 > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Juan Antonio Osorio R. e-mail: jaosorior at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Aug 9 10:49:31 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 10:49:31 -0000 Subject: [openstack-dev] neutron-lbaas-dashboard 5.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for neutron-lbaas-dashboard for the end of the Rocky cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/neutron-lbaas-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/neutron-lbaas-dashboard/log/?h=stable/rocky Release notes for neutron-lbaas-dashboard can be found at: http://docs.openstack.org/releasenotes/neutron-lbaas-dashboard/ If you find an issue that could be considered release-critical, please file it at: https://storyboard.openstack.org/#!/project/907 and tag it *rocky-rc-potential* to bring it to the neutron-lbaas-dashboard release crew's attention. From no-reply at openstack.org Thu Aug 9 10:50:14 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 10:50:14 -0000 Subject: [openstack-dev] trove 10.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for trove for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/trove/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/trove/log/?h=stable/rocky Release notes for trove can be found at: http://docs.openstack.org/releasenotes/trove/ From no-reply at openstack.org Thu Aug 9 10:52:23 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 10:52:23 -0000 Subject: [openstack-dev] neutron-lbaas 13.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for neutron-lbaas for the end of the Rocky cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/neutron-lbaas/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/neutron-lbaas/log/?h=stable/rocky Release notes for neutron-lbaas can be found at: http://docs.openstack.org/releasenotes/neutron-lbaas/ From no-reply at openstack.org Thu Aug 9 10:53:37 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 10:53:37 -0000 Subject: [openstack-dev] octavia 3.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for octavia for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/octavia/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/octavia/log/?h=stable/rocky Release notes for octavia can be found at: http://docs.openstack.org/releasenotes/octavia/ From no-reply at openstack.org Thu Aug 9 10:54:48 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 10:54:48 -0000 Subject: [openstack-dev] octavia-dashboard 2.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for octavia-dashboard for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/octavia-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. 
You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/octavia-dashboard/log/?h=stable/rocky Release notes for octavia-dashboard can be found at: http://docs.openstack.org/releasenotes/octavia-dashboard/ If you find an issue that could be considered release-critical, please file it at: https://storyboard.openstack.org/#!/project/909 and tag it *rocky-rc-potential* to bring it to the octavia-dashboard release crew's attention. From linghucongsong at 163.com Thu Aug 9 12:04:40 2018 From: linghucongsong at 163.com (linghucongsong) Date: Thu, 9 Aug 2018 20:04:40 +0800 (CST) Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team Message-ID: <7212f283.12059.1651e931460.Coremail.linghucongsong@163.com> Hi team, I would like to nominate Zhuo Tang (ztang) as a tricircle core member. ztang has actively joined the discussion of feature development in our offline meetings and has contributed important blueprints since Rocky, like network deletion reliability and service function chaining. I really think his experience will help us substantially improve tricircle. By the way, the vote runs until 2018-8-16, Beijing time. Best Wishes! Baisen -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Aug 9 12:27:48 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 12:27:48 -0000 Subject: [openstack-dev] mistral-extra 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for mistral-extra for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/mistral-extra/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release.
You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/mistral-extra/log/?h=stable/rocky Release notes for mistral-extra can be found at: http://docs.openstack.org/releasenotes/mistral-extra/ From no-reply at openstack.org Thu Aug 9 12:28:03 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 12:28:03 -0000 Subject: [openstack-dev] mistral 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for mistral for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/mistral/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/mistral/log/?h=stable/rocky Release notes for mistral can be found at: http://docs.openstack.org/releasenotes/mistral/ From no-reply at openstack.org Thu Aug 9 12:32:28 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 12:32:28 -0000 Subject: [openstack-dev] mistral-dashboard 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for mistral-dashboard for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/mistral-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/mistral-dashboard/log/?h=stable/rocky Release notes for mistral-dashboard can be found at: http://docs.openstack.org/releasenotes/mistral-dashboard/ From no-reply at openstack.org Thu Aug 9 12:37:43 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 12:37:43 -0000 Subject: [openstack-dev] horizon 14.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for horizon for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/horizon/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/rocky Release notes for horizon can be found at: http://docs.openstack.org/releasenotes/horizon/ From no-reply at openstack.org Thu Aug 9 12:51:27 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 12:51:27 -0000 Subject: [openstack-dev] blazar 2.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for blazar for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/blazar/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/blazar/log/?h=stable/rocky Release notes for blazar can be found at: http://docs.openstack.org/releasenotes/blazar/ If you find an issue that could be considered release-critical, please file it at: https://launchpad.net/blazar and tag it *rocky-rc-potential* to bring it to the blazar release crew's attention. From alee at redhat.com Thu Aug 9 13:40:42 2018 From: alee at redhat.com (Ade Lee) Date: Thu, 09 Aug 2018 09:40:42 -0400 Subject: [openstack-dev] [qa][barbican][novajoin][networking-fortinet][vmware-nsx] Dependency of Tempest changes In-Reply-To: <1650ec36ecf.10b0df76a34653.8050329285896349825@ghanshyammann.com> References: <1650ec36ecf.10b0df76a34653.8050329285896349825@ghanshyammann.com> Message-ID: <1533822042.23178.23.camel@redhat.com> barbican and novajoin done. On Mon, 2018-08-06 at 19:23 +0900, Ghanshyam Mann wrote: > Hi All, > > Tempest patch [1] removes the deprecated config option for volume v1 > API and it has a dependency on many plugins. I have proposed the patches > to each plugins using that option [2] to stop using that option so > that their gate will not be broken if Tempest patch merge. Also I > have made Tempest patch dependency on each plugins commit. Many of > those dependent patches have merged but 4 patches are still hanging > around since long time which is blocking the Tempest change to get > merged. > > Below are the plugins which have not merged the changes: > barbican-tempest-plugin - https://review.openstack.org/#/c/573174/ > > novajoin-tempest-plugin - https://review.openstack.org/#/c/573175/ > > networking-fortinet - https://review.openstack.org/#/c/573170/ > vmware-nsx-tempest-plugin - https://review.openstack.org/#/c/573172/ > > I want to merge this tempest patch in the Rocky release, which I am > planning to do next week. To make that happen we have to merge the > Tempest patch soon. 
> If above patches are not merged by plugins team > within 2-3 days, which means those plugins might not be active or do > not care for gate, I am going to remove their dependency on the Tempest > patch and merge that. > > [1] https://review.openstack.org/#/c/573135/ > [2] https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged) > > -gmann > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gr at ham.ie Thu Aug 9 13:44:36 2018 From: gr at ham.ie (Graham Hayes) Date: Thu, 9 Aug 2018 14:44:36 +0100 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface In-Reply-To: <1533807698.26377.7@smtp.office365.com> References: <1533807698.26377.7@smtp.office365.com> Message-ID: <8434a7bb-cea4-1b57-b603-4fa00eb1404a@ham.ie> Designate has no plans to swap or add support for the new interface in the near or medium term - we are more than willing to take patches, but we do not have the people power to do it ourselves. Some of our users do use the old interface a lot - designate-sink is quite heavily embedded in some workflows. Thanks, - Graham On 09/08/2018 10:41, Balázs Gibizer wrote: > Dear Nova notification consumers! > > > The Nova team made progress with the new versioned notification > interface [1] and it has almost reached feature parity [2] with the > legacy, unversioned one. So the Nova team will discuss on the upcoming PTG > the deprecation of the legacy interface. 
There is a list of projects (we > know of) consuming the legacy interface and we would like to know if any > of these projects plan to switch over to the new interface in the > foreseeable future so we can make a well informed decision about the > deprecation. > > > * Searchlight [3] - it is in maintenance mode so I guess the answer is no > * Designate [4] > * Telemetry [5] > * Mistral [6] > * Blazar [7] > * Watcher [8] - it seems Watcher uses both legacy and versioned nova > notifications > * Masakari - I'm not sure Masakari depends on nova notifications or not > > Cheers, > gibi > > [1] https://docs.openstack.org/nova/latest/reference/notifications.html > [2] http://burndown.peermore.com/nova-notification/ > > [3] > https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py > > [4] > https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py > > [5] > https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2 > > [6] > https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2 > > [7] > https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst > > [8] > https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335 > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From mriedemos at gmail.com Thu Aug 9 14:23:54 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 9 Aug 2018 09:23:54 -0500 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface In-Reply-To: <1533807698.26377.7@smtp.office365.com> References: <1533807698.26377.7@smtp.office365.com> Message-ID: <5576d677-03bd-1bab-f668-276d8e4982e2@gmail.com> On 8/9/2018 4:41 AM, Balázs Gibizer wrote: > * Masakari - I'm not sure Masakari depends on nova notifications or not From a quick look, it looks like masakari does not rely on nova's rpc-based notifications and instead registers and listens for libvirt guest events directly (ceilometer's compute agent does something similar I think - or used to anyway): https://github.com/openstack/masakari-monitors/commit/a566f8ddc6b3b46ae020d182496d153fb0c1b3e7 -- Thanks, Matt From mriedemos at gmail.com Thu Aug 9 14:27:05 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 9 Aug 2018 09:27:05 -0500 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface In-Reply-To: <8434a7bb-cea4-1b57-b603-4fa00eb1404a@ham.ie> References: <1533807698.26377.7@smtp.office365.com> <8434a7bb-cea4-1b57-b603-4fa00eb1404a@ham.ie> Message-ID: <9638ac7d-7a63-728a-6773-32a0d7975295@gmail.com> On 8/9/2018 8:44 AM, Graham Hayes wrote: > Designate has no plans to swap or add support for the new interface in > the near or medium term - we are more than willing to take patches, but > we do not have the people power to do it ourselves. > > Some of our users do use the old interface a lot - designate-sink > is quite heavily embeded in some workflows. This is what I suspected would be the answer from most projects. 
I was very half-assedly wondering if we could write some kind of translation middleware library that allows your service to listen for versioned notifications and translate them to legacy notifications. Then we could apply that generically across projects that don't have time for a big re-write while allowing nova to drop the legacy compat code (after some period of deprecation, I was thinking at least a year). It should be pretty simple to write a dumb versioned->unversioned payload mapping for each legacy notification, but there might be more sophisticated ways of doing that using some kind of schema or template instead. Just thinking out loud. -- Thanks, Matt From sean.mcginnis at gmx.com Thu Aug 9 14:58:06 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 9 Aug 2018 09:58:06 -0500 Subject: [openstack-dev] [release] Release countdown for week R-2, August 13-17 Message-ID: <20180809145805.GA32449@sm-workstation> Development Focus ----------------- Teams should be working on any release critical bugs that would require another RC before the final release, and thinking about plans for Stein. General Information ------------------- Any cycle-with-milestones projects that missed the RC1 deadline should prepare an RC1 release as soon as possible. After all of the cycle-with-milestone projects have branched we will branch devstack, grenade, and the requirements repos. This will effectively open them back up for Stein development, though the focus should still be on finishing up Rocky until the final release. Actions --------- Watch for any translation patches coming through and merge them quickly. If your project has a stable/rocky branch created, please make sure those patches are also merged there. Keep in mind there will need to be a final release candidate cut to capture any merged translations and critical bug fixes from this branch. Please also check for completeness in release notes and add any relevant "prelude" content. 
These notes are targeted at the downstream consumers of your project, so it would be great to include any useful information for those that are going to pick up and use or deploy the Rocky version of your project. We also have the cycle-highlights information in the project deliverable files. This one is targeted at marketing and other consumers that have typically been pinging PTLs every release asking for "what's new" in this release. If you have not done so already, please add a few highlights for your team that would be useful for this kind of consumer. This would be a good time for any release:independent projects to add the history for any releases not yet listed in their deliverable file. These files are under the deliverables/_independent directory in the openstack/releases repo. If you have a cycle-with-intermediary release that has not done an RC yet, please do so as soon as possible. If we do not receive release requests for these repos soon we will be forced to create a release from the latest commit to create a stable/rocky branch. The release team would rather not be the ones initiating this release. 
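For anyone adding independent release history for the first time: a deliverable file is a small YAML document. The sketch below is illustrative only (the project name, team, versions, and hashes are all placeholders, and the authoritative schema is documented in the openstack/releases repository itself), but it shows the rough shape of an entry:

```yaml
# Illustrative sketch of an entry under the _independent deliverables
# directory in openstack/releases. All names, versions, and hashes here
# are placeholders; check the repository's own docs for the real schema.
launchpad: example-project
team: example-team
release-model: independent
releases:
  - version: 1.0.0
    projects:
      - repo: openstack/example-project
        hash: 0000000000000000000000000000000000000000
  - version: 1.1.0
    projects:
      - repo: openstack/example-project
        hash: 1111111111111111111111111111111111111111
```

Each additional release is just another item appended to the releases list, proposed as a normal review against openstack/releases.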
Upcoming Deadlines & Dates -------------------------- Final RC deadline: August 23 Rocky Release: August 29 Stein PTG: September 10-14 -- Sean McGinnis (smcginnis) From sean.mcginnis at gmx.com Thu Aug 9 15:28:33 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 9 Aug 2018 10:28:33 -0500 Subject: [openstack-dev] [release] Release countdown for week R-2, August 13-17 In-Reply-To: <20180809145805.GA32449@sm-workstation> References: <20180809145805.GA32449@sm-workstation> Message-ID: <20180809152832.GA2836@sm-workstation> Related to below, here is the current list of deliverables waiting for an RC1 release (as of August 9, 15:30 UTC): barbican ceilometer-powervm cinder congress-dashboard congress cyborg designate-dashboard designate glance heat masakari-monitors masakari networking-bagpipe networking-bgpvpn networking-midonet networking-odl networking-ovn networking-powervm networking-sfc neutron-dynamic-routing neutron-fwaas neutron-vpnaas neutron nova-powervm nova release-test sahara-dashboard sahara-image-elements sahara Today is the deadline, so please make sure you get in the RC release requests soon. Thanks! Sean On Thu, Aug 09, 2018 at 09:58:06AM -0500, Sean McGinnis wrote: > Development Focus > ----------------- > > Teams should be working on any release critical bugs that would require another > RC before the final release, and thinking about plans for Stein. > > General Information > ------------------- > > Any cycle-with-milestones projects that missed the RC1 deadline should prepare > an RC1 release as soon as possible. > > After all of the cycle-with-milestone projects have branched we will branch > devstack, grenade, and the requirements repos. This will effectively open them > back up for Stein development, though the focus should still be on finishing up > Rocky until the final release. > > Actions > --------- > > Watch for any translation patches coming through and merge them quickly. 
> > If your project has a stable/rocky branch created, please make sure those > patches are also merged there. Keep in mind there will need to be a final > release candidate cut to capture any merged translations and critical bug fixes > from this branch. > > Please also check for completeness in release notes and add any relevant > "prelude" content. These notes are targetted for the downstream consumers of > your project, so it would be great to include any useful information for those > that are going to pick up and use or deploy the Queens version of your project. > > We also have the cycle-highlights information in the project deliverable files. > This one is targeted at marketing and other consumers that have typically been > pinging PTLs every release asking for "what's new" in this release. If you have > not done so already, please add a few highlights for your team that would be > useful for this kind of consumer. > > This would be a good time for any release:independent projects to add the > history for any releases not yet listed in their deliverable file. These files > are under the deliverable/_independent directory in the openstack/releases > repo. > > If you have a cycle-with-intermediary release that has not done an RC yet, > please do so as soon as possible. If we do not receive release requests for > these repos soon we will be forced to create a release from the latest commit > to create a stable/rocky branch. The release team would rather not be the ones > initiating this release. 
> > Upcoming Deadlines & Dates > -------------------------- > > Final RC deadline: August 23 > Rocky Release: August 29 > Stein PTG: September 10-14 > > -- > Sean McGinnis (smcginnis) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From emilien at redhat.com Thu Aug 9 15:40:29 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 9 Aug 2018 11:40:29 -0400 Subject: [openstack-dev] [tripleo] The Weekly Owl - 28th Edition Message-ID: Welcome to the twenty-eighth edition of a weekly update in TripleO world! The goal is to provide a short reading (less than 5 minutes) to learn what's new this week. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-July/132672.html +---------------------------------+ | General announcements | +---------------------------------+ +--> We're still preparing the first release candidate of TripleO Rocky, please focus on Critical / High bugs. +--> Reminder about the PTG etherpad, feel free to propose topics: https://etherpad.openstack.org/p/tripleo-ptg-stein +--> Juan will be the next PTL for the Stein cycle, congratulations! +------------------------------+ | Continuous Integration | +------------------------------+ +--> Sprint theme: migration to zuul v3, including migrating from legacy bash to ansible tasks/playbooks (More on https://trello.com/c/JikmHXSS/881-sprint-17-goals) +--> The Ruck and Rover for this sprint are Gabriele Cerami (panda) and Rafael Folco (rfolco). Please tell them about any CI issues. +--> Promotion on master is 2 days, 9 days on Queens, 0 days on Pike and 7 days on Ocata. 
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting +-------------+ | Upgrades | +-------------+ +--> No updates this week. +---------------+ | Containers | +---------------+ +--> The team is looking at podman/buildah support for the Stein cycle. More discussions at the PTG, but doing some ground work now. +----------------------+ | config-download | +----------------------+ +--> No updates this week. +--------------+ | Integration | +--------------+ +--> No updates this week. +---------+ | UI/CLI | +---------+ +--> No updates this week. +---------------+ | Validations | +---------------+ +--> No updates this week. +---------------+ | Networking | +---------------+ +--> No updates this week. +--------------+ | Workflows | +--------------+ +--> Progress on the Mistral tempest plugin and testing on the containerized undercloud job. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> Discussion around secret management. +--> Last meeting notes: http://eavesdrop.openstack.org/meetings/tripleo_security_squad/2018/tripleo_security_squad.2018-08-08-12.03.log.html +--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ Owl Wings Are Helping Silence Airplanes, Fans, and Wind Turbines. Nice reading: https://gizmodo.com/owl-wings-are-helping-silence-airplanes-fans-and-wind-1713023055 Thanks Cédric for this contribution! Thank you all for reading and stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Thu Aug 9 15:42:39 2018 From: neil at tigera.io (Neil Jerram) Date: Thu, 9 Aug 2018 16:42:39 +0100 Subject: [openstack-dev] [keystone][nova] Struggling with non-admin user on Queens install Message-ID: I'd like to create a non-admin project and user that are able to do nova.images.list(), in a Queens install. 
IIUC, all users should be able to do that. I'm afraid I'm pretty lost and would appreciate any help. Define a function to test whether a particular set of credentials can do nova.images.list():

from keystoneauth1 import identity
from keystoneauth1 import session
from novaclient.client import Client as NovaClient

def attemp(auth):
    sess = session.Session(auth=auth)
    nova = NovaClient(2, session=sess)
    for i in nova.images.list():
        print i

With an admin user, things work:

>>> auth_url = "http://controller:5000/v3"
>>> auth = identity.Password(auth_url=auth_url,
>>>                          username="admin",
>>>                          password="abcdef",
>>>                          project_name="admin",
>>>                          project_domain_id="default",
>>>                          user_domain_id="default")
>>> attemp(auth)

With a non-admin user with project_id specified, 401:

>>> tauth = identity.Password(auth_url=auth_url,
...                           username="tenant2",
...                           password="password",
...                           project_id="tenant2",
...                           user_domain_id="default")
>>> attemp(tauth)
...
keystoneauth1.exceptions.http.Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-ed0630a4-7df0-4ba8-a4c4-de3ecb7b4d7d)

With the same but without project_id, I get an empty service catalog instead:

>>> tauth = identity.Password(auth_url=auth_url,
...                           username="tenant2",
...                           password="password",
...                           #project_name="tenant2",
...                           #project_domain_id="default",
...                           user_domain_id="default")
>>>
>>> attemp(tauth)
...
keystoneauth1.exceptions.catalog.EmptyCatalog: The service catalog is empty.

Can anyone help? Regards, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From stdake at cisco.com Thu Aug 9 15:55:08 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Thu, 9 Aug 2018 15:55:08 +0000 Subject: [openstack-dev] [kolla] Dropping core reviewer In-Reply-To: References: <1533652097121.31214@cisco.com> , Message-ID: <1533830111273.2195@cisco.com> Kollians, Thanks for the kind words. 
I do plan to stay involved in the OpenStack community - specifically targeting governance and will definitely be around - irc - mls - summits - etc :) Cheers -steve ________________________________ From: Surya Singh Sent: Wednesday, August 8, 2018 10:56 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [kolla] Dropping core reviewer words are not strong enough to appreciate your immense contribution and help in OpenStack community. Projects like Kolla, Heat and Magnum are still rocking and many more to come in future from you. Hope to see you around. Wish you all the luck !! -- Surya On Wed, Aug 8, 2018 at 6:15 PM Paul Bourke > wrote: +1. Will always have good memories of when Steve was getting the project off the ground. Thanks Steve for doing a great job of building the community around Kolla, and for all your help in general! Best of luck, -Paul On 08/08/18 12:23, Eduardo Gonzalez wrote: > Steve, > > Is sad to see you leaving kolla core team, hope to still see you around > IRC and Summit/PTGs. > > I truly appreciate your leadership, guidance and commitment to make > kolla the great project it is now. > > Best luck on your new projects and board of directors. > > Regards > > > > > > 2018-08-07 16:28 GMT+02:00 Steven Dake (stdake) > >>: > > Kollians, > > > Many of you that know me well know my feelings towards participating > as a core reviewer in a project. Folks with the ability to +2/+W > gerrit changes can sometimes unintentionally harm a codebase if they > are not consistently reviewing and maintaining codebase context. I > also believe in leading an exception-free life, and I'm no exception > to my own rules. As I am not reviewing Kolla actively given my > OpenStack individually elected board of directors service and other > responsibilities, I am dropping core reviewer ability for the Kolla > repositories. 
> > > I want to take a moment to thank the thousands of people that have > contributed and shaped Kolla into the modern deployment system for > OpenStack that it is today. I personally find Kolla to be my finest > body of work as a leader. Kolla would not have been possible > without the involvement of the OpenStack global community working > together to resolve the operational pain points of OpenStack. Thank > you for your contributions. > > > Finally, quoting Thierry [1] from our initial application to > OpenStack, " ... Long live Kolla!" > > > Cheers! > > -steve > > > [1] https://review.openstack.org/#/c/206789/ > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Aug 9 15:56:57 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 15:56:57 -0000 Subject: [openstack-dev] manila 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for manila for the end of the Rocky cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/manila/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/manila/log/?h=stable/rocky Release notes for manila can be found at: http://docs.openstack.org/releasenotes/manila/ From no-reply at openstack.org Thu Aug 9 16:04:47 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 16:04:47 -0000 Subject: [openstack-dev] keystone 14.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for keystone for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/keystone/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/keystone/log/?h=stable/rocky Release notes for keystone can be found at: http://docs.openstack.org/releasenotes/keystone/ From inc007 at gmail.com Thu Aug 9 16:22:35 2018 From: inc007 at gmail.com (=?UTF-8?B?TWljaGHFgiBKYXN0cnrEmWJza2k=?=) Date: Thu, 9 Aug 2018 09:22:35 -0700 Subject: [openstack-dev] [kolla] Dropping core reviewer In-Reply-To: <1533830111273.2195@cisco.com> References: <1533652097121.31214@cisco.com> <1533830111273.2195@cisco.com> Message-ID: Hello Kollegues, Koalas and Koalines, I feel I should do the same, as my work sadly doesn't involve Kolla, or OpenStack for that matter, any more. 
It has been a wonderful time, and serving the Kolla community as core and PTL is an achievement I'm most proud of, and I thank you all for giving me this opportunity. We've built something great! Cheers, Michal On Thu, 9 Aug 2018 at 08:55, Steven Dake (stdake) wrote: > > Kollians, > > > Thanks for the kind words. > > > I do plan to stay involved in the OpenStack community - specifically targeting governance and will definitely be around - irc - mls - summits - etc :) > > > Cheers > > -steve > > > ________________________________ > From: Surya Singh > Sent: Wednesday, August 8, 2018 10:56 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [kolla] Dropping core reviewer > > words are not strong enough to appreciate your immense contribution and help in OpenStack community. > Projects like Kolla, Heat and Magnum are still rocking and many more to come in future from you. > Hope to see you around. > > Wish you all the luck !! > -- Surya > > On Wed, Aug 8, 2018 at 6:15 PM Paul Bourke wrote: >> >> +1. Will always have good memories of when Steve was getting the project >> off the ground. Thanks Steve for doing a great job of building the >> community around Kolla, and for all your help in general! >> >> Best of luck, >> -Paul >> >> On 08/08/18 12:23, Eduardo Gonzalez wrote: >> > Steve, >> > >> > Is sad to see you leaving kolla core team, hope to still see you around >> > IRC and Summit/PTGs. >> > >> > I truly appreciate your leadership, guidance and commitment to make >> > kolla the great project it is now. >> > >> > Best luck on your new projects and board of directors. >> > >> > Regards >> > >> > >> > >> > >> > >> > 2018-08-07 16:28 GMT+02:00 Steven Dake (stdake) > > >: >> > >> > Kollians, >> > >> > >> > Many of you that know me well know my feelings towards participating >> > as a core reviewer in a project. 
Folks with the ability to +2/+W >> > gerrit changes can sometimes unintentionally harm a codebase if they >> > are not consistently reviewing and maintaining codebase context. I >> > also believe in leading an exception-free life, and I'm no exception >> > to my own rules. As I am not reviewing Kolla actively given my >> > OpenStack individually elected board of directors service and other >> > responsibilities, I am dropping core reviewer ability for the Kolla >> > repositories. >> > >> > >> > I want to take a moment to thank the thousands of people that have >> > contributed and shaped Kolla into the modern deployment system for >> > OpenStack that it is today. I personally find Kolla to be my finest >> > body of work as a leader. Kolla would not have been possible >> > without the involvement of the OpenStack global community working >> > together to resolve the operational pain points of OpenStack. Thank >> > you for your contributions. >> > >> > >> > Finally, quoting Thierry [1] from our initial application to >> > OpenStack, " ... Long live Kolla!" >> > >> > >> > Cheers! 
>> > >> > -steve >> > >> > >> > [1] https://review.openstack.org/#/c/206789/ >> > >> > >> > >> > >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Thu Aug 9 16:43:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 09 Aug 2018 12:43:11 -0400 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <12b36042-0303-0c2b-5239-7e4084fe1219@debian.org> <57e9dffb-26cd-e96a-cac9-49942f73ab11@debian.org> <1533735233-sup-6263@lrrr.local> Message-ID: <1533832959-sup-4944@lrrr.local> Excerpts from Andrey Kurilin's message of 2018-08-09 13:35:53 +0300: > Hi Doug! 
> > I'm ready to port our job to the openstack-zuul-jobs repo, but I expect that > the community will not accept it. > > The result of the Rally unit tests differs between environments with the python > 3.7 final release and python 3.7.0~b3. > There is at least one failed test at python 3.7.0~b3 which is not > reproducible at py27, py34, py35, py36, or py37-final, > so I'm not sure that it is a good decision to add a py37 job based on > ubuntu-bionic. > > As for Rally, I applied the easiest thing that occurred to me - just use > the external python ppa (deadsnakes) to > install the final release of Python 3.7. > That approach satisfies the Rally community, but it cannot be used as the > main one for the whole of OpenStack. Yes, I think we don't want to use that approach for most of the jobs. The point is to test on the Python packaged in the distro. Doug From cdent+os at anticdent.org Thu Aug 9 16:44:03 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 9 Aug 2018 17:44:03 +0100 (BST) Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, As is our recent custom, short meeting this week. Our main topic of conversation was the planning etherpad [7] for the API-SIG gathering at the Denver PTG. If you will be there, and have topics of interest, please add them to the etherpad. There are no new guidelines under review, but there is a stack of changes which do some reformatting and explicitly link to useful resources [8]. As always, if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6].
# Newly Published Guidelines

* None

# API Guidelines Proposed for Freeze

* None

# Guidelines that are ready for wider review by the whole community.

* None

# Guidelines Currently Under Review [3]

* Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/
* Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/
* A (shrinking) suite of several documents about doing version and service discovery. Start at https://review.openstack.org/#/c/459405/
* WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week!
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://etherpad.openstack.org/p/api-sig-stein-ptg [8] https://review.openstack.org/#/c/589131/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From fungi at yuggoth.org Thu Aug 9 16:44:38 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 9 Aug 2018 16:44:38 +0000 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface In-Reply-To: <1533807698.26377.7@smtp.office365.com> References: <1533807698.26377.7@smtp.office365.com> Message-ID: <20180809164438.7ik2tqldwc6cceho@yuggoth.org> On 2018-08-09 11:41:38 +0200 (+0200), Balázs Gibizer wrote: [...] > There is a list of projects (we know of) consuming the legacy > interface and we would like to know if any of these projects plan > to switch over to the new interface in the foreseeable future so > we can make a well informed decision about the deprecation. > > > * Searchlight [3] - it is in maintenance mode so I guess the answer is no [...] With https://review.openstack.org/588644 looking likely to merge and Searchlight already not slated for inclusion in Rocky, I recommend not basing your decision on what it is or isn't using at this point. If someone wants to resurrect it, updating things like the use of the Nova notification interface seem like the bare minimum amount of work they should commit to doing anyway. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Thu Aug 9 16:47:21 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 09 Aug 2018 12:47:21 -0400 Subject: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api In-Reply-To: <4432b5e3-1f42-20e0-968a-5a4e7636d60a@gmail.com> References: <4432b5e3-1f42-20e0-968a-5a4e7636d60a@gmail.com> Message-ID: <1533833182-sup-6244@lrrr.local> Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400: > For evidence, see: > > http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING > > thousands of these are filling the logs with WARNING-level log messages, > making it difficult to find anything: > > Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060 > devstack at placement-api.service[14403]: WARNING py.warnings > [req-a809b022-59af-4628-be73-488cfec3187d > req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement] > /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896: > UserWarning: Policy placement:resource_providers:list failed scope > check. The token used to make the request was project scoped but the > policy requires ['system'] scope. This behavior may change in the future > where using the intended scope is required > Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060 > devstack at placement-api.service[14403]: warnings.warn(msg) > Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060 > devstack at placement-api.service[14403]: > > Is there any way we can get rid of these? > > Thanks, > -jay > It looks like those are coming out of the policy library? Maybe file a bug there. I added "oslo" to the subject line to get the team's attention. This feels like something we could fix and backport to rocky. 
Doug From mriedemos at gmail.com Thu Aug 9 17:18:14 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 9 Aug 2018 12:18:14 -0500 Subject: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api In-Reply-To: <1533833182-sup-6244@lrrr.local> References: <4432b5e3-1f42-20e0-968a-5a4e7636d60a@gmail.com> <1533833182-sup-6244@lrrr.local> Message-ID: On 8/9/2018 11:47 AM, Doug Hellmann wrote: > Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400: >> For evidence, see: >> >> http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING >> >> thousands of these are filling the logs with WARNING-level log messages, >> making it difficult to find anything: >> >> Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060 >> devstack at placement-api.service[14403]: WARNING py.warnings >> [req-a809b022-59af-4628-be73-488cfec3187d >> req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement] >> /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896: >> UserWarning: Policy placement:resource_providers:list failed scope >> check. The token used to make the request was project scoped but the >> policy requires ['system'] scope. This behavior may change in the future >> where using the intended scope is required >> Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060 >> devstack at placement-api.service[14403]: warnings.warn(msg) >> Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060 >> devstack at placement-api.service[14403]: >> >> Is there any way we can get rid of these? >> >> Thanks, >> -jay >> > It looks like those are coming out of the policy library? Maybe file a > bug there. I added "oslo" to the subject line to get the team's > attention. > > This feels like something we could fix and backport to rocky. 
> > Doug I could have sworn I created a bug in oslo.policy for this at one point for the same reason Jay mentions it, but I guess not. We could simply, on the nova side, add a warnings filter to only log this once. -- Thanks, Matt From imain at redhat.com Thu Aug 9 17:37:02 2018 From: imain at redhat.com (Ian Main) Date: Thu, 9 Aug 2018 10:37:02 -0700 Subject: [openstack-dev] [tripleo] Patches to speed up plan operations In-Reply-To: References: Message-ID: Hey Jirka! I wasn't aware of the other options available. Basically yes, you now just need to upload a tarball of the templates to swift. You can see in the client: - tarball.tarball_extract_to_swift_container( - swift_client, tmp_tarball.name, container_name) + _upload_file(swift_client, container_name, + constants.TEMPLATES_TARBALL_NAME, tmp_tarball.name) Other than that it should be the same. I'm not sure what files the UI wants to look at in swift, but certainly some are still there. Basically any file that exists in swift is not overwritten by the contents of the tar file. So if a file exists in swift it takes priority. I'll try to catch you on irc but I know our timezones are quite different. Thanks for looking into it! Ian On Wed, Aug 8, 2018 at 10:46 AM Jiri Tomasek wrote: > Hello, thanks for bringing this up. > > I am going to try to test this patch with TripleO UI tomorrow. Without > properly looking at the patch, questions I would like to get answers for > are: > > How is this going to affect ways to create/update deployment plan? 
> Currently a user is able to create a deployment plan by: > - not providing any files - creating the deployment plan from the default files in > /usr/share/openstack-tripleo-heat-templates > - providing a tarball > - providing a local directory of files to create the plan from > - providing a git repository link > > These changes will have an impact on certain TripleO UI operations where > (in rare cases) we reach directly for a swift object. > > IIUC it seems we are deciding to consider the deployment plan as a black box > packed in a tarball, which I quite like; we'll need to provide a standard > way to supply custom files to the plan. > > How is this going to affect the CLI vs. GUI workflow? Currently the CLI creates > the plan as part of the deploy command, whereas the GUI starts its workflow > by selecting/creating a deployment plan; the whole configuration of the plan > is performed on the deployment plan, and then the deployment plan gets > deployed. We are aiming to introduce CLI commands to consolidate the > behaviour of both clients with what the GUI workflow is currently. > > I am going to try to find answers to these questions and identify > potential problems in the next couple of days. > > -- Jirka > > > On Tue, Aug 7, 2018 at 5:34 PM Dan Prince wrote: >> Thanks for taking this on Ian! I'm fully on board with the effort. I >> like the consolidation and performance improvements. Storing t-h-t >> templates in Swift worked okay 3-4 years ago. Now that we have more >> templates, many of which need .j2 rendering, the storage there has >> become quite a bottleneck. >> >> Additionally, since we'd be sending commands to Heat via local >> filesystem template storage we could consider using softlinks again >> within t-h-t, which should help with refactoring and deprecation >> efforts. >> >> Dan >> On Wed, Aug 1, 2018 at 7:35 PM Ian Main wrote: >> > >> > >> > Hey folks! >> > >> > So I've been working on some patches to speed up plan operations in >> TripleO.
This was originally driven by the UI needing to be able to >> perform a 'plan upload' in something less than several minutes. :) >> > >> > https://review.openstack.org/#/c/581153/ >> > https://review.openstack.org/#/c/581141/ >> > >> > I have a functioning set of patches, and it actually cuts over 2 >> minutes off the overcloud deployment time. >> > >> > Without patch: >> > + openstack overcloud plan create --templates >> /home/stack/tripleo-heat-templates/ overcloud >> > Creating Swift container to store the plan >> > Creating plan from template files in: >> /home/stack/tripleo-heat-templates/ >> > Plan created. >> > real 3m3.415s >> > >> > With patch: >> > + openstack overcloud plan create --templates >> /home/stack/tripleo-heat-templates/ overcloud >> > Creating Swift container to store the plan >> > Creating plan from template files in: >> /home/stack/tripleo-heat-templates/ >> > Plan created. >> > real 0m44.694s >> > >> > This is on VMs. On real hardware it now takes something like 15-20 >> seconds to do the plan upload which is much more manageable from the UI >> standpoint. >> > >> > Some things about what this patch does: >> > >> > - It makes use of process-templates.py (written for the undercloud) to >> process the jinjafied templates. This reduces replication with the >> existing version in the code base and is very fast as it's all done on >> local disk. >> > - It stores the bulk of the templates as a tarball in swift. Any >> individual files in swift take precedence over the contents of the tarball >> so it should be backwards compatible. This is a great speed up as we're >> not accessing a lot of individual files in swift. >> > >> > There's still some work to do; cleaning up and fixing the unit tests, >> testing upgrades etc. I just wanted to get some feedback on the general >> idea and hopefully some reviews and/or help - especially with the unit test >> stuff. >> > >> > Thanks everyone! 
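As a rough illustration of the single-tarball approach described above — packing the rendered templates into one archive and storing it as a single Swift object instead of hundreds of individual files — here is a minimal, self-contained sketch. The helper names (`pack_templates`, `upload_plan`) and the stand-in `swift_put` callable are illustrative assumptions, not the actual tripleo-common API:

```python
import io
import tarfile

def pack_templates(files):
    """Pack {relative_path: text} into one in-memory gzipped tarball."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for path, text in files.items():
            data = text.encode("utf-8")
            info = tarfile.TarInfo(name=path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def upload_plan(swift_put, container, files):
    """Issue one PUT for the whole plan instead of one PUT per template."""
    swift_put(container, "templates.tar.gz", pack_templates(files))

# Example with a fake Swift client that just records calls.
calls = []
upload_plan(lambda c, name, body: calls.append((c, name, len(body))),
            "overcloud", {"overcloud.yaml": "heat_template_version: rocky"})
```

The win is simply fewer round trips: a plan of N templates costs one object PUT rather than N, which matches the timing difference reported above; per Ian's note, individual objects already present in Swift would still take precedence over the tarball contents.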
>> > >> > Ian >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Thu Aug 9 17:42:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 9 Aug 2018 12:42:04 -0500 Subject: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api In-Reply-To: References: <4432b5e3-1f42-20e0-968a-5a4e7636d60a@gmail.com> <1533833182-sup-6244@lrrr.local> Message-ID: <05711536-2808-67c5-552f-15e01771a89a@gmail.com> On 8/9/2018 12:18 PM, Matt Riedemann wrote: > We could simply, on the nova side, add a warnings filter to only log > this once.
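A minimal sketch of the consumer-side fix suggested above — using the standard library's warnings filters so a repeated UserWarning is only emitted once — assuming the filter is installed early at service startup; the message pattern and the stand-in `check_policy` helper are illustrative, not oslo.policy code:

```python
import warnings

def install_once_filter():
    # "once" shows only the first matching warning with a given message,
    # no matter how many later requests would re-raise it.
    warnings.filterwarnings(
        "once", message=r"Policy .* failed scope check", category=UserWarning)

def check_policy(name):
    # Stand-in for the library code path that warns on every request.
    warnings.warn(
        "Policy %s failed scope check. The token used to make the request "
        "was project scoped but the policy requires ['system'] scope." % name,
        UserWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as seen:
    install_once_filter()
    for _ in range(5):
        check_policy("placement:resource_providers:list")

print(len(seen))  # expect 1: the other four identical warnings are suppressed
```

Because the service routes `warnings.warn()` into the log via `logging.captureWarnings()`, suppressing the repeats at the filter level also keeps them out of the WARNING log stream.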
Let's see if this works: https://review.openstack.org/#/c/590445/ -- Thanks, Matt From doug at doughellmann.com Thu Aug 9 17:48:13 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 09 Aug 2018 13:48:13 -0400 Subject: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api In-Reply-To: References: <4432b5e3-1f42-20e0-968a-5a4e7636d60a@gmail.com> <1533833182-sup-6244@lrrr.local> Message-ID: <1533836872-sup-9517@lrrr.local> Excerpts from Matt Riedemann's message of 2018-08-09 12:18:14 -0500: > On 8/9/2018 11:47 AM, Doug Hellmann wrote: > > Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400: > >> For evidence, see: > >> > >> http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING > >> > >> thousands of these are filling the logs with WARNING-level log messages, > >> making it difficult to find anything: > >> > >> Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060 > >> devstack at placement-api.service[14403]: WARNING py.warnings > >> [req-a809b022-59af-4628-be73-488cfec3187d > >> req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement] > >> /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896: > >> UserWarning: Policy placement:resource_providers:list failed scope > >> check. The token used to make the request was project scoped but the > >> policy requires ['system'] scope. This behavior may change in the future > >> where using the intended scope is required > >> Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060 > >> devstack at placement-api.service[14403]: warnings.warn(msg) > >> Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060 > >> devstack at placement-api.service[14403]: > >> > >> Is there any way we can get rid of these? > >> > >> Thanks, > >> -jay > >> > > It looks like those are coming out of the policy library? Maybe file a > > bug there. 
I added "oslo" to the subject line to get the team's > > attention. > > > > This feels like something we could fix and backport to rocky. > > > > Doug > > I could have sworn I created a bug in oslo.policy for this at one point > for the same reason Jay mentions it, but I guess not. > > We could simply, on the nova side, add a warnings filter to only log > this once. > What level should it be logged at in the policy library? Should it be logged there at all? Doug From doug at doughellmann.com Thu Aug 9 17:51:02 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 09 Aug 2018 13:51:02 -0400 Subject: [openstack-dev] [Release-job-failures][masakari][release] Pre-release of openstack/masakari failed In-Reply-To: References: Message-ID: <1533836982-sup-6486@lrrr.local> Excerpts from zuul's message of 2018-08-09 17:23:01 +0000: > Build failed. > > - release-openstack-python http://logs.openstack.org/84/84135048cb372cbd11080fc27151949cee4e52d1/pre-release/release-openstack-python/095990b/ : FAILURE in 8m 57s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > The RC1 build for Masakari failed with this error: error: can't copy 'etc/masakari/masakari-custom-recovery-methods.conf': doesn't exist or not a regular file The packaging files need to be fixed so a new release candidate can be prepared. The changes will need to be made on master and then backported to the new stable/rocky branch. 
Doug http://logs.openstack.org/84/84135048cb372cbd11080fc27151949cee4e52d1/pre-release/release-openstack-python/095990b/ara-report/result/7459d483-48d8-414f-8830-d6411158f9a2/ From lbragstad at gmail.com Thu Aug 9 17:53:19 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 9 Aug 2018 12:53:19 -0500 Subject: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api In-Reply-To: References: <4432b5e3-1f42-20e0-968a-5a4e7636d60a@gmail.com> <1533833182-sup-6244@lrrr.local> Message-ID: On 08/09/2018 12:18 PM, Matt Riedemann wrote: > On 8/9/2018 11:47 AM, Doug Hellmann wrote: >> Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400: >>> For evidence, see: >>> >>> http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING >>> >>> >>> thousands of these are filling the logs with WARNING-level log >>> messages, >>> making it difficult to find anything: >>> >>> Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060 >>> devstack at placement-api.service[14403]: WARNING py.warnings >>> [req-a809b022-59af-4628-be73-488cfec3187d >>> req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement] >>> /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896: >>> UserWarning: Policy placement:resource_providers:list failed scope >>> check. The token used to make the request was project scoped but the >>> policy requires ['system'] scope. This behavior may change in the >>> future >>> where using the intended scope is required >>> Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060 >>> devstack at placement-api.service[14403]:   warnings.warn(msg) >>> Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060 >>> devstack at placement-api.service[14403]: >>> >>> Is there any way we can get rid of these? >>> >>> Thanks, >>> -jay >>> >> It looks like those are coming out of the policy library? Maybe file a >> bug there. 
I added "oslo" to the subject line to get the team's >> attention. >> >> This feels like something we could fix and backport to rocky. >> >> Doug > > I could have sworn I created a bug in oslo.policy for this at one > point for the same reason Jay mentions it, but I guess not. > This? https://bugs.launchpad.net/oslo.policy/+bug/1421863 > > We could simply, on the nova side, add a warnings filter to only log > this once. > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From lbragstad at gmail.com Thu Aug 9 17:55:52 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 9 Aug 2018 12:55:52 -0500 Subject: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api In-Reply-To: <1533836872-sup-9517@lrrr.local> References: <4432b5e3-1f42-20e0-968a-5a4e7636d60a@gmail.com> <1533833182-sup-6244@lrrr.local> <1533836872-sup-9517@lrrr.local> Message-ID: <9430da68-5d05-f012-842d-73142b6595fc@gmail.com> On 08/09/2018 12:48 PM, Doug Hellmann wrote: > Excerpts from Matt Riedemann's message of 2018-08-09 12:18:14 -0500: >> On 8/9/2018 11:47 AM, Doug Hellmann wrote: >>> Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400: >>>> For evidence, see: >>>> >>>> http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING >>>> >>>> thousands of these are filling the logs with WARNING-level log messages, >>>> making it difficult to find anything: >>>> >>>> Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060 >>>> devstack at placement-api.service[14403]: WARNING py.warnings >>>> [req-a809b022-59af-4628-be73-488cfec3187d >>>> req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement] >>>> /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896: >>>> UserWarning: Policy placement:resource_providers:list failed scope >>>> 
check. The token used to make the request was project scoped but the >>>> policy requires ['system'] scope. This behavior may change in the future >>>> where using the intended scope is required >>>> Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060 >>>> devstack at placement-api.service[14403]: warnings.warn(msg) >>>> Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060 >>>> devstack at placement-api.service[14403]: >>>> >>>> Is there any way we can get rid of these? >>>> >>>> Thanks, >>>> -jay >>>> >>> It looks like those are coming out of the policy library? Maybe file a >>> bug there. I added "oslo" to the subject line to get the team's >>> attention. >>> >>> This feels like something we could fix and backport to rocky. >>> >>> Doug >> I could have sworn I created a bug in oslo.policy for this at one point >> for the same reason Jay mentions it, but I guess not. >> >> We could simply, on the nova side, add a warnings filter to only log >> this once. >> > What level should it be logged at in the policy library? Should it be > logged there at all? The initial intent behind logging was to make sure operators knew that they needed to make a role assignment adjustment in order to be compatible moving forward. I can investigate a way to log things at least once in oslo.policy though. I fear not logging it at all would cause failures in upgrade since operators wouldn't know they need to make that adjustment. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From doug at doughellmann.com Thu Aug 9 18:05:01 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 09 Aug 2018 14:05:01 -0400 Subject: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api In-Reply-To: <9430da68-5d05-f012-842d-73142b6595fc@gmail.com> References: <4432b5e3-1f42-20e0-968a-5a4e7636d60a@gmail.com> <1533833182-sup-6244@lrrr.local> <1533836872-sup-9517@lrrr.local> <9430da68-5d05-f012-842d-73142b6595fc@gmail.com> Message-ID: <1533837865-sup-5209@lrrr.local> Excerpts from Lance Bragstad's message of 2018-08-09 12:55:52 -0500: > > On 08/09/2018 12:48 PM, Doug Hellmann wrote: > > Excerpts from Matt Riedemann's message of 2018-08-09 12:18:14 -0500: > >> On 8/9/2018 11:47 AM, Doug Hellmann wrote: > >>> Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400: > >>>> For evidence, see: > >>>> > >>>> http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING > >>>> > >>>> thousands of these are filling the logs with WARNING-level log messages, > >>>> making it difficult to find anything: > >>>> > >>>> Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060 > >>>> devstack at placement-api.service[14403]: WARNING py.warnings > >>>> [req-a809b022-59af-4628-be73-488cfec3187d > >>>> req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement] > >>>> /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896: > >>>> UserWarning: Policy placement:resource_providers:list failed scope > >>>> check. The token used to make the request was project scoped but the > >>>> policy requires ['system'] scope. 
This behavior may change in the future > >>>> where using the intended scope is required > >>>> Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060 > >>>> devstack at placement-api.service[14403]: warnings.warn(msg) > >>>> Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060 > >>>> devstack at placement-api.service[14403]: > >>>> > >>>> Is there any way we can get rid of these? > >>>> > >>>> Thanks, > >>>> -jay > >>>> > >>> It looks like those are coming out of the policy library? Maybe file a > >>> bug there. I added "oslo" to the subject line to get the team's > >>> attention. > >>> > >>> This feels like something we could fix and backport to rocky. > >>> > >>> Doug > >> I could have sworn I created a bug in oslo.policy for this at one point > >> for the same reason Jay mentions it, but I guess not. > >> > >> We could simply, on the nova side, add a warnings filter to only log > >> this once. > >> > > What level should it be logged at in the policy library? Should it be > > logged there at all? > > The initial intent behind logging was to make sure operators knew that > they needed to make a role assignment adjustment in order to be > compatible moving forward. I can investigate a way to log things at > least once in oslo.policy though. I fear not logging it at all would > cause failures in upgrade since operators wouldn't know they need to > make that adjustment. That sounds like a good check to add to the upgrade test tools as part of the goal for Stein. 
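A sketch of the library-side "log it at least once" idea discussed above: keep a small registry of policy rules that have already triggered the warning, so operators still get the upgrade signal without thousands of duplicates. The names here (`_already_warned`, `warn_scope_failure`) are hypothetical, not actual oslo.policy internals:

```python
import warnings

_already_warned = set()

def warn_scope_failure(policy_name, token_scope, required_scopes):
    """Emit the scope-check warning only the first time per policy rule."""
    if policy_name in _already_warned:
        return False
    _already_warned.add(policy_name)
    warnings.warn(
        "Policy %s failed scope check. The token used to make the request "
        "was %s scoped but the policy requires %s scope." %
        (policy_name, token_scope, required_scopes),
        UserWarning)
    return True

# First call warns; repeats for the same rule are no-ops.
first = warn_scope_failure("placement:resource_providers:list",
                           "project", ["system"])
repeat = warn_scope_failure("placement:resource_providers:list",
                            "project", ["system"])
```

Keying the registry on the policy name (rather than suppressing globally) means each misconfigured rule is still reported once, which is the information an operator needs before scope enforcement becomes mandatory.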
Doug From mriedemos at gmail.com Thu Aug 9 18:21:56 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 9 Aug 2018 13:21:56 -0500 Subject: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api In-Reply-To: References: <4432b5e3-1f42-20e0-968a-5a4e7636d60a@gmail.com> <1533833182-sup-6244@lrrr.local> Message-ID: <9834c1c4-19fa-28fd-a379-05dbb40b09d1@gmail.com> On 8/9/2018 12:53 PM, Lance Bragstad wrote: >> I could have sworn I created a bug in oslo.policy for this at one >> point for the same reason Jay mentions it, but I guess not. >> > This?https://bugs.launchpad.net/oslo.policy/+bug/1421863 > Not unless I was time traveling (note the date that was reported). -- Thanks, Matt From gr at ham.ie Thu Aug 9 18:30:25 2018 From: gr at ham.ie (Graham Hayes) Date: Thu, 9 Aug 2018 19:30:25 +0100 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface In-Reply-To: <9638ac7d-7a63-728a-6773-32a0d7975295@gmail.com> References: <1533807698.26377.7@smtp.office365.com> <8434a7bb-cea4-1b57-b603-4fa00eb1404a@ham.ie> <9638ac7d-7a63-728a-6773-32a0d7975295@gmail.com> Message-ID: <2ae2b182-e5c8-c193-f988-158d7398beda@ham.ie> On 09/08/2018 15:27, Matt Riedemann wrote: > On 8/9/2018 8:44 AM, Graham Hayes wrote: >> Designate has no plans to swap or add support for the new interface in >> the near or medium term - we are more than willing to take patches, but >> we do not have the people power to do it ourselves. >> >> Some of our users do use the old interface a lot - designate-sink >> is quite heavily embeded in some workflows. > > This is what I suspected would be the answer from most projects. > > I was very half-assedly wondering if we could write some kind of > translation middleware library that allows your service to listen for > versioned notifications and translate them to legacy notifications. 
Then > we could apply that generically across projects that don't have time for > a big re-write while allowing nova to drop the legacy compat code (after > some period of deprecation, I was thinking at least a year). > > It should be pretty simple to write a dumb versioned->unversioned > payload mapping for each legacy notification, but there might be more > sophisticated ways of doing that using some kind of schema or template > instead. Just thinking out loud. > I have no objection to that - and I wish we had the people to move to the new formats - I know maintaining legacy features like this is extra work no-one needs. Thinking out loud .... You could just send the deprecation notice, and we could deprecate designate-sink if no one came forward to update it - that seems fairer to push the burden on to the people who actually use the feature, not other teams maintaining legacy stuff. Does that seem overly harsh? From no-reply at openstack.org Thu Aug 9 18:35:08 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 18:35:08 -0000 Subject: [openstack-dev] cinder 13.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for cinder for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/cinder/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/cinder/log/?h=stable/rocky Release notes for cinder can be found at: http://docs.openstack.org/releasenotes/cinder/ From no-reply at openstack.org Thu Aug 9 18:37:21 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 18:37:21 -0000 Subject: [openstack-dev] heat 11.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for heat for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/heat/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/heat/log/?h=stable/rocky Release notes for heat can be found at: http://docs.openstack.org/releasenotes/heat/ From mriedemos at gmail.com Thu Aug 9 19:10:08 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 9 Aug 2018 14:10:08 -0500 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface In-Reply-To: <2ae2b182-e5c8-c193-f988-158d7398beda@ham.ie> References: <1533807698.26377.7@smtp.office365.com> <8434a7bb-cea4-1b57-b603-4fa00eb1404a@ham.ie> <9638ac7d-7a63-728a-6773-32a0d7975295@gmail.com> <2ae2b182-e5c8-c193-f988-158d7398beda@ham.ie> Message-ID: On 8/9/2018 1:30 PM, Graham Hayes wrote: > You could just send the deprecation notice, and we could deprecate > designate-sink if no one came forward to update it - that seems fairer > to push the burden on to the people who actually use the feature, not > other teams maintaining legacy stuff. Does that seem overly harsh? 
It's harsh depending on my mood the day you ask me. :) Nova has done long-running deprecations for things that we know we don't want people building on, like nova-network and cells v1. And then we've left those around for a long time while we work on whatever it takes to eventually drop them. I think we could do the same here, and designate-sink could log a warning once on startup or something that says, "This service relies on the legacy nova notification format which is deprecated and being replaced with the versioned notification format. Removal is TBD but if you are dependent on this service/feature, we encourage you to help work on the transition for designate-sink to use nova versioned notifications." I would hold off on doing that until after we've actually agreed to deprecate the legacy notifications at the PTG. -- Thanks, Matt From doug at doughellmann.com Thu Aug 9 19:12:12 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 09 Aug 2018 15:12:12 -0400 Subject: [openstack-dev] [all][election][tc] Lederless projects. In-Reply-To: <20180801003249.GE15918@thor.bakeyournoodle.com> References: <20180731235512.GB15918@thor.bakeyournoodle.com> <20180801003249.GE15918@thor.bakeyournoodle.com> Message-ID: <1533840972-sup-1312@lrrr.local> Excerpts from Tony Breeds's message of 2018-08-01 10:32:50 +1000: [snip] > Need appointment : 8 (Dragonflow Freezer Loci Packaging_Rpm RefStack > Searchlight Trove Winstackers) To summarize the situation for these teams as it stands now: Omer Anson volunteered to serve as PTL for Dragonflow another term. The patch to appoint him is https://review.openstack.org/589939 Changcai Geng has offered to be PTL for Freezer (https://review.openstack.org/#/c/590071). There is still a proposal to remove the project from governance (https://review.openstack.org/588645). We need the TC members to vote on the proposals above to indicate which direction they want to take. 
Several folks suggested retaining the project provisionally, but we don't have a formal proposal for that. If you want to retain the team provisionally, please say so when you vote in favor of confirming Changcai Geng as PTL. Sam Yaple has volunteered to serve as the PTL of Loci. The patch to appoint him is https://review.openstack.org/#/c/590488/ Dirk Mueller volunteered to serve as the PTL of the packaging-rpm team. The patch to confirm him is https://review.openstack.org/#/c/588617/ The repositories currently owned by the RefStack team are being moved to the Interop Working Group in https://review.openstack.org/#/c/590179/ The proposal to remove the Searchlight project (https://review.openstack.org/588644) has one comment indicating potential interest in taking over maintenance of the project. We are waiting for a formal proposal to designate a PTL before deciding whether to keep the team under governance. Dariusz Krol has volunteered to serve as PTL for Trove. The patch to confirm him is https://review.openstack.org/#/c/588510/ Claudiu Belu has volunteered to serve as PTL for the Winstackers team. The patch to confirm him is https://review.openstack.org/#/c/590386/ Doug From jaypipes at gmail.com Thu Aug 9 19:23:00 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 9 Aug 2018 15:23:00 -0400 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> Message-ID: On Wed, Aug 1, 2018 at 11:15 AM, Ben Nemec wrote: > Hi, > > I'm having an issue with no valid host errors when starting instances and > I'm struggling to figure out why. I thought the problem was disk space, > but I changed the disk_allocation_ratio and I'm still getting no valid > host. The host does have plenty of disk space free, so that shouldn't be a > problem. 
> > However, I'm not even sure it's disk that's causing the failures because I > can't find any information in the logs about why the no valid host is > happening. All I get from the scheduler is: > > "Got no allocation candidates from the Placement API. This may be a > temporary occurrence as compute nodes start up and begin reporting > inventory to the Placement service." > > While in placement I see: > > 2018-08-01 15:02:22.062 20 DEBUG nova.api.openstack.placement.requestlog > [req-0a830ce9-e2af-413a-86cb-b47ae129b676 fc44fe5cefef43f4b921b9123c95e694 > b07e6dc2e6284b00ac7070aa3457c15e - default default] Starting request: > 10.2.2.201 "GET /placement/allocation_candidat > es?limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1" > __call__ /usr/lib/python2.7/site-packages/nova/api/openstack/placemen > t/requestlog.py:38 > 2018-08-01 15:02:22.103 20 INFO nova.api.openstack.placement.requestlog > [req-0a830ce9-e2af-413a-86cb-b47ae129b676 fc44fe5cefef43f4b921b9123c95e694 > b07e6dc2e6284b00ac7070aa3457c15e - default default] 10.2.2.201 "GET > /placement/allocation_candidates?limit=1000&resources=DISK_ > GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1" status: 200 len: 53 microversion: > 1.25 > > Basically it just seems to be logging that it got a request, but there's > no information about what it did with that request. > > So where do I go from here? Is there somewhere else I can look to see why > placement returned no candidates? > > Hi again, Ben, hope you are enjoying your well-earned time off! 
:) I've created a patch that (hopefully) will address some of the difficulty that folks have had in diagnosing which parts of a request caused all providers to be filtered out from the return of GET /allocation_candidates:

https://review.openstack.org/#/c/590041

This patch changes two primary things:

1) Query-splitting

The patch splits the existing monster SQL query that was being used for querying for all providers that matched all requested resources, required traits, forbidden traits and required aggregate associations into doing multiple queries, one for each requested resource. While this does increase the number of database queries executed for each call to GET /allocation_candidates, the changes allow better visibility into what parts of the request cause an exhaustion of matching providers. We've benchmarked the new patch and have shown that the performance impact of doing 3 queries versus 1 (when there is a request for 3 resources -- VCPU, RAM and disk) is minimal (a few extra milliseconds for execution against a DB with 1K providers having inventory of all three resource classes).

2) Diagnostic logging output

The patch adds debug log output within each loop iteration, so there is now logging output that shows how many matching providers were found for each resource class involved in the request. The output looks like this in the logs:

[req-2d30faa8-4190-4490-a91e-610045530140] inside VCPU request loop. before applying trait and aggregate filters, found 12 matching providers
[req-2d30faa8-4190-4490-a91e-610045530140] found 12 providers with capacity for the requested 1 VCPU.
[req-2d30faa8-4190-4490-a91e-610045530140] inside MEMORY_MB request loop. before applying trait and aggregate filters, found 9 matching providers
[req-2d30faa8-4190-4490-a91e-610045530140] found 9 providers with capacity for the requested 64 MEMORY_MB.
before loop iteration we had 12 matches.
[req-2d30faa8-4190-4490-a91e-610045530140] RequestGroup(use_same_provider=False, resources={MEMORY_MB:64, VCPU:1}, traits=[], aggregates=[]) (suffix '') returned 9 matches

If a request includes required traits, forbidden traits or required aggregate associations, there are additional log messages showing how many matching providers were found after applying the trait or aggregate filtering set operation (in other words, the log output shows the impact of the trait filter or aggregate filter in much the same way that the existing FilterScheduler logging shows the "before and after" impact that a particular filter had on a request process).

Have a look at the patch in question and please feel free to add your feedback and comments on ways this can be improved to meet your needs.

Best,
-jay
-------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Thu Aug 9 20:31:34 2018 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 9 Aug 2018 14:31:34 -0600 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do Message-ID:

Ahoy folks,

I think it's time we come up with some basic rules/patterns on where code lands when it comes to OpenStack related Ansible roles and as we convert/export things. There was a recent proposal to create an ansible-role-tempest[0] that would take what we use in tripleo-quickstart-extras[1] and separate it for re-usability by others. So it was asked if we could work with the openstack-ansible team and leverage the existing openstack-ansible-os_tempest[2]. It turns out we have a few more already existing roles laying around as well[3][4].

What I would like to propose is that we as a community come together to agree on specific patterns so that we can leverage the same roles for some of the core configuration/deployment functionality while still allowing for specific project specific customization.
What I've noticed between all the projects is that we have a few specific core pieces of functionality that need to be handled (or skipped as the case may be) for each service being deployed:

1) software installation
2) configuration management
3) service management
4) misc service actions

Depending on which flavor of the deployment you're using, the content of each of these may be different. Just about the only thing that is shared between them all would be the configuration management part. To that end, I was wondering if there would be a benefit to establishing a pattern within say openstack-ansible where we can disable items #1 and #3 but reuse #2 in projects like kolla/tripleo where we need to do some configuration generation. If we can't establish a similar pattern it'll make it harder to reuse and contribute between the various projects.

In tripleo we've recently created a bunch of ansible-role-tripleo-* repositories which we were planning on moving the tripleo specific tasks (for upgrades, etc) to, and were hoping that we might be able to reuse the upstream ansible roles similar to how we've previously leveraged the puppet openstack work for configurations. So for us, it would be beneficial if we could maybe help align/contribute/guide the configuration management and maybe the misc service action portions of the openstack-ansible roles, but be able to disable the actual software install/service management as that would be managed via our ansible-role-tripleo-* roles.

Is this something that would be beneficial to further discuss at the PTG? Anyone have any additional suggestions/thoughts?

My personal thoughts for tripleo would be that we'd have tripleo-ansible call openstack-ansible-<service> for core config but with package/service installation disabled, and call ansible-role-tripleo-<service> for tripleo specific actions such as opinionated packages/service configuration/upgrades. Maybe this is too complex? But at the same time, do we need to come up with 3 different ways to do this?
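To make that concrete, a rough sketch of what the split could look like from the tripleo side. This is purely illustrative: the nova_install / nova_service_managed / nova_config_dir toggles are hypothetical variable names, not existing os_nova options.

```yaml
# Hypothetical sketch only: reuse the shared os_nova role for config
# generation, with install/service management disabled and handled by a
# project-specific role instead. The toggle variable names are assumed
# for illustration; they are not existing openstack-ansible options.
- hosts: nova_compute
  tasks:
    - name: Generate nova configuration via the shared role
      include_role:
        name: os_nova
      vars:
        nova_install: false          # skip software installation (item #1)
        nova_service_managed: false  # skip service management (item #3)
        nova_config_dir: /my/special/location/

    - name: Handle install/service the tripleo-specific way
      include_role:
        name: ansible-role-tripleo-nova
```

The point of the sketch is the contract: the shared role owns item #2, the per-project role owns #1 and #3.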
Thanks,
-Alex

[0] https://review.openstack.org/#/c/589133/
[1] http://git.openstack.org/cgit/openstack/tripleo-quickstart-extras/tree/roles/validate-tempest
[2] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest/
[3] http://git.openstack.org/cgit/openstack/kolla-ansible/tree/ansible/roles/tempest
[4] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-tempest

From mnaser at vexxhost.com Thu Aug 9 20:43:00 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 9 Aug 2018 16:43:00 -0400 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: References: Message-ID:

Hi Alex,

I am very much in favour of what you're bringing up. We do have multiple projects that leverage Ansible in different ways and we all end up doing the same thing at the end. The duplication of work is not really beneficial for us as it takes away from our use-cases.

I believe that there is a certain number of steps that we all share regardless of how we deploy; some of the things that come to mind right away are:

- Configuring infrastructure services (i.e.: create vhosts for services in rabbitmq, create databases for services, configure users for rabbitmq, db, etc)
- Configuring inter-OpenStack services (i.e. keystone_authtoken section, creating endpoints, etc and users for services)
- Configuring actual OpenStack services (i.e. the /etc/<service>/<service>.conf file with the ability of extending options)
- Running CI/integration on a cloud (i.e. a common role that literally gets an admin user, password and auth endpoint and creates all resources and does CI)

This would deduplicate a lot of work, and especially the last one, it might be beneficial for more than Ansible-based projects; I can imagine Puppet OpenStack leveraging this as well inside Zuul CI (optionally)... However, I think that this is something which we should discuss further at the PTG.
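As a sketch of how repetitive the first bullet is today, the same few tasks appear in every tool. The module names below are stock Ansible modules; the service and variable names are made up for illustration.

```yaml
# Illustrative sketch of the shared "infrastructure services" step,
# using stock Ansible modules. Service/variable names are examples only.
- name: Create RabbitMQ vhost for the service
  rabbitmq_vhost:
    name: /nova
    state: present

- name: Create RabbitMQ user with access to the vhost
  rabbitmq_user:
    user: nova
    password: "{{ nova_rabbit_password }}"
    vhost: /nova
    configure_priv: ".*"
    read_priv: ".*"
    write_priv: ".*"
    state: present

- name: Create the service database
  mysql_db:
    name: nova
    state: present

- name: Create the database user
  mysql_user:
    name: nova
    password: "{{ nova_db_password }}"
    priv: "nova.*:ALL"
    state: present
```

If those tasks lived in one shared role, every deployment tool could consume them instead of carrying its own copy.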
I think that there will be a tiny bit of upfront work as we all standardize, but then it's a win for all involved communities.

I would like to propose that deployment tools maybe sit down together at the PTG, all share how we use Ansible to accomplish these tasks and then perhaps we can work all together on abstracting some of these concepts together for us to all leverage.

I'll let others chime in as well.

Regards,
Mohammed

On Thu, Aug 9, 2018 at 4:31 PM, Alex Schultz wrote: > Ahoy folks, > > I think it's time we come up with some basic rules/patterns on where > code lands when it comes to OpenStack related Ansible roles and as we > convert/export things. There was a recent proposal to create an > ansible-role-tempest[0] that would take what we use in > tripleo-quickstart-extras[1] and separate it for re-usability by > others. So it was asked if we could work with the openstack-ansible > team and leverage the existing openstack-ansible-os_tempest[2]. It > turns out we have a few more already existing roles laying around as > well[3][4]. > > What I would like to propose is that we as a community come together > to agree on specific patterns so that we can leverage the same roles > for some of the core configuration/deployment functionality while > still allowing for specific project specific customization. What I've > noticed between all the project is that we have a few specific core > pieces of functionality that needs to be handled (or skipped as it may > be) for each service being deployed. > > 1) software installation > 2) configuration management > 3) service management > 4) misc service actions > > Depending on which flavor of the deployment you're using, the content > of each of these may be different. Just about the only thing that is > shared between them all would be the configuration management part.
> To that, I was wondering if there would be a benefit to establishing a > pattern within say openstack-ansible where we can disable items #1 and > #3 but reuse #2 in projects like kolla/tripleo where we need to do > some configuration generation. If we can't establish a similar > pattern it'll make it harder to reuse and contribute between the > various projects. > > In tripleo we've recently created a bunch of ansible-role-tripleo-* > repositories which we were planning on moving the tripleo specific > tasks (for upgrades, etc) to and were hoping that we might be able to > reuse the upstream ansible roles similar to how we've previously > leverage the puppet openstack work for configurations. So for us, it > would be beneficial if we could maybe help align/contribute/guide the > configuration management and maybe misc service action portions of the > openstack-ansible roles, but be able to disable the actual software > install/service management as that would be managed via our > ansible-role-tripleo-* roles. > > Is this something that would be beneficial to further discuss at the > PTG? Anyone have any additional suggestions/thoughts? > > My personal thoughts for tripleo would be that we'd have > tripleo-ansible calls openstack-ansible- for core config but > package/service installation disabled and calls > ansible-role-tripleo- for tripleo specific actions such as > opinionated packages/service configuration/upgrades. Maybe this is > too complex? But at the same time, do we need to come up with 3 > different ways to do this? 
> > Thanks, > -Alex > > [0] https://review.openstack.org/#/c/589133/ > [1] http://git.openstack.org/cgit/openstack/tripleo-quickstart-extras/tree/roles/validate-tempest > [2] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest/ > [3] http://git.openstack.org/cgit/openstack/kolla-ansible/tree/ansible/roles/tempest > [4] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-tempest > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From doug at doughellmann.com Thu Aug 9 20:56:27 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 09 Aug 2018 16:56:27 -0400 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: References: Message-ID: <1533848106-sup-4508@lrrr.local> Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600: > Ahoy folks, > > I think it's time we come up with some basic rules/patterns on where > code lands when it comes to OpenStack related Ansible roles and as we > convert/export things. There was a recent proposal to create an > ansible-role-tempest[0] that would take what we use in > tripleo-quickstart-extras[1] and separate it for re-usability by > others. So it was asked if we could work with the openstack-ansible > team and leverage the existing openstack-ansible-os_tempest[2]. It > turns out we have a few more already existing roles laying around as > well[3][4]. 
> > What I would like to propose is that we as a community come together > to agree on specific patterns so that we can leverage the same roles > for some of the core configuration/deployment functionality while > still allowing for specific project specific customization. What I've > noticed between all the project is that we have a few specific core > pieces of functionality that needs to be handled (or skipped as it may > be) for each service being deployed. > > 1) software installation > 2) configuration management > 3) service management > 4) misc service actions > > Depending on which flavor of the deployment you're using, the content > of each of these may be different. Just about the only thing that is > shared between them all would be the configuration management part. Does that make the 4 things separate roles, then? Isn't the role usually the unit of sharing between playbooks? Doug From doug at doughellmann.com Thu Aug 9 21:20:53 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 09 Aug 2018 17:20:53 -0400 Subject: [openstack-dev] [all][tc][election] Timing of the Upcoming Stein TC election In-Reply-To: <20180808043930.GK9540@thor.bakeyournoodle.com> References: <20180808043930.GK9540@thor.bakeyournoodle.com> Message-ID: <1533849636-sup-7516@lrrr.local> Excerpts from Tony Breeds's message of 2018-08-08 14:39:30 +1000: > Hello all, > With the PTL elections behind us it's time to start looking at the > TC election. Our charter[1] says: > > The election is held no later than 6 weeks prior to each OpenStack > Summit (on or before ‘S-6’ week), with elections held open for no less > than four business days. 
> > Assuming we have the same structure that gives us a timeline of: > > > > Summit is at: 2018-11-13 > > Latest possible completion is at: 2018-10-02 > > Moving back to Tuesday: 2018-10-02 > > TC Election from 2018-09-25T23:45 to 2018-10-02T23:45 > > TC Campaigning from 2018-09-18T23:45 to 2018-09-25T23:45 > > TC Nominations from 2018-09-11T23:45 to 2018-09-18T23:45 > > > > This puts the bulk of the nomination period during the PTG, which is > > sub-optimal as the nominations cause a distraction from the PTG but more > > so because the campaigning will coincide with travel home, and some > > community members take vacation along with the PTG. > > > > So I'd like to bring up the idea of moving the election forward a > > little so that it's actually the campaigning period that overlaps with > > the PTG: > > > > TC Election from 2018-09-18T23:45 to 2018-09-27T23:45 > > TC Campaigning from 2018-09-06T23:45 to 2018-09-18T23:45 > > TC Nominations from 2018-08-30T23:45 to 2018-09-06T23:45 > > > > This gives us longer campaigning and election periods. > > > > There are some advantages to doing this: > > > > * A panel style Q&A could be held formally or informally ;P > > * There's improved scope for incoming, outgoing and staying put TC > > members to interact in a high bandwidth way. > > * In person/private discussions with TC candidates/members. > > > > However it isn't without downsides: > > > > * Election fatigue, We've just had the PTL elections and the UC > > elections are currently running. Less break before the TC elections > > may not be a good thing. > > * TC candidates that can't travel to the PTG could be disadvantaged > > * The campaigning would all happen at the PTG and not on the mailing > > list disadvantaging community members not at the PTG. > > > > So thoughts? > > > > Yours Tony. > > > > [1] https://governance.openstack.org/tc/reference/charter.html Who needs to make this decision? The current TC?
Doug From aschultz at redhat.com Thu Aug 9 21:32:30 2018 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 9 Aug 2018 15:32:30 -0600 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: <1533848106-sup-4508@lrrr.local> References: <1533848106-sup-4508@lrrr.local> Message-ID: On Thu, Aug 9, 2018 at 2:56 PM, Doug Hellmann wrote: > Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600: >> Ahoy folks, >> >> I think it's time we come up with some basic rules/patterns on where >> code lands when it comes to OpenStack related Ansible roles and as we >> convert/export things. There was a recent proposal to create an >> ansible-role-tempest[0] that would take what we use in >> tripleo-quickstart-extras[1] and separate it for re-usability by >> others. So it was asked if we could work with the openstack-ansible >> team and leverage the existing openstack-ansible-os_tempest[2]. It >> turns out we have a few more already existing roles laying around as >> well[3][4]. >> >> What I would like to propose is that we as a community come together >> to agree on specific patterns so that we can leverage the same roles >> for some of the core configuration/deployment functionality while >> still allowing for specific project specific customization. What I've >> noticed between all the project is that we have a few specific core >> pieces of functionality that needs to be handled (or skipped as it may >> be) for each service being deployed. >> >> 1) software installation >> 2) configuration management >> 3) service management >> 4) misc service actions >> >> Depending on which flavor of the deployment you're using, the content >> of each of these may be different. Just about the only thing that is >> shared between them all would be the configuration management part. > > Does that make the 4 things separate roles, then? Isn't the role > usually the unit of sharing between playbooks? 
>
It can be, but it doesn't have to be. The problem comes in with the granularity at which you are defining the concept of the overall action. If you want a role to encompass all that is "nova", you could have a single nova role that you invoke 5 different times to do the different actions during the overall deployment. Or you could create a role for nova-install, nova-config, nova-service, nova-cells, etc. I think splitting them out into their own roles is a bit too much in terms of management. In my particular case, openstack-ansible is already creating a role to manage "nova". So is there a way that I can leverage part of their process within mine without having to duplicate it?

You can pull in the task files themselves from a different role, so technically I think you could define an ansible-role-tripleo-nova that does some include_tasks: ../../os_nova/tasks/install.yaml, but then we'd have to duplicate the variables in our playbook rather than invoking a role with some parameters.

IMHO this structure is an issue with the general sharing concepts of roles/tasks within ansible. It's not really well defined and there's not really a concept of inheritance, so I can't really extend your tasks with mine in more of a programming sense. I have to duplicate it or do something like include a specific task file from another role. Since I can't really extend a role in the traditional OO programming sense, I would like to figure out how I can leverage only part of it. This can be done by establishing ansible variables to trigger specific actions or just actually including the raw tasks themselves. Either of these concepts needs some sort of contract to be established so the other side won't get broken. We had this in puppet via parameters which are checked; there isn't really a similar concept in ansible, so it seems that we need to agree on some community established rules.
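Concretely, the task-file reuse just described would look something like this (a sketch: nothing checks that the including playbook defines the variables os_nova's tasks expect, which is exactly the missing contract):

```yaml
# Fragile cross-role reuse: pull os_nova's install task file directly
# into another role. The variable below is an assumed example of what
# those tasks might expect; nothing enforces this "contract".
- name: Reuse os_nova's install tasks
  include_tasks: ../../os_nova/tasks/install.yaml
  vars:
    nova_package_state: present  # assumed variable, for illustration
```

If os_nova renames or re-types that variable, the including role breaks silently, which is why an agreed interface matters.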
For tripleo, I would like to just invoke the os_nova role and pass in something like install: false, service: false, config_dir: /my/special/location/, config_data: {...} and have it spit out the configs. Then my roles would actually leverage these via containers/etc. Of course most of this would go away if we had a unified (not file based) configuration method across all services (openstack and non-openstack), but we don't. :D

Thanks,
-Alex

> Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From neil at tigera.io Thu Aug 9 21:36:02 2018 From: neil at tigera.io (Neil Jerram) Date: Thu, 9 Aug 2018 22:36:02 +0100 Subject: [openstack-dev] [keystone][nova] Struggling with non-admin user on Queens install In-Reply-To: References: Message-ID:

It appears this is to do with Keystone v3-created users not having any role assignment by default. Big thanks to lbragstad for helping me to understand this on IRC; he also provided this link as historical context for this situation: https://bugs.launchpad.net/keystone/+bug/1662911.
In detail, I was creating a non-admin project and user like this:

    tenant = self.keystone3.projects.create(username,
                                            "default",
                                            description=description,
                                            enabled=True)
    user = self.keystone3.users.create(username,
                                       domain="default",
                                       project=tenant.id,
                                       password=password)

With just that, that user won't be able to do anything; you need to give it a role assignment as well, for example:

    admin_role = None
    for role in self.keystone3.roles.list():
        _log.info("role: %r", role)
        if role.name == 'admin':
            admin_role = role
            break
    assert admin_role is not None, "Couldn't find 'admin' role"
    self.keystone3.roles.grant(admin_role, user=user, project=tenant)

I still don't have a good understanding of what 'admin' within that project really means, or why it means that that user can then do, e.g. nova.images.list(); but at least I have a working system again.

Regards,
Neil

On Thu, Aug 9, 2018 at 4:42 PM Neil Jerram wrote:

> I'd like to create a non-admin project and user that are able to do
> nova.images.list(), in a Queens install. IIUC, all users should be able to
> do that. I'm afraid I'm pretty lost and would appreciate any help.
>
> Define a function to test whether a particular set of credentials can do
> nova.images.list():
>
> from keystoneauth1 import identity
> from keystoneauth1 import session
> from novaclient.client import Client as NovaClient
>
> def attemp(auth):
>     sess = session.Session(auth=auth)
>     nova = NovaClient(2, session=sess)
>     for i in nova.images.list():
>         print i
>
> With an admin user, things work:
>
> >>> auth_url = "http://controller:5000/v3"
> >>> auth = identity.Password(auth_url=auth_url,
> >>>                          username="admin",
> >>>                          password="abcdef",
> >>>                          project_name="admin",
> >>>                          project_domain_id="default",
> >>>                          user_domain_id="default")
> >>> attemp(auth)
>
>
> With a non-admin user with project_id specified, 401:
>
> >>> tauth = identity.Password(auth_url=auth_url,
> ...                           username="tenant2",
> ...                           password="password",
> ...                           project_id="tenant2",
> ...
user_domain_id="default")
> >>> attemp(tauth)
> ...
> keystoneauth1.exceptions.http.Unauthorized: The request you have made
> requires authentication. (HTTP 401) (Request-ID:
> req-ed0630a4-7df0-4ba8-a4c4-de3ecb7b4d7d)
>
> With the same but without project_id, I get an empty service catalog
> instead:
>
> >>> tauth = identity.Password(auth_url=auth_url,
> ...                           username="tenant2",
> ...                           password="password",
> ...                           #project_name="tenant2",
> ...                           #project_domain_id="default",
> ...                           user_domain_id="default")
> >>>
> >>> attemp(tauth)
> ...
> keystoneauth1.exceptions.catalog.EmptyCatalog: The service catalog is
> empty.
>
> Can anyone help?
>
> Regards,
> Neil
>
-------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Thu Aug 9 21:44:42 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 10 Aug 2018 07:44:42 +1000 Subject: [openstack-dev] [all][tc][election] Timing of the Upcoming Stein TC election In-Reply-To: <1533849636-sup-7516@lrrr.local> References: <20180808043930.GK9540@thor.bakeyournoodle.com> <1533849636-sup-7516@lrrr.local> Message-ID: <20180809214442.GA5069@thor.bakeyournoodle.com> On Thu, Aug 09, 2018 at 05:20:53PM -0400, Doug Hellmann wrote: > Excerpts from Tony Breeds's message of 2018-08-08 14:39:30 +1000: > > Hello all, > > With the PTL elections behind us it's time to start looking at the > > TC election. Our charter[1] says: > > > > The election is held no later than 6 weeks prior to each OpenStack > > Summit (on or before 'S-6' week), with elections held open for no less > > than four business days.
> > > > Assuming we have the same structure that gives us a timeline of: > > > > Summit is at: 2018-11-13 > > Latest possible completion is at: 2018-10-02 > > Moving back to Tuesday: 2018-10-02 > > TC Election from 2018-09-25T23:45 to 2018-10-02T23:45 > > TC Campaigning from 2018-09-18T23:45 to 2018-09-25T23:45 > > TC Nominations from 2018-09-11T23:45 to 2018-09-18T23:45 > > > > This puts the bulk of the nomination period during the PTG, which is > > sub-optimal as the nominations cause a distraction from the PTG but more > > so because the campaigning will coincide with travel home, and some > > community members take vacation along with the PTG. > > > > So I'd like to bring up the idea of moving the election forward a > > little so that it's actually the campaigning period that overlaps with > > the PTG: > > > > TC Election from 2018-09-18T23:45 to 2018-09-27T23:45 > > TC Campaigning from 2018-09-06T23:45 to 2018-09-18T23:45 > > TC Nominations from 2018-08-30T23:45 to 2018-09-06T23:45 > > > > This gives us longer campaigning and election periods. > > > > There are some advantages to doing this: > > > > * A panel style Q&A could be held formally or informally ;P > > * There's improved scope for for incoming, outgoing and staying put TC > > members to interact in a high bandwidth way. > > * In personi/private discussions with TC candidates/members. > > > > However it isn't without downsides: > > > > * Election fatigue, We've just had the PTL elections and the UC > > elections are currently running. Less break before the TC elections > > may not be a good thing. > > * TC candidates that can't travel to the PTG could be disadvantaged > > * The campaigning would all happen at the PTG and not on the mailing > > list disadvantaging community members not at the PTG. > > > > So thoughts? > > > > Yours Tony. > > > > [1] https://governance.openstack.org/tc/reference/charter.html > > Who needs to make this decision? The current TC? 
I believe that the TC delegated that to the Election WG [1] but the governance here is a little gray/fuzzy. So I kinda think that if the TC doesn't object I can propose the patch to the election repo and you (as TC chair) can +/-1 it as you see fit. Is it fair to ask we do that shortly after the next TC office hours? Yours Tony. [1] https://governance.openstack.org/tc/reference/working-groups.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From kennelson11 at gmail.com Thu Aug 9 21:50:42 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 9 Aug 2018 14:50:42 -0700 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: You have done such amazing things with the program! We appreciate everything you do :) Enjoy the little extra spare time. -Kendall (diablo_rojo) On Tue, Aug 7, 2018 at 4:48 PM Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com> wrote: > Hi all, > > I'm reaching out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I have been contributing to this > effort for several rounds now and I believe it is a good moment for somebody > else to take the lead. You all know how important Outreachy is to me and > I'm grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met along the way. I plan to stay > involved with the internships but leave the coordination tasks to somebody > else. > > If you are interested in becoming an Outreachy coordinator, let me know > and I can share my experience and provide some guidance.
> > Thanks, > > Victoria > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Aug 9 21:52:05 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 21:52:05 -0000 Subject: [openstack-dev] ceilometer-powervm 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for ceilometer-powervm for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/ceilometer-powervm/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/ceilometer-powervm/log/?h=stable/rocky Release notes for ceilometer-powervm can be found at: http://docs.openstack.org/releasenotes/ceilometer-powervm/ From miguel at mlavalle.com Thu Aug 9 21:53:50 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 9 Aug 2018 16:53:50 -0500 Subject: [openstack-dev] [neutron] [drivers] Message-ID: Dear Neutron team members, Tomorrow I will be on an airplane during the drivers team meeting time and one team member is off on vacation. Next week, two of our team members will be off on vacation. As a consequence, Let's cancel the meetings on August 10th and 17th. We will resume normally on the 24th. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From miguel at mlavalle.com Thu Aug 9 21:57:42 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 9 Aug 2018 16:57:42 -0500 Subject: [openstack-dev] [neutron] [l3-sub-team] Weekly meeting cancellation Message-ID: Dear L3 sub-team members, On August 16th I will be on a business trip and other team members will be off on vacation. As a consequence, we will cancel our weekly meeting that day. We will resume at the usual time on the August 23rd Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Aug 9 21:58:11 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 21:58:11 -0000 Subject: [openstack-dev] networking-powervm 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for networking-powervm for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-powervm/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/networking-powervm/log/?h=stable/rocky Release notes for networking-powervm can be found at: http://docs.openstack.org/releasenotes/networking-powervm/ From no-reply at openstack.org Thu Aug 9 21:59:12 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 21:59:12 -0000 Subject: [openstack-dev] nova_powervm 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for nova_powervm for the end of the Rocky cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/nova-powervm/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/nova_powervm/log/?h=stable/rocky Release notes for nova_powervm can be found at: http://docs.openstack.org/releasenotes/nova_powervm/ From no-reply at openstack.org Thu Aug 9 21:59:59 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 21:59:59 -0000 Subject: [openstack-dev] congress 8.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for congress for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/congress/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/congress/log/?h=stable/rocky Release notes for congress can be found at: http://docs.openstack.org/releasenotes/congress/ From no-reply at openstack.org Thu Aug 9 22:00:19 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 22:00:19 -0000 Subject: [openstack-dev] congress-dashboard 3.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for congress-dashboard for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/congress-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. 
You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/congress-dashboard/log/?h=stable/rocky Release notes for congress-dashboard can be found at: http://docs.openstack.org/releasenotes/congress-dashboard/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/congress and tag it *rocky-rc-potential* to bring it to the congress-dashboard release crew's attention. From sean.mcginnis at gmx.com Thu Aug 9 22:01:28 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 9 Aug 2018 17:01:28 -0500 Subject: [openstack-dev] [all][docs] ACTION REQUIRED for projects using readthedocs In-Reply-To: <625a06b2-0a64-6018-8a3b-d2d8df419190@redhat.com> References: <625a06b2-0a64-6018-8a3b-d2d8df419190@redhat.com> Message-ID: <20180809220128.GA27978@sm-workstation> Resending the below since several projects using ReadTheDocs appear to have missed this. If your project publishes docs to ReadTheDocs, please follow these steps to avoid job failures. On Fri, Aug 03, 2018 at 02:20:40PM +1000, Ian Wienand wrote: > Hello, > > tl;dr : any projects using the "docs-on-readthedocs" job template > to trigger a build of their documentation in readthedocs needs to: > > 1) add the "openstackci" user as a maintainer of the RTD project > 2) generate a webhook integration URL for the project via RTD > 3) provide the unique webhook ID value in the "rtd_webhook_id" project > variable > > See > > https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-docs-on-readthedocs > > -- > > readthedocs has recently updated their API for triggering a > documentation build. In the old API, anyone could POST to a known URL > for the project and it would trigger a build. 
This end-point has > stopped responding and we now need to use an authenticated webhook to > trigger documentation builds. > > Since this is only done in the post and release pipelines, projects > probably haven't had great feedback that current methods are failing > and this may be a surprise. To check your publishing, you can go to > the zuul builds page [1] and filter by your project and the "post" > pipeline to find recent runs. > > There is now some setup required which can only be undertaken by a > current maintainer of the RTD project. > > In short; add the "openstackci" user as a maintainer, add a "generic > webhook" integration to the project, find the last bit of the URL from > that and put it in the project variable "rtd_webhook_id". > > Luckily OpenStack infra keeps a team of highly skilled digital artists > on retainer and they have produced a handy visual guide available at > > https://imgur.com/a/Pp4LH31 > > Once the RTD project is setup, you must provide the webhook ID value > in your project variables. This will look something like: > > - project: > templates: > - docs-on-readthedocs > - publish-to-pypi > vars: > rtd_webhook_id: '12345' > check: > jobs: > ... > > For actual examples; see pbrx [2] which keeps its config in tree, or > gerrit-dash-creator which has its configuration in project-config [3]. > > Happy to help if anyone is having issues, via mail or #openstack-infra > > Thanks! > > -i > > p.s. You don't *have* to use the jobs from the docs-on-readthedocs > templates and hence add infra as a maintainer; you can setup your own > credentials with zuul secrets in tree and write your playbooks and > jobs to use the generic role [4]. We're always happy to discuss any > concerns. 
> > [1] https://zuul.openstack.org/builds.html > [2] https://git.openstack.org/cgit/openstack/pbrx/tree/.zuul.yaml#n17 > [3] https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml > [4] https://zuul-ci.org/docs/zuul-jobs/roles.html#role-trigger-readthedocs > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From no-reply at openstack.org Thu Aug 9 22:14:27 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 22:14:27 -0000 Subject: [openstack-dev] networking-ovn 5.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for networking-ovn for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-ovn/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/networking-ovn/log/?h=stable/rocky Release notes for networking-ovn can be found at: http://docs.openstack.org/releasenotes/networking-ovn/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/networking-ovn and tag it *rocky-rc-potential* to bring it to the networking-ovn release crew's attention. 
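Coming back to the ReadTheDocs thread above: once the generic webhook integration exists, triggering a build boils down to one authenticated POST. The sketch below is an assumption based on that thread, not authoritative RTD documentation -- the project slug, webhook ID (the `rtd_webhook_id` value) and token are placeholders to be replaced with the real values from the project's Admin > Integrations page -- and it only builds and prints the request rather than sending it:

```python
# Sketch of triggering a Read the Docs build through a generic webhook.
# All three values are hypothetical placeholders; take real ones from
# your project's Admin > Integrations page on readthedocs.org.
RTD_PROJECT_SLUG = "my-project"
RTD_WEBHOOK_ID = "12345"      # the rtd_webhook_id value from the thread
RTD_TOKEN = "secret-token"    # the integration's secret (a Zuul secret in CI)

# Assumed URL shape of the generic webhook endpoint.
url = ("https://readthedocs.org/api/v2/webhook/"
       f"{RTD_PROJECT_SLUG}/{RTD_WEBHOOK_ID}/")

# Build but do not send the request, keeping the sketch side-effect free;
# a real job would POST it with the token as form data.
print(f"POST {url} token={RTD_TOKEN}")
```

In an actual post-pipeline job this request would be sent by the role the thread points at, with the token coming from a Zuul secret rather than a literal.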
From no-reply at openstack.org Thu Aug 9 22:14:33 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 22:14:33 -0000 Subject: [openstack-dev] neutron-fwaas 13.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for neutron-fwaas for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron-fwaas/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/neutron-fwaas/log/?h=stable/rocky Release notes for neutron-fwaas can be found at: http://docs.openstack.org/releasenotes/neutron-fwaas/ From no-reply at openstack.org Thu Aug 9 22:14:37 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 22:14:37 -0000 Subject: [openstack-dev] neutron 13.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for neutron for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/neutron/log/?h=stable/rocky Release notes for neutron can be found at: http://docs.openstack.org/releasenotes/neutron/ From no-reply at openstack.org Thu Aug 9 22:14:51 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 22:14:51 -0000 Subject: [openstack-dev] networking-midonet 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for networking-midonet for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-midonet/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/networking-midonet/log/?h=stable/rocky Release notes for networking-midonet can be found at: http://docs.openstack.org/releasenotes/networking-midonet/ From no-reply at openstack.org Thu Aug 9 22:15:14 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 22:15:14 -0000 Subject: [openstack-dev] networking-bagpipe 9.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for networking-bagpipe for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-bagpipe/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/networking-bagpipe/log/?h=stable/rocky Release notes for networking-bagpipe can be found at: http://docs.openstack.org/releasenotes/networking-bagpipe/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/networking-bagpipe and tag it *rocky-rc-potential* to bring it to the networking-bagpipe release crew's attention. From no-reply at openstack.org Thu Aug 9 22:16:19 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 22:16:19 -0000 Subject: [openstack-dev] neutron-dynamic-routing 13.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for neutron-dynamic-routing for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron-dynamic-routing/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/neutron-dynamic-routing/log/?h=stable/rocky Release notes for neutron-dynamic-routing can be found at: http://docs.openstack.org/releasenotes/neutron-dynamic-routing/ From no-reply at openstack.org Thu Aug 9 22:17:18 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 22:17:18 -0000 Subject: [openstack-dev] networking-sfc 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for networking-sfc for the end of the Rocky cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/networking-sfc/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/networking-sfc/log/?h=stable/rocky Release notes for networking-sfc can be found at: http://docs.openstack.org/releasenotes/networking-sfc/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/networking-sfc and tag it *rocky-rc-potential* to bring it to the networking-sfc release crew's attention. From no-reply at openstack.org Thu Aug 9 22:20:41 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 22:20:41 -0000 Subject: [openstack-dev] networking-odl 13.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for networking-odl for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-odl/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/networking-odl/log/?h=stable/rocky Release notes for networking-odl can be found at: http://docs.openstack.org/releasenotes/networking-odl/ From no-reply at openstack.org Thu Aug 9 22:20:46 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 22:20:46 -0000 Subject: [openstack-dev] neutron-vpnaas 13.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for neutron-vpnaas for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron-vpnaas/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/neutron-vpnaas/log/?h=stable/rocky Release notes for neutron-vpnaas can be found at: http://docs.openstack.org/releasenotes/neutron-vpnaas/ From no-reply at openstack.org Thu Aug 9 22:25:11 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 09 Aug 2018 22:25:11 -0000 Subject: [openstack-dev] networking-bgpvpn 9.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for networking-bgpvpn for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-bgpvpn/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/networking-bgpvpn/log/?h=stable/rocky Release notes for networking-bgpvpn can be found at: http://docs.openstack.org/releasenotes/networking-bgpvpn/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/bgpvpn and tag it *rocky-rc-potential* to bring it to the networking-bgpvpn release crew's attention. From whayutin at redhat.com Fri Aug 10 02:00:13 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 9 Aug 2018 20:00:13 -0600 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: References: <1533848106-sup-4508@lrrr.local> Message-ID: On Thu, Aug 9, 2018 at 5:33 PM Alex Schultz wrote: > On Thu, Aug 9, 2018 at 2:56 PM, Doug Hellmann > wrote: > > Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600: > >> Ahoy folks, > >> > >> I think it's time we come up with some basic rules/patterns on where > >> code lands when it comes to OpenStack related Ansible roles and as we > >> convert/export things. There was a recent proposal to create an > >> ansible-role-tempest[0] that would take what we use in > >> tripleo-quickstart-extras[1] and separate it for re-usability by > >> others. So it was asked if we could work with the openstack-ansible > >> team and leverage the existing openstack-ansible-os_tempest[2]. It > >> turns out we have a few more already existing roles laying around as > >> well[3][4]. > >> > >> What I would like to propose is that we as a community come together > >> to agree on specific patterns so that we can leverage the same roles > >> for some of the core configuration/deployment functionality while > >> still allowing for specific project specific customization. 
What I've > >> noticed between all the projects is that we have a few specific core > >> pieces of functionality that need to be handled (or skipped as it may > >> be) for each service being deployed. > >> > >> 1) software installation > >> 2) configuration management > >> 3) service management > >> 4) misc service actions > >> > >> Depending on which flavor of the deployment you're using, the content > >> of each of these may be different. Just about the only thing that is > >> shared between them all would be the configuration management part. > > > > Does that make the 4 things separate roles, then? Isn't the role > > usually the unit of sharing between playbooks? > > > > It can be, but it doesn't have to be. The problem comes in with the > granularity at which you are defining the concept of the overall > action. If you want a role to encompass all that is "nova", you could > have a single nova role that you invoke 5 different times to do the > different actions during the overall deployment. Or you could create a > role for nova-install, nova-config, nova-service, nova-cells, etc etc. > I think splitting them out into their own roles is a bit too much in > terms of management. In my particular case, openstack-ansible is already > creating a role to manage "nova". So is there a way that I can > leverage part of their process within mine without having to duplicate > it? You can pull in the task files themselves from a different role, so > technically I think you could define an ansible-role-tripleo-nova that > does some include_tasks: ../../os_nova/tasks/install.yaml but then > we'd have to duplicate the variables in our playbook rather than > invoking a role with some parameters. > > IMHO this structure is an issue with the general sharing concepts of > roles/tasks within ansible. It's not really well defined and there's > not really a concept of inheritance, so I can't really extend your > tasks with mine in more of a programming sense.
I have to duplicate it > or do something like include a specific task file from another role. > Since I can't really extend a role in the traditional OO programming > sense, I would like to figure out how I can leverage only part of it. > This can be done by establishing ansible variables to trigger specific > actions or just actually including the raw tasks themselves. Either > of these concepts needs some sort of contract to be established so the > other won't get broken. We had this in puppet via parameters which > are checked; there isn't really a similar concept in ansible, so it > seems that we need to agree on some community-established rules. > > For tripleo, I would like to just invoke the os_nova role and pass in > something like install: false, service: false, config_dir: > /my/special/location/, config_data: {...} and have it spit out the configs. > Then my roles would actually leverage these via containers/etc. Of > course most of this goes away if we had a unified (not file based) > configuration method across all services (openstack and non-openstack) > but we don't. :D > I like your idea here Alex. > So having a role for each of these steps is too much management, I agree; > however, > establishing a pattern of using tasks for each step may be a really good > way to cleanly handle this. > > Are you saying something like the following?
openstack-nova-role/
  tasks/
    install.yml
    service.yml
    config.yml
    main.yml
---------------------------
# main.yml

- include: install.yml
  when: nova_install|bool

- include: service.yml
  when: nova_service|bool

- include: config.yml
  when: nova_config|bool
--------------------------------------------------
Interested in anything other than tags :) Thanks > > Thanks, > -Alex > > > Doug > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +19197544114 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed...
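Filling in the sketch from this thread: a minimal tasks/main.yml for such a role could use include_tasks with per-phase boolean toggles. The role name and the nova_* variables below are illustrative assumptions, not the interface of any existing openstack-ansible role:

```yaml
# tasks/main.yml of a hypothetical openstack-nova-role.
# defaults/main.yml would declare the toggles, e.g.:
#   nova_install: true
#   nova_config: true
#   nova_service: true

- include_tasks: install.yml
  when: nova_install | bool

- include_tasks: config.yml
  when: nova_config | bool

- include_tasks: service.yml
  when: nova_service | bool
```

A consumer that only wants the rendered configuration (the TripleO case described earlier in the thread) would then invoke the role with nova_install: false and nova_service: false and pick up the generated files itself.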
URL: From pabelanger at redhat.com Fri Aug 10 02:17:31 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 9 Aug 2018 22:17:31 -0400 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: References: <1533848106-sup-4508@lrrr.local> Message-ID: <20180810021731.GA5481@localhost.localdomain> On Thu, Aug 09, 2018 at 08:00:13PM -0600, Wesley Hayutin wrote: > On Thu, Aug 9, 2018 at 5:33 PM Alex Schultz wrote: > > > On Thu, Aug 9, 2018 at 2:56 PM, Doug Hellmann > > wrote: > > > Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600: > > >> Ahoy folks, > > >> > > >> I think it's time we come up with some basic rules/patterns on where > > >> code lands when it comes to OpenStack related Ansible roles and as we > > >> convert/export things. There was a recent proposal to create an > > >> ansible-role-tempest[0] that would take what we use in > > >> tripleo-quickstart-extras[1] and separate it for re-usability by > > >> others. So it was asked if we could work with the openstack-ansible > > >> team and leverage the existing openstack-ansible-os_tempest[2]. It > > >> turns out we have a few more already existing roles laying around as > > >> well[3][4]. > > >> > > >> What I would like to propose is that we as a community come together > > >> to agree on specific patterns so that we can leverage the same roles > > >> for some of the core configuration/deployment functionality while > > >> still allowing for specific project specific customization. What I've > > >> noticed between all the project is that we have a few specific core > > >> pieces of functionality that needs to be handled (or skipped as it may > > >> be) for each service being deployed. 
> > >> > > >> 1) software installation > > >> 2) configuration management > > >> 3) service management > > >> 4) misc service actions > > >> > > >> Depending on which flavor of the deployment you're using, the content > > >> of each of these may be different. Just about the only thing that is > > >> shared between them all would be the configuration management part. > > > > > > Does that make the 4 things separate roles, then? Isn't the role > > > usually the unit of sharing between playbooks? > > > > > > > It can be, but it doesn't have to be. The problem comes in with the > > granularity at which you are defining the concept of the overall > > action. If you want a role to encompass all that is "nova", you could > > have a single nova role that you invoke 5 different times to do the > > different actions during the overall deployment. Or you could create a > > role for nova-install, nova-config, nova-service, nova-cells, etc etc. > > I think splitting them out into their own role is a bit too much in > > terms of management. In my particular openstack-ansible is already > > creating a role to manage "nova". So is there a way that I can > > leverage part of their process within mine without having to duplicate > > it. You can pull in the task files themselves from a different so > > technically I think you could define a ansible-role-tripleo-nova that > > does some include_tasks: ../../os_nova/tasks/install.yaml but then > > we'd have to duplicate the variables in our playbook rather than > > invoking a role with some parameters. > > > > IMHO this structure is an issue with the general sharing concepts of > > roles/tasks within ansible. It's not really well defined and there's > > not really a concept of inheritance so I can't really extend your > > tasks with mine in more of a programming sense. I have to duplicate it > > or do something like include a specific task file from another role. 
> > Since I can't really extend a role in the traditional OO programing > > sense, I would like to figure out how I can leverage only part of it. > > This can be done by establishing ansible variables to trigger specific > > actions or just actually including the raw tasks themselves. Either > > of these concepts needs some sort of contract to be established to the > > other won't get broken. We had this in puppet via parameters which > > are checked, there isn't really a similar concept in ansible so it > > seems that we need to agree on some community established rules. > > > > For tripleo, I would like to just invoke the os_nova role and pass in > > like install: false, service: false, config_dir: > > /my/special/location/, config_data: {...} and it spit out the configs. > > Then my roles would actually leverage these via containers/etc. Of > > course most of this goes away if we had a unified (not file based) > > configuration method across all services (openstack and non-openstack) > > but we don't. :D > > > > I like your idea here Alex. > So having a role for each of these steps is too much management I agree, > however > establishing a pattern of using tasks for each step may be a really good > way to cleanly handle this. > > Are you saying something like the following? > > openstack-nova-role/ > * * /tasks/ > * * /tasks/install.yml > * * /tasks/service.yml > * */tasks/config.yml > * */taks/main.yml > --------------------------- > # main.yml > > include: install.yml > when: nova_install|bool > > include: service.yml > when: nova_service|bool > > include: config.yml > when: nova_config.yml > -------------------------------------------------- > > Interested in anything other than tags :) > Thanks > This is basically what I do with roles i write, allow the user to decide to step over specific tasks. 
For example, I have created a nodepool_task_manager variable with the following: http://git.openstack.org/cgit/openstack/ansible-role-nodepool/tree/defaults/main.yaml#n16 http://git.openstack.org/cgit/openstack/ansible-role-nodepool/tree/tasks/main.yaml Been using it for a few years now; it works much better than tags for me. The phases are pre, install, configure, service right now. - Paul From muroi.masahito at lab.ntt.co.jp Fri Aug 10 02:56:35 2018 From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI) Date: Fri, 10 Aug 2018 11:56:35 +0900 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface In-Reply-To: <1533807698.26377.7@smtp.office365.com> References: <1533807698.26377.7@smtp.office365.com> Message-ID: <637d3d0e-043a-d14c-918f-7ad8e44ac1ec@lab.ntt.co.jp> Thanks for the notification! Blazar already supports the versioned notifications. It consumes the service.update type notification internally, and I checked that the feature works only with the versioned one. best regards, Masahito On 2018/08/09 18:41, Balázs Gibizer wrote: > Dear Nova notification consumers! > > > The Nova team made progress with the new versioned notification > interface [1] and it has almost reached feature parity [2] with the > legacy, unversioned one. So the Nova team will discuss the deprecation > of the legacy interface at the upcoming PTG. There is a list of projects (we > know of) consuming the legacy interface and we would like to know if any > of these projects plan to switch over to the new interface in the > foreseeable future so we can make a well-informed decision about the > deprecation. 
> > > * Searchlight [3] - it is in maintenance mode so I guess the answer is no > * Designate [4] > * Telemetry [5] > * Mistral [6] > * Blazar [7] > * Watcher [8] - it seems Watcher uses both legacy and versioned nova > notifications > * Masakari - I'm not sure Masakari depends on nova notifications or not > > Cheers, > gibi > > [1] https://docs.openstack.org/nova/latest/reference/notifications.html > [2] http://burndown.peermore.com/nova-notification/ > > [3] > https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py > > [4] > https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py > > [5] > https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2 > > [6] > https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2 > > [7] > https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst > > [8] > https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335 > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From no-reply at openstack.org Fri Aug 10 03:20:03 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 10 Aug 2018 03:20:03 -0000 Subject: [openstack-dev] glance 17.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for glance for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/glance/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. 
You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/glance/log/?h=stable/rocky Release notes for glance can be found at: http://docs.openstack.org/releasenotes/glance/ From no-reply at openstack.org Fri Aug 10 03:26:24 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 10 Aug 2018 03:26:24 -0000 Subject: [openstack-dev] sahara 9.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for sahara for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/sahara/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/sahara/log/?h=stable/rocky Release notes for sahara can be found at: http://docs.openstack.org/releasenotes/sahara/ From no-reply at openstack.org Fri Aug 10 03:27:27 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 10 Aug 2018 03:27:27 -0000 Subject: [openstack-dev] sahara-dashboard 9.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for sahara-dashboard for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/sahara-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/sahara-dashboard/log/?h=stable/rocky Release notes for sahara-dashboard can be found at: http://docs.openstack.org/releasenotes/sahara-dashboard/ From no-reply at openstack.org Fri Aug 10 03:28:30 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 10 Aug 2018 03:28:30 -0000 Subject: [openstack-dev] sahara-image-elements 9.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for sahara-image-elements for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/sahara-image-elements/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/sahara-image-elements/log/?h=stable/rocky Release notes for sahara-image-elements can be found at: http://docs.openstack.org/releasenotes/sahara-image-elements/ From witold.bedyk at est.fujitsu.com Fri Aug 10 09:45:50 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Fri, 10 Aug 2018 09:45:50 +0000 Subject: [openstack-dev] [monasca] Vacation Message-ID: Hello everyone, I'll be on vacation until August 31st. 
My deputies in that time will be: * Doug Szumski (dougsz) and * Dobrosław Żybort (Dobroslaw) Thanks a lot Witek From chkumar246 at gmail.com Fri Aug 10 09:59:30 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Fri, 10 Aug 2018 15:29:30 +0530 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: <20180810021731.GA5481@localhost.localdomain> References: <1533848106-sup-4508@lrrr.local> <20180810021731.GA5481@localhost.localdomain> Message-ID: Hello, On Fri, Aug 10, 2018 at 7:47 AM Paul Belanger wrote: > > On Thu, Aug 09, 2018 at 08:00:13PM -0600, Wesley Hayutin wrote: > > On Thu, Aug 9, 2018 at 5:33 PM Alex Schultz wrote: > > > > > On Thu, Aug 9, 2018 at 2:56 PM, Doug Hellmann > > > wrote: > > > > Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600: > > > >> Ahoy folks, > > > >> > > > >> I think it's time we come up with some basic rules/patterns on where > > > >> code lands when it comes to OpenStack related Ansible roles and as we > > > >> convert/export things. There was a recent proposal to create an > > > >> ansible-role-tempest[0] that would take what we use in > > > >> tripleo-quickstart-extras[1] and separate it for re-usability by > > > >> others. So it was asked if we could work with the openstack-ansible > > > >> team and leverage the existing openstack-ansible-os_tempest[2]. It > > > >> turns out we have a few more already existing roles laying around as > > > >> well[3][4]. > > > >> > > > >> What I would like to propose is that we as a community come together > > > >> to agree on specific patterns so that we can leverage the same roles > > > >> for some of the core configuration/deployment functionality while > > > >> still allowing for specific project specific customization. 
What I've > > > >> noticed between all the project is that we have a few specific core > > > >> pieces of functionality that needs to be handled (or skipped as it may > > > >> be) for each service being deployed. > > > >> > > > >> 1) software installation > > > >> 2) configuration management > > > >> 3) service management > > > >> 4) misc service actions > > > >> > > > >> Depending on which flavor of the deployment you're using, the content > > > >> of each of these may be different. Just about the only thing that is > > > >> shared between them all would be the configuration management part. > > > > > > > > Does that make the 4 things separate roles, then? Isn't the role > > > > usually the unit of sharing between playbooks? > > > > > > > > > > It can be, but it doesn't have to be. The problem comes in with the > > > granularity at which you are defining the concept of the overall > > > action. If you want a role to encompass all that is "nova", you could > > > have a single nova role that you invoke 5 different times to do the > > > different actions during the overall deployment. Or you could create a > > > role for nova-install, nova-config, nova-service, nova-cells, etc etc. > > > I think splitting them out into their own role is a bit too much in > > > terms of management. In my particular openstack-ansible is already > > > creating a role to manage "nova". So is there a way that I can > > > leverage part of their process within mine without having to duplicate > > > it. You can pull in the task files themselves from a different so > > > technically I think you could define a ansible-role-tripleo-nova that > > > does some include_tasks: ../../os_nova/tasks/install.yaml but then > > > we'd have to duplicate the variables in our playbook rather than > > > invoking a role with some parameters. > > > > > > IMHO this structure is an issue with the general sharing concepts of > > > roles/tasks within ansible. 
It's not really well defined and there's > > > not really a concept of inheritance so I can't really extend your > > > tasks with mine in more of a programming sense. I have to duplicate it > > > or do something like include a specific task file from another role. > > > Since I can't really extend a role in the traditional OO programing > > > sense, I would like to figure out how I can leverage only part of it. > > > This can be done by establishing ansible variables to trigger specific > > > actions or just actually including the raw tasks themselves. Either > > > of these concepts needs some sort of contract to be established to the > > > other won't get broken. We had this in puppet via parameters which > > > are checked, there isn't really a similar concept in ansible so it > > > seems that we need to agree on some community established rules. > > > > > > For tripleo, I would like to just invoke the os_nova role and pass in > > > like install: false, service: false, config_dir: > > > /my/special/location/, config_data: {...} and it spit out the configs. > > > Then my roles would actually leverage these via containers/etc. Of > > > course most of this goes away if we had a unified (not file based) > > > configuration method across all services (openstack and non-openstack) > > > but we don't. :D > > > > > > > I like your idea here Alex. > > So having a role for each of these steps is too much management I agree, > > however > > establishing a pattern of using tasks for each step may be a really good > > way to cleanly handle this. > > > > Are you saying something like the following? 
> > > > openstack-nova-role/ > > * * /tasks/ > > * * /tasks/install.yml > > * * /tasks/service.yml > > * */tasks/config.yml > > * */taks/main.yml > > --------------------------- > > # main.yml > > > > include: install.yml > > when: nova_install|bool > > > > include: service.yml > > when: nova_service|bool > > > > include: config.yml > > when: nova_config.yml > > -------------------------------------------------- > > > > Interested in anything other than tags :) > > Thanks > > > This is basically what I do with roles i write, allow the user to decide to step > over specific tasks. For example, I have created nodepool_task_manager variable > with the following: > > http://git.openstack.org/cgit/openstack/ansible-role-nodepool/tree/defaults/main.yaml#n16 > http://git.openstack.org/cgit/openstack/ansible-role-nodepool/tree/tasks/main.yaml > > Been using it for a few years now, works much better then tags for me. The > phases are pre, install, configure, service right now. Thanks, Alex, for starting the conversation. There are a few other ansible roles for tempest and its friends (stackviz): https://github.com/redhat-openstack/infrared/tree/master/plugins/tempest https://github.com/openstack/tempest/tree/master/roles It would be a great idea to improve the ansible-role-os_tempest role and modify it in such a way that it can be re-used by anyone. I will start working on this. Thanks, Chandan Kumar From no-reply at openstack.org Fri Aug 10 10:10:36 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 10 Aug 2018 10:10:36 -0000 Subject: [openstack-dev] nova 18.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for nova for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/nova/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. 
You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/nova/log/?h=stable/rocky Release notes for nova can be found at: http://docs.openstack.org/releasenotes/nova/ From mark at stackhpc.com Fri Aug 10 10:26:50 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 10 Aug 2018 11:26:50 +0100 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: References: <1533848106-sup-4508@lrrr.local> <20180810021731.GA5481@localhost.localdomain> Message-ID: A couple of changes [1][2] that I proposed to kolla ansible recently as a PoC could be related here. Kolla ansible is full of almost identical roles for each service, with a lot of duplicated 'code' (YAML). The idea was to try to factor some of that out into shared roles. This results in less code and a more data-driven approach, which is inherently more configurable (for better or worse). The two changes are for configuration and user/service/endpoint registration. The same could easily be done with DB management and various other tasks. These roles are quite specific to kolla ansible, since they are tied to the configuration layout and the use of a kolla_toolbox container for executing keystone/DB ansible modules. 
[1] https://review.openstack.org/587591 [2] https://review.openstack.org/587590 Mark On 10 August 2018 at 10:59, Chandan kumar wrote: > Hello, > > On Fri, Aug 10, 2018 at 7:47 AM Paul Belanger > wrote: > > > > On Thu, Aug 09, 2018 at 08:00:13PM -0600, Wesley Hayutin wrote: > > > On Thu, Aug 9, 2018 at 5:33 PM Alex Schultz > wrote: > > > > > > > On Thu, Aug 9, 2018 at 2:56 PM, Doug Hellmann > > > > > wrote: > > > > > Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600: > > > > >> Ahoy folks, > > > > >> > > > > >> I think it's time we come up with some basic rules/patterns on > where > > > > >> code lands when it comes to OpenStack related Ansible roles and > as we > > > > >> convert/export things. There was a recent proposal to create an > > > > >> ansible-role-tempest[0] that would take what we use in > > > > >> tripleo-quickstart-extras[1] and separate it for re-usability by > > > > >> others. So it was asked if we could work with the > openstack-ansible > > > > >> team and leverage the existing openstack-ansible-os_tempest[2]. > It > > > > >> turns out we have a few more already existing roles laying around > as > > > > >> well[3][4]. > > > > >> > > > > >> What I would like to propose is that we as a community come > together > > > > >> to agree on specific patterns so that we can leverage the same > roles > > > > >> for some of the core configuration/deployment functionality while > > > > >> still allowing for specific project specific customization. What > I've > > > > >> noticed between all the project is that we have a few specific > core > > > > >> pieces of functionality that needs to be handled (or skipped as > it may > > > > >> be) for each service being deployed. 
> > > > >> > > > > >> 1) software installation > > > > >> 2) configuration management > > > > >> 3) service management > > > > >> 4) misc service actions > > > > >> > > > > >> Depending on which flavor of the deployment you're using, the > content > > > > >> of each of these may be different. Just about the only thing > that is > > > > >> shared between them all would be the configuration management > part. > > > > > > > > > > Does that make the 4 things separate roles, then? Isn't the role > > > > > usually the unit of sharing between playbooks? > > > > > > > > > > > > > It can be, but it doesn't have to be. The problem comes in with the > > > > granularity at which you are defining the concept of the overall > > > > action. If you want a role to encompass all that is "nova", you > could > > > > have a single nova role that you invoke 5 different times to do the > > > > different actions during the overall deployment. Or you could create > a > > > > role for nova-install, nova-config, nova-service, nova-cells, etc > etc. > > > > I think splitting them out into their own role is a bit too much in > > > > terms of management. In my particular openstack-ansible is already > > > > creating a role to manage "nova". So is there a way that I can > > > > leverage part of their process within mine without having to > duplicate > > > > it. You can pull in the task files themselves from a different so > > > > technically I think you could define a ansible-role-tripleo-nova that > > > > does some include_tasks: ../../os_nova/tasks/install.yaml but then > > > > we'd have to duplicate the variables in our playbook rather than > > > > invoking a role with some parameters. > > > > > > > > IMHO this structure is an issue with the general sharing concepts of > > > > roles/tasks within ansible. It's not really well defined and there's > > > > not really a concept of inheritance so I can't really extend your > > > > tasks with mine in more of a programming sense. 
I have to duplicate > it > > > > or do something like include a specific task file from another role. > > > > Since I can't really extend a role in the traditional OO programing > > > > sense, I would like to figure out how I can leverage only part of it. > > > > This can be done by establishing ansible variables to trigger > specific > > > > actions or just actually including the raw tasks themselves. Either > > > > of these concepts needs some sort of contract to be established to > the > > > > other won't get broken. We had this in puppet via parameters which > > > > are checked, there isn't really a similar concept in ansible so it > > > > seems that we need to agree on some community established rules. > > > > > > > > For tripleo, I would like to just invoke the os_nova role and pass in > > > > like install: false, service: false, config_dir: > > > > /my/special/location/, config_data: {...} and it spit out the > configs. > > > > Then my roles would actually leverage these via containers/etc. Of > > > > course most of this goes away if we had a unified (not file based) > > > > configuration method across all services (openstack and > non-openstack) > > > > but we don't. :D > > > > > > > > > > I like your idea here Alex. > > > So having a role for each of these steps is too much management I > agree, > > > however > > > establishing a pattern of using tasks for each step may be a really > good > > > way to cleanly handle this. > > > > > > Are you saying something like the following? 
> > > > > > openstack-nova-role/ > > > * * /tasks/ > > > * * /tasks/install.yml > > > * * /tasks/service.yml > > > * */tasks/config.yml > > > * */taks/main.yml > > > --------------------------- > > > # main.yml > > > > > > include: install.yml > > > when: nova_install|bool > > > > > > include: service.yml > > > when: nova_service|bool > > > > > > include: config.yml > > > when: nova_config.yml > > > -------------------------------------------------- > > > > > > Interested in anything other than tags :) > > > Thanks > > > > > This is basically what I do with roles i write, allow the user to decide > to step > > over specific tasks. For example, I have created nodepool_task_manager > variable > > with the following: > > > > http://git.openstack.org/cgit/openstack/ansible-role- > nodepool/tree/defaults/main.yaml#n16 > > http://git.openstack.org/cgit/openstack/ansible-role- > nodepool/tree/tasks/main.yaml > > > > Been using it for a few years now, works much better then tags for me. > The > > phases are pre, install, configure, service right now. > > > Thanks Alex for starting the conversation. > There are few other ansible roles for tempest and it's friends (stackviz) > https://github.com/redhat-openstack/infrared/tree/master/plugins/tempest > https://github.com/openstack/tempest/tree/master/roles > > It would be a great idea to improve ansible-role-os_tempest role and > modify it such a way that it can be re-used by anyone. > I will start working on this. > > Thanks, > > Chandan Kumar > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From AnNP at vn.fujitsu.com Fri Aug 10 10:10:28 2018 From: AnNP at vn.fujitsu.com (Nguyen Phuong, An) Date: Fri, 10 Aug 2018 10:10:28 +0000 Subject: [openstack-dev] [neutron] Security group logging Message-ID: <12a670165d8348fa8079d4510a4b6549@G07SGEXCMSGPS05.g07.fujitsu.local> Hi team, Have a nice day. Security Group Logging was merged in the Queens cycle, and we've just found a critical bug in it, which has been addressed in [1] and [2]. These patches are already in good shape (they got +2 from core reviewers). So, could you please help review and bless these patches so they can be merged into the Rocky stable branch? After that, we can backport them to the Queens stable branch. [1] https://review.openstack.org/#/c/587681/ [2] https://review.openstack.org/#/c/587770/ Thank you in advance, Best regards, An From Jesse.Pretorius at rackspace.co.uk Fri Aug 10 10:38:28 2018 From: Jesse.Pretorius at rackspace.co.uk (Jesse Pretorius) Date: Fri, 10 Aug 2018 10:38:28 +0000 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: <20180810021731.GA5481@localhost.localdomain> References: <1533848106-sup-4508@lrrr.local> <20180810021731.GA5481@localhost.localdomain> Message-ID: <5E07888C-3CE9-4DA1-9535-BB69910A39FD@rackspace.co.uk> > On 8/10/18, 3:20 AM, "Paul Belanger" wrote: > > This is basically what I do with roles i write, allow the user to decide to step > over specific tasks. For example, I have created nodepool_task_manager variable > with the following: > > http://git.openstack.org/cgit/openstack/ansible-role-nodepool/tree/defaults/main.yaml#n16 > http://git.openstack.org/cgit/openstack/ansible-role-nodepool/tree/tasks/main.yaml > > Been using it for a few years now, works much better then tags for me. The > phases are pre, install, configure, service right now. Hi folks, I'm really happy that this conversation is happening. Thanks to Alex for raising it! 
The task routing pattern is a reasonably good one, and is something which OSA is very likely to move towards in the future. Something else which I've always liked about the pattern used by the roles Paul has put together is that they clearly state the input expectation - for example, the role does not manage apt repositories, or implement any apt refreshes. This is the kind of thing that I think is going to be important - the role should be clear about what it does, clear about the inputs it expects, and the outputs it produces. This will mean that the internals of the role can change, but those inputs are like an API - if you give the role that, it must always output the same result. I can see it possibly being useful to include things like how to test the service in the service role, how to evaluate its health, how to enact an upgrade, how to enact a fast-forward upgrade, etc. But a good start initially would be to define some standards which inform the development of the roles. Within OSA, for better or worse, we have a mix of role types - some are 'utility', built for include_role usage. An example is http://git.openstack.org/cgit/openstack/ansible-role-systemd_service - give it the right vars, and it lays down the type of system service unit that you asked for in a standard way. We also make use of the http://git.openstack.org/cgit/openstack/ansible-config_template action plugin *everywhere* because it allows us not to be bothered with variables for every tunable under the sun - we only need variables for specific things that glue services together, or implement 'sensible defaults'. We then have what I might call 'integration' roles - these are roles like http://git.openstack.org/cgit/openstack/openstack-ansible-os_nova which does all things nova, and uses include_role to bring in various utility roles. 
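As a sketch of that utility/integration split, an integration role might render a config with override support and then delegate unit management to a shared utility role. The variable names below are invented for illustration; they are not the actual interfaces of the config_template plugin or the systemd_service role:

```yaml
# Hypothetical integration-role tasks/main.yml
- name: Render nova.conf, letting deployers override any option
  config_template:
    src: nova.conf.j2
    dest: /etc/nova/nova.conf
    config_type: ini
    config_overrides: "{{ nova_conf_overrides | default({}) }}"

- name: Manage the nova-api unit via the shared utility role
  include_role:
    name: systemd_service
  vars:
    # Illustrative contract only; the real role documents its own inputs.
    service_name: nova-api
    service_exec_start: /usr/bin/nova-api
    service_enabled: true
```

The point is that the integration role owns the service-specific glue while the utility roles own the mechanics.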
We try to follow the idea that a single role operates on a single host, and try to avoid orchestration across multiple hosts inside a role, but it's not always that simple for it to be cut and dried and I'm starting to think we might want to change that, for upgrades especially. Laying down something like keystone or nova, with all the options like federation or the nova drivers, is a complex thing to do. Putting it all in one role makes the role hard to understand, but at least it's obvious where you go to find it. I guess what I'm trying to get across is that trying to build commonly used roles is not going to be a simple process. All projects have very different styles, and very different ways of putting a service like nova down. However, we should start somewhere - break it down into a set of utility roles we'd like to see available, figure out the standards that matter for input/output and Ansible version support, figure out the release and branching strategy for them, get them done and move to using them and retire any previously implemented roles which duplicate their purpose, then target the next set. I think it would be beneficial to get in a room at the PTG and figure out where we start, and agree on some standards. I'd like to see a strong facilitator for the session who can ensure that we keep things on topic and productive. ________________________________ Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. 
Your cooperation is appreciated. From mordred at inaugust.com Fri Aug 10 12:06:36 2018 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 10 Aug 2018 07:06:36 -0500 Subject: [openstack-dev] [sdk] Propose adding Dmitry Tantsur to openstacksdk-core Message-ID: Hey everybody, I'd like to propose Dmitry Tantsur (dtantsur) as a new openstacksdk core team member. He's been diving in to some of the hard bits, such as dealing with microversions, and has a good grasp of the resource/proxy layer. His reviews have been super useful broadly, and he's also helping drive Ironic related functionality. Thoughts/concerns? Thanks! Monty From mordred at inaugust.com Fri Aug 10 12:06:43 2018 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 10 Aug 2018 07:06:43 -0500 Subject: [openstack-dev] [sdk] Pruning core team Message-ID: <2e1d7af0-112e-549b-aaf9-72c9c4a9581e@inaugust.com> Hey everybody, We have some former contributors who haven't been involved in the last cycle that we should prune from the roster. They're all wonderful humans and it would be awesome to have them back if life presented them an opportunity to be involved again. I'd like to propose removing Brian Curtin, Clint Byrum, David Simard, and Ricardo Cruz. Thoughts/concerns? Thanks! Monty From artem.goncharov at gmail.com Fri Aug 10 12:18:58 2018 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Fri, 10 Aug 2018 14:18:58 +0200 Subject: [openstack-dev] [sdk] Propose adding Dmitry Tantsur to openstacksdk-core In-Reply-To: References: Message-ID: +1 On Fri, 10 Aug 2018, 14:06 Monty Taylor, wrote: > Hey everybody, > > I'd like to propose Dmitry Tantsur (dtantsur) as a new openstacksdk core > team member. He's been diving in to some of the hard bits, such as > dealing with microversions, and has a good grasp of the resource/proxy > layer. His reviews have been super useful broadly, and he's also helping > drive Ironic related functionality. > > Thoughts/concerns? > > Thanks! 
> Monty > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Aug 10 12:54:51 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 10 Aug 2018 14:54:51 +0200 Subject: [openstack-dev] [sdk] Propose adding Dmitry Tantsur to openstacksdk-core In-Reply-To: References: Message-ID: <93947DF7-BDA6-4355-897B-EE45837D7604@redhat.com> +1 > Wiadomość napisana przez Monty Taylor w dniu 10.08.2018, o godz. 14:06: > > Hey everybody, > > I'd like to propose Dmitry Tantsur (dtantsur) as a new openstacksdk core team member. He's been diving in to some of the hard bits, such as dealing with microversions, and has a good grasp of the resource/proxy layer. His reviews have been super useful broadly, and he's also helping drive Ironic related functionality. > > Thoughts/concerns? > > Thanks! > Monty > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From ricardo.carrillo.cruz at gmail.com Fri Aug 10 13:27:02 2018 From: ricardo.carrillo.cruz at gmail.com (Ricardo Carrillo Cruz) Date: Fri, 10 Aug 2018 15:27:02 +0200 Subject: [openstack-dev] [sdk] Pruning core team In-Reply-To: <2e1d7af0-112e-549b-aaf9-72c9c4a9581e@inaugust.com> References: <2e1d7af0-112e-549b-aaf9-72c9c4a9581e@inaugust.com> Message-ID: Good from me, as I cannot spend cycles on reviewing code on my current position. Long live shade! El vie., 10 ago. 
2018 a las 14:07, Monty Taylor () escribió: > Hey everybody, > > We have some former contributors who haven't been involved in the last > cycle that we should prune from the roster. They're all wonderful humans > and it would be awesome to have them back if life presented them an > opportunity to be involved again. > > I'd like to propose removing Brian Curtin, Clint Byrum, David Simard, > and Ricardo Cruz. > > Thoughts/concerns? > > Thanks! > Monty > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgolovat at redhat.com Fri Aug 10 14:55:35 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Fri, 10 Aug 2018 17:55:35 +0300 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: References: <1533848106-sup-4508@lrrr.local> Message-ID: Hi, On Fri, Aug 10, 2018 at 5:00 AM, Wesley Hayutin wrote: > > > On Thu, Aug 9, 2018 at 5:33 PM Alex Schultz wrote: > >> On Thu, Aug 9, 2018 at 2:56 PM, Doug Hellmann >> wrote: >> > Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600: >> >> Ahoy folks, >> >> >> >> I think it's time we come up with some basic rules/patterns on where >> >> code lands when it comes to OpenStack related Ansible roles and as we >> >> convert/export things. There was a recent proposal to create an >> >> ansible-role-tempest[0] that would take what we use in >> >> tripleo-quickstart-extras[1] and separate it for re-usability by >> >> others. So it was asked if we could work with the openstack-ansible >> >> team and leverage the existing openstack-ansible-os_tempest[2]. 
It >> >> turns out we have a few more already existing roles lying around as >> well[3][4]. >> >> >> >> What I would like to propose is that we as a community come together >> >> to agree on specific patterns so that we can leverage the same roles >> >> for some of the core configuration/deployment functionality while >> >> still allowing for project-specific customization. What I've >> >> noticed across all the projects is that we have a few specific core >> >> pieces of functionality that need to be handled (or skipped as it may >> >> be) for each service being deployed. >> >> >> >> 1) software installation >> >> 2) configuration management >> >> 3) service management >> >> 4) misc service actions >> >> >> >> Depending on which flavor of the deployment you're using, the content >> >> of each of these may be different. Just about the only thing that is >> >> shared between them all would be the configuration management part. >> > >> > Does that make the 4 things separate roles, then? Isn't the role >> > usually the unit of sharing between playbooks? >> > >> >> It can be, but it doesn't have to be. The problem comes in with the >> granularity at which you are defining the concept of the overall >> action. If you want a role to encompass all that is "nova", you could >> have a single nova role that you invoke 5 different times to do the >> different actions during the overall deployment. Or you could create a >> role for nova-install, nova-config, nova-service, nova-cells, etc. >> I think splitting them out into their own roles is a bit too much in >> terms of management. In my particular case, openstack-ansible is already >> creating a role to manage "nova". So is there a way that I can >> leverage part of their process within mine without having to duplicate >> it?
You can pull in the task files themselves from a different role, so >> technically I think you could define an ansible-role-tripleo-nova that >> does some include_tasks: ../../os_nova/tasks/install.yaml but then >> we'd have to duplicate the variables in our playbook rather than >> invoking a role with some parameters. >> >> IMHO this structure is an issue with the general sharing concepts of >> roles/tasks within ansible. It's not really well defined and there's >> not really a concept of inheritance, so I can't really extend your >> tasks with mine in more of a programming sense. I have to duplicate it >> or do something like include a specific task file from another role. >> Since I can't really extend a role in the traditional OO programming >> sense, I would like to figure out how I can leverage only part of it. >> This can be done by establishing ansible variables to trigger specific >> actions or just actually including the raw tasks themselves. Either >> of these concepts needs some sort of contract to be established so the >> other won't get broken. We had this in puppet via parameters which >> are checked; there isn't really a similar concept in ansible, so it >> seems that we need to agree on some community-established rules. >> >> For tripleo, I would like to just invoke the os_nova role and pass in >> like install: false, service: false, config_dir: >> /my/special/location/, config_data: {...} and have it spit out the configs. >> Then my roles would actually leverage these via containers/etc. Of >> course most of this goes away if we had a unified (not file based) >> configuration method across all services (openstack and non-openstack) >> but we don't. :D >> > > I like your idea here Alex. > So having a role for each of these steps is too much management, I agree; > however, > establishing a pattern of using tasks for each step may be a really good > way to cleanly handle this. > > Are you saying something like the following?
> > openstack-nova-role/ > * * /tasks/ > * * /tasks/install.yml > * * /tasks/service.yml > * */tasks/config.yml > * */tasks/main.yml > --------------------------- > # main.yml > I like the idea. We may also add upgrade tasks here also > > include: install.yml > when: nova_install|bool > > include: service.yml > when: nova_service|bool > > include: config.yml > when: nova_config|bool > -------------------------------------------------- > > Interested in anything other than tags :) > Thanks > > > >> >> Thanks, >> -Alex >> >> > Doug >> > >> > ____________________________________________________________ >> ______________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- > > Wes Hayutin > > Associate MANAGER > > Red Hat > > > > w hayutin at redhat.com T: +1919 <+19197544114> > 4232509 IRC: weshay > > > View my calendar and check my availability for meetings HERE > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Best Regards, Sergii Golovatiuk -------------- next part -------------- An HTML attachment was scrubbed...
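[Editor's note] The layout discussed in the thread above can be written out as a concrete Ansible sketch. This is a minimal, hypothetical example of the pattern, not the actual openstack-ansible os_nova role; the role layout and the variable names nova_install, nova_config, and nova_service simply follow the thread:

```yaml
---
# tasks/main.yml of a hypothetical os_nova-style role.
# Each deployment phase lives in its own task file and is gated by a
# boolean, so a consumer such as TripleO can reuse only the phases it
# needs (e.g. config rendering) and skip install/service management.
- include_tasks: install.yml
  when: nova_install | bool

- include_tasks: config.yml
  when: nova_config | bool

- include_tasks: service.yml
  when: nova_service | bool
```

A consumer that only wants the rendered configuration would then invoke the role with something like `nova_install: false` and `nova_service: false`, matching Alex's "install: false, service: false, config_dir: ..." idea; the boolean variables form the contract between the role and its consumers that the thread argues needs to be agreed upon.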
URL: From alee at redhat.com Fri Aug 10 15:15:09 2018 From: alee at redhat.com (Ade Lee) Date: Fri, 10 Aug 2018 11:15:09 -0400 Subject: [openstack-dev] [barbican][oslo] FFE request for castellan Message-ID: <1533914109.23178.37.camel@redhat.com> Hi all, I'd like to request a feature freeze exception to get the following change in for castellan. https://review.openstack.org/#/c/575800/ This extends the functionality of the vault backend to provide previously unimplemented functionality, so it should not break anyone. The castellan vault plugin is used behind barbican in the barbican-vault plugin. We'd like to get this change into Rocky so that we can release Barbican with complete functionality on this backend (along with a complete set of passing functional tests). Thanks, Ade From shrewsbury.dave at gmail.com Fri Aug 10 15:14:47 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Fri, 10 Aug 2018 11:14:47 -0400 Subject: [openstack-dev] [sdk] Propose adding Dmitry Tantsur to openstacksdk-core In-Reply-To: <93947DF7-BDA6-4355-897B-EE45837D7604@redhat.com> References: <93947DF7-BDA6-4355-897B-EE45837D7604@redhat.com> Message-ID: +1 On Fri, Aug 10, 2018 at 8:55 AM Slawomir Kaplonski wrote: > +1 > > > Wiadomość napisana przez Monty Taylor w dniu > 10.08.2018, o godz. 14:06: > > > > Hey everybody, > > > > I'd like to propose Dmitry Tantsur (dtantsur) as a new openstacksdk core > team member. He's been diving in to some of the hard bits, such as dealing > with microversions, and has a good grasp of the resource/proxy layer. His > reviews have been super useful broadly, and he's also helping drive Ironic > related functionality. > > > > Thoughts/concerns? > > > > Thanks!
> > Monty > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosario.disomma.ml at gmail.com Fri Aug 10 15:16:21 2018 From: rosario.disomma.ml at gmail.com (Rosario Di Somma) Date: Fri, 10 Aug 2018 17:16:21 +0200 Subject: [openstack-dev] [sdk] Propose adding Dmitry Tantsur to openstacksdk-core In-Reply-To: References: <93947DF7-BDA6-4355-897B-EE45837D7604@redhat.com> Message-ID: +1 On Fri, Aug 10, 2018 at 17:14, David Shrewsbury wrote: +1 On Fri, Aug 10, 2018 at 8:55 AM Slawomir Kaplonski < skaplons at redhat.com [skaplons at redhat.com] > wrote: +1 > Wiadomość napisana przez Monty Taylor < mordred at inaugust.com [mordred at inaugust.com] > w dniu 10.08.2018, o godz. 14:06: > > Hey everybody, > > I'd like to propose Dmitry Tantsur (dtantsur) as a new openstacksdk core team member. He's been diving in to some of the hard bits, such as dealing with microversions, and has a good grasp of the resource/proxy layer. His reviews have been super useful broadly, and he's also helping drive Ironic related functionality. > > Thoughts/concerns? > > Thanks! 
> Monty > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe] > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev] — Slawek Kaplonski Senior software engineer Red Hat __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev] -- David Shrewsbury (Shrews) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Fri Aug 10 15:19:27 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 10 Aug 2018 16:19:27 +0100 (BST) Subject: [openstack-dev] [tc] [all] documenting appointed PTLs Message-ID: We've had several appointed PTLs this cycle, in some cases because people forgot to nominate themselves, in other cases because existing maintainers have been pulled away and volunteers stepped up. Thanks to those people who did. We haven't had a formal process for documenting those appointments and there's been some confusion on who and where it should all happen. 
I've proposed a plan at https://review.openstack.org/#/c/590790/ that may not yet be perfect, but gives a starting point on which to accrete a reasonable solution. If you have opinions on the matter, please leave something on the review. Thanks. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From gr at ham.ie Fri Aug 10 15:28:00 2018 From: gr at ham.ie (Graham Hayes) Date: Fri, 10 Aug 2018 16:28:00 +0100 Subject: [openstack-dev] [requirements][ffe] FFE for python-designateclient 2.10.0 Message-ID: Hi all, I would like to ask for a FFE to release python-designateclient 2.10.0 [1] We did not do a release at all during the rocky cycle, and this allows us to create a stable/rocky branch It just requires a U-C bump, and contains a few bug fixes from 2.9.0. Thanks, Graham 1 - https://review.openstack.org/590776 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From prometheanfire at gentoo.org Fri Aug 10 15:30:53 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 10 Aug 2018 10:30:53 -0500 Subject: [openstack-dev] [requirements][ffe] FFE for python-designateclient 2.10.0 In-Reply-To: References: Message-ID: <20180810153053.wikizdjsqgneimzp@gentoo.org> On 18-08-10 16:28:00, Graham Hayes wrote: > Hi all, > > I would like to ask for a FFE to release python-designateclient 2.10.0 > [1] > > We did not do a release at all during the rocky cycle, and this allows > us to create a stable/rocky branch > > It just requires a U-C bump, and contains a few bug fixes from 2.9.0. > > Thanks, > > Graham > > 1 - https://review.openstack.org/590776 > As discussed during the releases meeting, ack from reqs -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From colleen at gazlene.net Fri Aug 10 15:46:38 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 10 Aug 2018 17:46:38 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 6 August 2018 Message-ID: <1533915998.2993501.1470046096.3F011E8B@webmail.messagingengine.com> # Keystone Team Update - Week of 6 August 2018 ## News ### RC1 We released RC1 this week[1]. Please try it out and be on the lookout for critical bugs. As of yet we don't seem to have any showstoppers that would require another RC. [1] https://releases.openstack.org/rocky/index.html#rocky-keystone ### Edge Discussions The OpenNFV Edge Cloud group and the Edge Computing Group are ramping up implementations of proofs of concept for the potential keystone architectures for edge cloud scenarios. Some of the models under investigation or that we've suggested[2] are keystone-to-keystone federation, regular federation with an external identity provider, database synchronization via database replication[3] and database synchronization via an agent. One idea to enhance the federation-based models is to make application credentials refreshable, which Kristi is going to write a spec for[4]. I encourage the team to join the meeting calls[5][6], to help the people working on implementations, and volunteer for technical work items. It would be great to be at a point where we can discuss design details for the next cycle at the PTG. 
[2] https://wiki.openstack.org/wiki/Keystone_edge_architectures [3] https://review.openstack.org/566448 [4] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-07.log.html#t2018-08-07T15:34:54 [5] https://wiki.openstack.org/wiki/Edge_Computing_Group#Meetings [6] https://wiki.opnfv.org/display/PROJ/Edge+cloud ### Flask Work Morgan has been diligently working on converting our APIs to Flask; please see the many outstanding reviews[7]. Some of these conversions should be parallelizable, so if you'd like to help him out I'm sure he would appreciate it; just coordinate with him[8]. [7] https://review.openstack.org/#/q/status:open+topic:bug/1776504 [8] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-06.log.html#t2018-08-06T20:31:19 ### Self-Service Keystone At the weekly meeting Adam suggested we make self-service keystone a focus point of the PTG[9]. Currently, policy limitations make it difficult for an unprivileged keystone user to get things done or to get information without the help of an administrator. There are some other projects that have been created to act as workflow proxies to mitigate keystone's limitations, such as Adjutant[10] (now an official OpenStack project) and Ksproj[11] (written by Kristi). The question is whether the primitives offered by keystone are sufficient building blocks for these external tools to leverage, or if we should be doing more of this logic within keystone. Certainly improving our RBAC model is going to be a major part of improving the self-service user experience.
[9] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-121 [10] https://adjutant.readthedocs.io/en/latest/ [11] https://github.com/CCI-MOC/ksproj ### Standalone Keystone Also at the meeting and during office hours, we revived the discussion of what it would take to have a standalone keystone be a useful identity provider for non-OpenStack projects[12][13]. First up we'd need to turn keystone into a fully-fledged SAML IdP, which it's not at the moment (which is a point of confusion in our documentation), or even add support for it to act as an OpenID Connect IdP. This would be relatively easy to do (or at least not impossible). Then the application would have to use keystonemiddleware or its own middleware to route requests to keystone to issue and validate tokens (this is one aspect where we've previously discussed whether JWT could benefit us). Then the question is what should a not-OpenStack application do with keystone's "scoped RBAC"? It would all depend on how the resources of the application are grouped and whether they care about multitenancy in some form. Likely each application would have different needs and it would be difficult to find a one-size-fits-all approach. We're interested to know whether anyone has a burning use case for something like this. [12] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-192 [13] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-07.log.html#t2018-08-07T17:01:30 ### PTG Planning We're in the brainstorming phase for the PTG, please add topics to the etherpad[14]. Lance will organize these into an agenda soonish. [14] https://etherpad.openstack.org/p/keystone-stein-ptg ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 16 changes this week. 
## Changes that need Attention Search query: https://bit.ly/2wv7QLK There are 54 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. Special attention should be given to patches that close bugs, and we should make sure we backport any critical bugfixes to stable/rocky. ## Bugs This week we opened 2 new bugs and closed 3. There don't currently seem to be any showstopper bugs for Rocky. orange_julius has been chasing a fun, apparently longstanding bug in ldappool[15], our traditionally low-effort adopted project. Bugs opened (2) Bug #1786383 (keystone:Undecided) opened by Liyingjun https://bugs.launchpad.net/keystone/+bug/1786383 Bug #1785898 (ldappool:Undecided) opened by Nick Wilburn https://bugs.launchpad.net/ldappool/+bug/1785898 Bugs fixed (3) Bug #1782704 (keystone:High) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1782704 Bug #1780503 (keystone:Medium) fixed by Gage Hugo https://bugs.launchpad.net/keystone/+bug/1780503 Bug #1785164 (keystone:Undecided) fixed by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1785164 [15] https://bugs.launchpad.net/ldappool/+bug/1785898 ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html This week was the RC1 deadline as well as the string freeze, so we should not be merging any changes to strings for Rocky. We have two weeks to release another RC if we need to. 
## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 From sean.mcginnis at gmx.com Fri Aug 10 16:18:33 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 10 Aug 2018 11:18:33 -0500 Subject: [openstack-dev] [requirements][karbor] FFE for python-karborclient Message-ID: <20180810161832.GA30688@sm-workstation> This is requesting a requirements FFE to raise u-c for python-karborclient. This client only has some requirements and CI changes merged, but it has not done any releases during the rocky cycle. It is well past client lib freeze, but as stated in our policy, we will need to force a final release so there is a rocky version and these requirements and CI changes are in the stable/rocky branch of the repo. There is one caveat with this release in that the karbor service has not done a release for rocky yet. If one is not done by the final cycle-with-intermediary deadline, karbor will need to be excluded from the Rocky coordinated release. This would include service and clients. Sean From sean.mcginnis at gmx.com Fri Aug 10 16:20:13 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 10 Aug 2018 11:20:13 -0500 Subject: [openstack-dev] [requirements][tricircle] FFE for python-tricircleclient Message-ID: <20180810162012.GB30688@sm-workstation> This is a requirements FFE to raise the upper-constraints for python-tricircleclient. This client only has some requirements and CI changes merged, but it has not done any releases during the rocky cycle. It is well past client lib freeze, but as stated in our policy, we will need to force a final release so there is a rocky version and these requirements and CI changes are in the stable/rocky branch of the repo. 
Sean From doug at doughellmann.com Fri Aug 10 16:33:44 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 10 Aug 2018 12:33:44 -0400 Subject: [openstack-dev] [tc] [all] documenting appointed PTLs In-Reply-To: References: Message-ID: <1533918812-sup-7579@lrrr.local> Excerpts from Chris Dent's message of 2018-08-10 16:19:27 +0100: > > We've had several appointed PTLs this cycle, in some cases because > people forgot to nominate themselves, in other cases because > existing maintainers have been pulled away and volunteers stepped > up. Thanks to those people who did. > > We haven't had a formal process for documenting those appointments > and there's been some confusion on who and where it should all > happen. I've proposed a plan at > > https://review.openstack.org/#/c/590790/ > > that may not yet be perfect, but gives a starting point on which > to accrete a reasonable solution. > > If you have opinions on the matter, please leave something on the > review. > > Thanks. > Thanks for writing that up, Chris. Doug From doug at doughellmann.com Fri Aug 10 16:44:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 10 Aug 2018 12:44:11 -0400 Subject: [openstack-dev] [all][tc][election] Timing of the Upcoming Stein TC election In-Reply-To: <20180809214442.GA5069@thor.bakeyournoodle.com> References: <20180808043930.GK9540@thor.bakeyournoodle.com> <1533849636-sup-7516@lrrr.local> <20180809214442.GA5069@thor.bakeyournoodle.com> Message-ID: <1533919412-sup-684@lrrr.local> Excerpts from Tony Breeds's message of 2018-08-10 07:44:42 +1000: > On Thu, Aug 09, 2018 at 05:20:53PM -0400, Doug Hellmann wrote: > > Excerpts from Tony Breeds's message of 2018-08-08 14:39:30 +1000: > > > Hello all, > > > With the PTL elections behind us it's time to start looking at the > > > TC election. 
Our charter[1] says: > > > > > > The election is held no later than 6 weeks prior to each OpenStack > > > Summit (on or before ‘S-6’ week), with elections held open for no less > > > than four business days. > > > > > > Assuming we have the same structure that gives us a timeline of: > > > > > > Summit is at: 2018-11-13 > > > Latest possible completion is at: 2018-10-02 > > > Moving back to Tuesday: 2018-10-02 > > > TC Election from 2018-09-25T23:45 to 2018-10-02T23:45 > > > TC Campaigning from 2018-09-18T23:45 to 2018-09-25T23:45 > > > TC Nominations from 2018-09-11T23:45 to 2018-09-18T23:45 > > > > > > This puts the bulk of the nomination period during the PTG, which is > > > sub-optimal as the nominations cause a distraction from the PTG but more > > > so because the campaigning will coincide with travel home, and some > > > community members take vacation along with the PTG. > > > > > > So I'd like to bring up the idea of moving the election forward a > > > little so that it's actually the campaigning period that overlaps with > > > the PTG: > > > > > > TC Election from 2018-09-18T23:45 to 2018-09-27T23:45 > > > TC Campaigning from 2018-09-06T23:45 to 2018-09-18T23:45 > > > TC Nominations from 2018-08-30T23:45 to 2018-09-06T23:45 > > > > > > This gives us longer campaigning and election periods. > > > > > > There are some advantages to doing this: > > > > > > * A panel style Q&A could be held formally or informally ;P > > > * There's improved scope for incoming, outgoing and staying put TC > > > members to interact in a high bandwidth way. > > > * In person/private discussions with TC candidates/members. > > > > > > However it isn't without downsides: > > > > > > * Election fatigue. We've just had the PTL elections and the UC > > > elections are currently running. Less break before the TC elections > > > may not be a good thing.
> > > * TC candidates that can't travel to the PTG could be disadvantaged > > > * The campaigning would all happen at the PTG and not on the mailing > > > list disadvantaging community members not at the PTG. > > > > > > So thoughts? > > > > > > Yours Tony. > > > > > > [1] https://governance.openstack.org/tc/reference/charter.html > > > > Who needs to make this decision? The current TC? > > I believe that the TC delegated that to the Election WG [1] but the > governance here is a little gray/fuzzy. OK, I'm content for the Election team to make the call I just wanted to make sure I gave you an opinion if you were asking me for one. ;-) > So I kinda think that if the TC doesn't object I can propose the patch > to the election repo and you (as TC chair) can +/-1 is as you see fit. > > Is it fair to ask we do that shortly after the next TC office hours? +1 > > Yours Tony. > > [1] https://governance.openstack.org/tc/reference/working-groups.html From prometheanfire at gentoo.org Fri Aug 10 16:53:24 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 10 Aug 2018 11:53:24 -0500 Subject: [openstack-dev] [requirements][karbor] FFE for python-karborclient In-Reply-To: <20180810161832.GA30688@sm-workstation> References: <20180810161832.GA30688@sm-workstation> Message-ID: <20180810165324.cyyv2c65efgoix7r@gentoo.org> On 18-08-10 11:18:33, Sean McGinnis wrote: > This is requesting a requirements FFE to raise u-c for python-karborclient. > > This client only has some requirements and CI changes merged, but it has not > done any releases during the rocky cycle. It is well past client lib freeze, > but as stated in our policy, we will need to force a final release so there is > a rocky version and these requirements and CI changes are in the stable/rocky > branch of the repo. > > There is one caveat with this release in that the karbor service has not done a > release for rocky yet. 
If one is not done by the final cycle-with-intermediary > deadline, karbor will need to be excluded from the Rocky coordinated release. > This would include service and clients. > requirements FFE approved for UC only. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From prometheanfire at gentoo.org Fri Aug 10 16:53:52 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 10 Aug 2018 11:53:52 -0500 Subject: [openstack-dev] [requirements][tricircle] FFE for python-tricircleclient In-Reply-To: <20180810162012.GB30688@sm-workstation> References: <20180810162012.GB30688@sm-workstation> Message-ID: <20180810165352.3xsq7kelsa5bklk7@gentoo.org> On 18-08-10 11:20:13, Sean McGinnis wrote: > This is a requirements FFE to raise the upper-constraints for > python-tricircleclient. > > This client only has some requirements and CI changes merged, but it has not > done any releases during the rocky cycle. It is well past client lib freeze, > but as stated in our policy, we will need to force a final release so there is > a rocky version and these requirements and CI changes are in the stable/rocky > branch of the repo. > requirements FFE approved for UC only. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From no-reply at openstack.org Fri Aug 10 16:58:20 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 10 Aug 2018 16:58:20 -0000 Subject: [openstack-dev] designate 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for designate for the end of the Rocky cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/designate/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/designate/log/?h=stable/rocky Release notes for designate can be found at: http://docs.openstack.org/releasenotes/designate/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/designate and tag it *rocky-rc-potential* to bring it to the designate release crew's attention. From no-reply at openstack.org Fri Aug 10 17:00:49 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 10 Aug 2018 17:00:49 -0000 Subject: [openstack-dev] masakari-monitors 6.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for masakari-monitors for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/masakari-monitors/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/masakari-monitors/log/?h=stable/rocky Release notes for masakari-monitors can be found at: http://docs.openstack.org/releasenotes/masakari-monitors/ If you find an issue that could be considered release-critical, please file it at: http://bugs.launchpad.net/masakari-monitors and tag it *rocky-rc-potential* to bring it to the masakari-monitors release crew's attention. 
From no-reply at openstack.org Fri Aug 10 17:08:12 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 10 Aug 2018 17:08:12 -0000 Subject: [openstack-dev] designate-dashboard 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for designate-dashboard for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/designate-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: http://git.openstack.org/cgit/openstack/designate-dashboard/log/?h=stable/rocky Release notes for designate-dashboard can be found at: http://docs.openstack.org/releasenotes/designate-dashboard/ From dtroyer at gmail.com Fri Aug 10 17:53:25 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 10 Aug 2018 12:53:25 -0500 Subject: [openstack-dev] [sdk] Pruning core team In-Reply-To: <2e1d7af0-112e-549b-aaf9-72c9c4a9581e@inaugust.com> References: <2e1d7af0-112e-549b-aaf9-72c9c4a9581e@inaugust.com> Message-ID: On Fri, Aug 10, 2018 at 7:06 AM, Monty Taylor wrote: > I'd like to propose removing Brian Curtin, Clint Byrum, David Simard, and > Ricardo Cruz. > > Thoughts/concerns? Reluctant +1, thanks guys for all the hard work! 
dt -- Dean Troyer dtroyer at gmail.com From mordred at inaugust.com Fri Aug 10 19:53:37 2018 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 10 Aug 2018 14:53:37 -0500 Subject: [openstack-dev] [sdk] Pruning core team In-Reply-To: References: <2e1d7af0-112e-549b-aaf9-72c9c4a9581e@inaugust.com> Message-ID: <3b1fafc7-a7e5-daa2-65c0-810d81c4e895@inaugust.com> On 08/10/2018 12:53 PM, Dean Troyer wrote: > On Fri, Aug 10, 2018 at 7:06 AM, Monty Taylor wrote: >> I'd like to propose removing Brian Curtin, Clint Byrum, David Simard, and >> Ricardo Cruz. >> >> Thoughts/concerns? > > Reluctant +1, thanks guys for all the hard work! +100 to the reluctant - and the thanks From sam47priya at gmail.com Fri Aug 10 23:44:51 2018 From: sam47priya at gmail.com (Sam P) Date: Sat, 11 Aug 2018 08:44:51 +0900 Subject: [openstack-dev] [Release-job-failures][masakari][release] Pre-release of openstack/masakari failed In-Reply-To: <1533836982-sup-6486@lrrr.local> References: <1533836982-sup-6486@lrrr.local> Message-ID: Hi Doug, Thanks. I will fix this. --- Regards, Sampath On Fri, Aug 10, 2018 at 2:51 AM Doug Hellmann wrote: > Excerpts from zuul's message of 2018-08-09 17:23:01 +0000: > > Build failed. > > > > - release-openstack-python > http://logs.openstack.org/84/84135048cb372cbd11080fc27151949cee4e52d1/pre-release/release-openstack-python/095990b/ > : FAILURE in 8m 57s > > - announce-release announce-release : SKIPPED > > - propose-update-constraints propose-update-constraints : SKIPPED > > > > The RC1 build for Masakari failed with this error: > > error: can't copy 'etc/masakari/masakari-custom-recovery-methods.conf': > doesn't exist or not a regular file > > The packaging files need to be fixed so a new release candidate can be > prepared. The changes will need to be made on master and then backported > to the new stable/rocky branch. 
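For context on the packaging error above: with pbr-based projects, data_files entries in setup.cfg must point at files that actually exist in the tree, and pbr builds the sdist manifest from files tracked by git, so a listed-but-untracked (or misnamed) file fails exactly this way. A hedged illustration of the kind of entry involved (only the path is taken from the error message; the rest is an assumption, not the actual masakari fix):

```ini
; setup.cfg (pbr-style sketch; only the path comes from the error above)
[files]
data_files =
    etc/masakari =
        etc/masakari/masakari-custom-recovery-methods.conf
```

Every source path on the right-hand side has to be present in the sdist, so the file must also be tracked in git for pbr to pick it up.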
> > Doug > > > http://logs.openstack.org/84/84135048cb372cbd11080fc27151949cee4e52d1/pre-release/release-openstack-python/095990b/ara-report/result/7459d483-48d8-414f-8830-d6411158f9a2/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Sat Aug 11 00:14:43 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 10 Aug 2018 17:14:43 -0700 Subject: [openstack-dev] [all][tc][election] Timing of the Upcoming Stein TC election In-Reply-To: <1533919412-sup-684@lrrr.local> References: <20180808043930.GK9540@thor.bakeyournoodle.com> <1533849636-sup-7516@lrrr.local> <20180809214442.GA5069@thor.bakeyournoodle.com> <1533919412-sup-684@lrrr.local> Message-ID: I created the patch to setup the tc election: https://review.openstack.org/#/c/591111/ -Kendall (diablo_rojo) On Fri, Aug 10, 2018 at 9:44 AM Doug Hellmann wrote: > Excerpts from Tony Breeds's message of 2018-08-10 07:44:42 +1000: > > On Thu, Aug 09, 2018 at 05:20:53PM -0400, Doug Hellmann wrote: > > > Excerpts from Tony Breeds's message of 2018-08-08 14:39:30 +1000: > > > > Hello all, > > > > With the PTL elections behind us it's time to start looking at > the > > > > TC election. Our charter[1] says: > > > > > > > > The election is held no later than 6 weeks prior to each OpenStack > > > > Summit (on or before ‘S-6’ week), with elections held open for no > less > > > > than four business days. 
> > > > > > > > Assuming we have the same structure that gives us a timeline of: > > > > > > > > Summit is at: 2018-11-13 > > > > Latest possible completion is at: 2018-10-02 > > > > Moving back to Tuesday: 2018-10-02 > > > > TC Election from 2018-09-25T23:45 to 2018-10-02T23:45 > > > > TC Campaigning from 2018-09-18T23:45 to 2018-09-25T23:45 > > > > TC Nominations from 2018-09-11T23:45 to 2018-09-18T23:45 > > > > > > > > This puts the bulk of the nomination period during the PTG, which is > > > > sub-optimal as the nominations cause a distraction from the PTG but > more > > > > so because the campaigning will coincide with travel home, and some > > > > community members take vacation along with the PTG. > > > > > > > > So I'd like to bring up the idea of moving the election forward a > > > > little so that it's actually the campaigning period that overlaps > with > > > > the PTG: > > > > > > > > TC Election from 2018-09-18T23:45 to 2018-09-27T23:45 > > > > TC Campaigning from 2018-09-06T23:45 to 2018-09-18T23:45 > > > > TC Nominations from 2018-08-30T23:45 to 2018-09-06T23:45 > > > > > > > > This gives us longer campaigning and election periods. > > > > > > > > There are some advantages to doing this: > > > > > > > > * A panel style Q&A could be held formally or informally ;P > > > > * There's improved scope for incoming, outgoing and staying put > TC > > > > members to interact in a high bandwidth way. > > > > * In person/private discussions with TC candidates/members. > > > > > > > > However it isn't without downsides: > > > > > > > > * Election fatigue: we've just had the PTL elections and the UC > > > > elections are currently running. Less break before the TC > elections > > > > may not be a good thing. > > > > * TC candidates that can't travel to the PTG could be disadvantaged > > > > * The campaigning would all happen at the PTG and not on the > mailing > > > > list disadvantaging community members not at the PTG. > > > > > > > > So thoughts?
> > > > > > > > Yours Tony. > > > > [1] https://governance.openstack.org/tc/reference/charter.html > > > Who needs to make this decision? The current TC? > > I believe that the TC delegated that to the Election WG [1] but the > > governance here is a little gray/fuzzy. > OK, I'm content for the Election team to make the call; I just wanted to > make sure I gave you an opinion if you were asking me for one. ;-) > > So I kinda think that if the TC doesn't object I can propose the patch > to the election repo and you (as TC chair) can +/-1 it as you see fit. > > Is it fair to ask we do that shortly after the next TC office hours? > +1 > > > > Yours Tony. > > [1] https://governance.openstack.org/tc/reference/working-groups.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangchi at szzt.com.cn Sat Aug 11 04:27:00 2018 From: zhangchi at szzt.com.cn (zhangchi at szzt.com.cn) Date: Sat, 11 Aug 2018 12:27:00 +0800 Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team In-Reply-To: <1cf97201.2fd2.16526ba252e.Coremail.linghucongsong@163.com> References: <7212f283.12059.1651e931460.Coremail.linghucongsong@163.com> <1cf97201.2fd2.16526ba252e.Coremail.linghucongsong@163.com> Message-ID: +1 ---------------------------------------------------------------------------- ============================================================================ On 2018-08-11 10:04, linghucongsong wrote: > At 2018-08-09 20:04:40, "linghucongsong" > wrote: > >> Hi team, I would like to nominate Zhuo Tang (ztang) for tricircle >> core member.
ztang has actively joined the discussion of feature >> development in our offline meeting and has participated in >> contribute important blueprints since Rocky, like network deletion >> reliability and service function chaining. I really think his >> experience will help us substantially improve tricircle. >> Bye the way the vote unitl 2018-8-16 beijing time. >> >> Best Wishes! >> Baisen From lbragstad at gmail.com Sat Aug 11 09:14:15 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Sat, 11 Aug 2018 17:14:15 +0800 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 6 August 2018 In-Reply-To: <1533915998.2993501.1470046096.3F011E8B@webmail.messagingengine.com> References: <1533915998.2993501.1470046096.3F011E8B@webmail.messagingengine.com> Message-ID: On Fri, Aug 10, 2018, 23:47 Colleen Murphy wrote: > # Keystone Team Update - Week of 6 August 2018 > > ## News > > ### RC1 > > We released RC1 this week[1]. Please try it out and be on the lookout for > critical bugs. As of yet we don't seem to have any showstoppers that would > require another RC. Should we rev the keystone version for the inclusion of the new default roles? > [1] https://releases.openstack.org/rocky/index.html#rocky-keystone > > ### Edge Discussions > > The OpenNFV Edge Cloud group and the Edge Computing Group are ramping up > implementations of proofs of concept for the potential keystone > architectures for edge cloud scenarios. Some of the models under > investigation or that we've suggested[2] are keystone-to-keystone > federation, regular federation with an external identity provider, database > synchronization via database replication[3] and database synchronization > via an agent. One idea to enhance the federation-based models is to make > application credentials refreshable, which Kristi is going to write a spec > for[4]. I encourage the team to join the meeting calls[5][6], to help the > people working on implementations, and volunteer for technical work items. 
> It would be great to be at a point where we can discuss design details for > the next cycle at the PTG. > > [2] https://wiki.openstack.org/wiki/Keystone_edge_architectures > [3] https://review.openstack.org/566448 > [4] > http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-07.log.html#t2018-08-07T15:34:54 > [5] https://wiki.openstack.org/wiki/Edge_Computing_Group#Meetings > [6] https://wiki.opnfv.org/display/PROJ/Edge+cloud > > ### Flask Work > > Morgan has been diligently working on converting our APIs to Flask, please > see the many outstanding reviews[7]. Some of these conversions should be > parallelizeable so if you'd like to help him out I'm sure he would > appreciate it, just coordinate with him[8]. > > [7] https://review.openstack.org/#/q/status:open+topic:bug/1776504 > [8] > http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-06.log.html#t2018-08-06T20:31:19 > > ### Self-Service Keystone > > At the weekly meeting Adam suggested we make self-service keystone a focus > point of the PTG[9]. Currently, policy limitations make it difficult for an > unprivileged keystone user to get things done or to get information without > the help of an administrator. There are some other projects that have been > created to act as workflow proxies to mitigate keystone's limitations, such > as Adjutant[10] (now an official OpenStack project) and Ksproj[11] (written > by Kristi). The question is whether the primitives offered by keystone are > sufficient building blocks for these external tools to leverage, or if we > should be doing more of this logic within keystone. Certainly improving our > RBAC model is going to be a major part of improving the self-service user > experience. 
> > [9] > http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-121 > [10] https://adjutant.readthedocs.io/en/latest/ > [11] https://github.com/CCI-MOC/ksproj > > ### Standalone Keystone > > Also at the meeting and during office hours, we revived the discussion of > what it would take to have a standalone keystone be a useful identity > provider for non-OpenStack projects[12][13]. First up we'd need to turn > keystone into a fully-fledged SAML IdP, which it's not at the moment (which > is a point of confusion in our documentation), or even add support for it > to act as an OpenID Connect IdP. This would be relatively easy to do (or at > least not impossible). Then the application would have to use > keystonemiddleware or its own middleware to route requests to keystone to > issue and validate tokens (this is one aspect where we've previously > discussed whether JWT could benefit us). Then the question is what should a > not-OpenStack application do with keystone's "scoped RBAC"? It would all > depend on how the resources of the application are grouped and whether they > care about multitenancy in some form. Likely each application would have > different needs and it would be difficult to find a one-size-fits-all > approach. We're interested to know whether anyone has a burning use case > for something like this. > > [12] > http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-192 > [13] > http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-07.log.html#t2018-08-07T17:01:30 > > ### PTG Planning > > We're in the brainstorming phase for the PTG, please add topics to the > etherpad[14]. Lance will organize these into an agenda soonish. > > [14] https://etherpad.openstack.org/p/keystone-stein-ptg > > ## Recently Merged Changes > > Search query: https://bit.ly/2IACk3F > > We merged 16 changes this week. 
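On the JWT point raised in the standalone-keystone discussion above, the token envelope itself is small enough to sketch with the standard library alone. The following is a minimal HS256 sign/verify round trip (an illustration of the token format only, not keystone code; the claim names are made up):

```python
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def sign(payload: dict, key: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    # signing input is b64url(header) + "." + b64url(payload)
    signing_input = ".".join(
        _b64url(json.dumps(part, separators=(",", ":")).encode("utf-8"))
        for part in (header, payload)
    )
    sig = hmac.new(key, signing_input.encode("ascii"), hashlib.sha256).digest()
    return signing_input + "." + _b64url(sig)


def verify(token: str, key: bytes) -> dict:
    signing_input, _, sig_b64 = token.rpartition(".")
    expected = hmac.new(key, signing_input.encode("ascii"), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig_b64):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    # restore the base64 padding stripped at signing time
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

The appeal for middleware is that validation needs only the key, with no round trip to the identity service; the open question in the thread (what a non-OpenStack application does with scoped RBAC) concerns the claims, not this envelope.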
> > ## Changes that need Attention > > Search query: https://bit.ly/2wv7QLK > > There are 54 changes that are passing CI, not in merge conflict, have no > negative reviews and aren't proposed by bots. Special attention should be > given to patches that close bugs, and we should make sure we backport any > critical bugfixes to stable/rocky. > > ## Bugs > > This week we opened 2 new bugs and closed 3. There don't currently seem to > be any showstopper bugs for Rocky. orange_julius has been chasing a fun, > apparently longstanding bug in ldappool[15], our traditionally low-effort > adopted project. > > Bugs opened (2) > Bug #1786383 (keystone:Undecided) opened by Liyingjun > https://bugs.launchpad.net/keystone/+bug/1786383 > Bug #1785898 (ldappool:Undecided) opened by Nick Wilburn > https://bugs.launchpad.net/ldappool/+bug/1785898 > > Bugs fixed (3) > Bug #1782704 (keystone:High) fixed by Lance Bragstad > https://bugs.launchpad.net/keystone/+bug/1782704 > Bug #1780503 (keystone:Medium) fixed by Gage Hugo > https://bugs.launchpad.net/keystone/+bug/1780503 > Bug #1785164 (keystone:Undecided) fixed by wangxiyuan > https://bugs.launchpad.net/keystone/+bug/1785164 > > [15] https://bugs.launchpad.net/ldappool/+bug/1785898 > > ## Milestone Outlook > > https://releases.openstack.org/rocky/schedule.html > > This week was the RC1 deadline as well as the string freeze, so we should > not be merging any changes to strings for Rocky. We have two weeks to > release another RC if we need to. 
> > ## Help with this newsletter > > Help contribute to this newsletter by editing the etherpad: > https://etherpad.openstack.org/p/keystone-team-newsletter > Dashboard generated using gerrit-dash-creator and > https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Sat Aug 11 22:41:00 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Sun, 12 Aug 2018 00:41:00 +0200 Subject: [openstack-dev] [puppet] [ptg] Planning for PTG Message-ID: <20180811224100.rPjZlMLWg%tobias.urdin@binero.se> Hello all Puppeters, We are approaching the PTG in Denver and want to remind everybody to make sure you book your accommodations because they will run out. I've created an etherpad here [1] where you can add any topic you want to discuss. I will not be able to attend the PTG, if we get any topics that needs discussion we need somebody that could moderate the discussions. Please fill in the etherpad if you will be attending, it would be great if anybody that attended could take the moderation duty if we have topics and require a room booking. There will also be team photos if people are interested more information will follow from the PTG organizers about that. 
Best regards Tobias [1] https://etherpad.openstack.org/p/puppet-ptg-stein​ From ildiko.vancsa at gmail.com Sun Aug 12 07:19:52 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sun, 12 Aug 2018 09:19:52 +0200 Subject: [openstack-dev] [keystone] Cross-project discussions about Keystone for Edge at the PTG Message-ID: <430E2C71-6771-4E22-90D3-FEA22AC0C01F@gmail.com> Hi, The Keystone, Edge Computing Group and StarlingX teams are planning to have follow up discussions about using Keystone in edge scenarios including discussing requirements, architecture options and currently ongoing activities. If you are interested in participating you can find further information here: http://lists.openstack.org/pipermail/edge-computing/2018-August/000394.html Please let me know if you have any questions. Thanks and Best Regards, Ildikó From ildiko.vancsa at gmail.com Sun Aug 12 07:43:39 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sun, 12 Aug 2018 09:43:39 +0200 Subject: [openstack-dev] [os-upstream-institute] Restarting meetings on August 20 Message-ID: <8D827CFA-946D-4C11-BBC1-4B8408FFCD0B@gmail.com> Hi, As the Summer vacation season is getting to its end and we also need to start to prepare for the training just before the Berlin Summit we plan to resurrect the OUI meetings on every second Monday at 2000 UTC starting on August 20. 
We will post the agenda on the regular etherpad: https://etherpad.openstack.org/p/openstack-upstream-institute-meetings Further useful links: You can see the current state of the website: https://docs.openstack.org/upstream-training/index.html The current training content can be found here: https://docs.openstack.org/upstream-training/upstream-training-content.html To check the latest stage of the Contributor Guide: https://docs.openstack.org/contributors/index.html Open training-guide reviews: https://review.openstack.org/#/q/project:openstack/training-guides+status:open Open Contributor Guide reviews: https://review.openstack.org/#/q/project:openstack/contributor-guide+status:open Contributor Guide StoryBoard open Stories/Tasks: https://storyboard.openstack.org/#!/project/913 Please let me know if you have any questions. Thanks and Best Regards, Ildikó (IRC: ildikov) From gmann at ghanshyammann.com Sun Aug 12 08:41:20 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 12 Aug 2018 17:41:20 +0900 Subject: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release Message-ID: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com> Hi All, Rocky release is few weeks away and we all agreed to release Tempest plugin with cycle-with-intermediary. Detail discussion are in ML [1] in case you missed. This is reminder to tag your project tempest plugins for Rocky release. You should be able to find your plugins deliverable file under rocky folder in releases repo[3]. You can refer cinder-tempest-plugin release as example. Feel free to reach to release/QA team for any help/query. 
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html [2] https://review.openstack.org/#/c/590025/ [3] https://github.com/openstack/releases/tree/master/deliverables/rocky -gmann From singh.surya64mnnit at gmail.com Sun Aug 12 15:02:30 2018 From: singh.surya64mnnit at gmail.com (Surya Singh) Date: Sun, 12 Aug 2018 20:32:30 +0530 Subject: [openstack-dev] [kolla] Dropping core reviewer In-Reply-To: References: <1533652097121.31214@cisco.com> <1533830111273.2195@cisco.com> Message-ID: Michal Thanks a lot for serving Kolla in so many ways. Cheers, - Surya On Thu, Aug 9, 2018 at 9:53 PM Michał Jastrzębski wrote: > Hello Kollegues, Koalas and Koalines, > > I feel I should do the same, as my work sadly doesn't involve Kolla, > or OpenStack for that matter, any more. > > It has been a wonderful time and serving Kolla community as core and > PTL is achievement I'm most proud of and I thank you all for giving me > this opportunity. We've built something great! > > Cheers, > Michal > On Thu, 9 Aug 2018 at 08:55, Steven Dake (stdake) > wrote: > > > > Kollians, > > > > > > Thanks for the kind words. > > > > > > I do plan to stay involved in the OpenStack community - specifically > targeting governance and will definitely be around - irc - mls - summits - > etc :) > > > > > > Cheers > > > > -steve > > > > > > ________________________________ > > From: Surya Singh > > Sent: Wednesday, August 8, 2018 10:56 PM > > To: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [kolla] Dropping core reviewer > > > > words are not strong enough to appreciate your immense contribution and > help in OpenStack community. > > Projects like Kolla, Heat and Magnum are still rocking and many more to > come in future from you. > > Hope to see you around. > > > > Wish you all the luck !! > > -- Surya > > > > On Wed, Aug 8, 2018 at 6:15 PM Paul Bourke > wrote: > >> > >> +1. 
Will always have good memories of when Steve was getting the project > >> off the ground. Thanks Steve for doing a great job of building the > >> community around Kolla, and for all your help in general! > >> > >> Best of luck, > >> -Paul > >> > >> On 08/08/18 12:23, Eduardo Gonzalez wrote: > >> > Steve, > >> > > >> > Is sad to see you leaving kolla core team, hope to still see you > around > >> > IRC and Summit/PTGs. > >> > > >> > I truly appreciate your leadership, guidance and commitment to make > >> > kolla the great project it is now. > >> > > >> > Best luck on your new projects and board of directors. > >> > > >> > Regards > >> > > >> > > >> > > >> > > >> > > >> > 2018-08-07 16:28 GMT+02:00 Steven Dake (stdake) >> > >: > >> > > >> > Kollians, > >> > > >> > > >> > Many of you that know me well know my feelings towards > participating > >> > as a core reviewer in a project. Folks with the ability to +2/+W > >> > gerrit changes can sometimes unintentionally harm a codebase if > they > >> > are not consistently reviewing and maintaining codebase context. > I > >> > also believe in leading an exception-free life, and I'm no > exception > >> > to my own rules. As I am not reviewing Kolla actively given my > >> > OpenStack individually elected board of directors service and > other > >> > responsibilities, I am dropping core reviewer ability for the > Kolla > >> > repositories. > >> > > >> > > >> > I want to take a moment to thank the thousands of people that have > >> > contributed and shaped Kolla into the modern deployment system for > >> > OpenStack that it is today. I personally find Kolla to be my > finest > >> > body of work as a leader. Kolla would not have been possible > >> > without the involvement of the OpenStack global community working > >> > together to resolve the operational pain points of OpenStack. > Thank > >> > you for your contributions. > >> > > >> > > >> > Finally, quoting Thierry [1] from our initial application to > >> > OpenStack, " ... 
Long live Kolla!" > >> > > >> > > >> > Cheers! > >> > > >> > -steve > >> > > >> > > >> > [1] https://review.openstack.org/#/c/206789/ > >> > > >> > > >> > > >> > > >> > > >> > > >> > > __________________________________________________________________________ > >> > OpenStack Development Mailing List (not for usage questions) > >> > Unsubscribe: > >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> > < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > < > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev> > >> > > >> > > >> > > >> > > >> > > __________________________________________________________________________ > >> > OpenStack Development Mailing List (not for usage questions) > >> > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Sun Aug 12 20:11:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sun, 12 Aug 2018 16:11:59 -0400 Subject: [openstack-dev] [oslo][tooz][etcd] need help debugging tooz test failure Message-ID: <1534104610-sup-3898@lrrr.local> The tooz tests on master and stable/rocky are failing with an error: UnicodeDecodeError: 'utf8' codec can't decode byte 0xc4 in position 0: invalid continuation byte This is unrelated to the change, which is simply importing test job settings or updating the .gitreview file. I need someone familiar with the library to help debug the issue. Can we get a volunteer? https://review.openstack.org/#/q/project:openstack/tooz+is:open From sundar.nadathur at intel.com Sun Aug 12 22:11:44 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Sun, 12 Aug 2018 15:11:44 -0700 Subject: [openstack-dev] [Cyborg] Update device info in db via REST API or RPC? Message-ID: <283ca8bb-8807-e092-801e-32ef54dbcdbf@intel.com> Hi all,   Apparently a decision was taken to have the Cyborg agent update the Cyborg database with device information using REST APIs, as part of discovery. The use of REST API has many implications: * It is open to public. So, we have to authenticate the users and check for   abuse. Even if it is open only to operators, it can still be prone to   error. * REST APIs have backwards compatibility requirements. It will not be easy to   change the signature or semantics. We also need to check the implications   on upgrade. It would be better to make this an RPC API offered by the Cyborg conductor, which will keep it internal to Cyborg and avoid the issues above. Thanks. 
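The conductor-RPC approach argued for here can be sketched minimally as follows (a stdlib-only illustration: the class and method names are assumptions, and a real implementation would sit behind oslo.messaging rather than direct Python calls):

```python
# Sketch of an internal, versioned conductor interface for device reports.
# Names are illustrative; real code would expose this via oslo.messaging RPC.

class ConductorAPI:
    """Agent-facing interface; the agent never writes the DB directly."""

    RPC_API_VERSION = "1.0"  # versioned like other internal RPC APIs

    def __init__(self, db):
        self._db = db  # stand-in for the Cyborg DB layer

    def report_devices(self, host, devices):
        # Upsert the devices discovered on one host. Internal callers are
        # trusted, so no public-API auth or compatibility guarantees apply.
        per_host = self._db.setdefault(host, {})
        for dev in devices:
            per_host[dev["uuid"]] = dev
        return len(devices)


conductor = ConductorAPI(db={})
count = conductor.report_devices(
    "compute-1",
    [{"uuid": "dev-1", "type": "FPGA"}, {"uuid": "dev-2", "type": "GPU"}],
)
```

Because the surface stays internal, its signature can evolve with a version bump instead of the deprecation process a public REST endpoint would require.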
Regards, Sundar From shzhzichen at gmail.com Mon Aug 13 01:58:57 2018 From: shzhzichen at gmail.com (=?UTF-8?B?6ZmI5Lqa5YWJ?=) Date: Mon, 13 Aug 2018 09:58:57 +0800 Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team In-Reply-To: <7212f283.12059.1651e931460.Coremail.linghucongsong@163.com> References: <7212f283.12059.1651e931460.Coremail.linghucongsong@163.com> Message-ID: +1 2018-08-09 20:04 GMT+08:00 linghucongsong : > > Hi team, I would like to nominate Zhuo Tang (ztang) for tricircle core > member. ztang has actively joined the discussion of feature development in > our offline meeting and has participated in contribute important > blueprints since Rocky, like network deletion reliability and service > function chaining. I really think his experience will help us substantially > improve tricircle. > Bye the way the vote unitl 2018-8-16 beijing time. > > Best Wishes! > Baisen > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zmj1981123 at 163.com Mon Aug 13 06:13:15 2018 From: zmj1981123 at 163.com (zmj1981123) Date: Mon, 13 Aug 2018 14:13:15 +0800 (CST) Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team In-Reply-To: References: <7212f283.12059.1651e931460.Coremail.linghucongsong@163.com> Message-ID: <55c9fa9b.75a9.16531eac883.Coremail.zmj1981123@163.com> +1 2018-08-09 20:04 GMT+08:00 linghucongsong : Hi team, I would like to nominate Zhuo Tang (ztang) for tricircle core member. 
ztang has actively joined the discussion of feature development in our offline meeting and has participated in contributing important blueprints since Rocky, like network deletion reliability and service function chaining. I really think his experience will help us substantially improve tricircle. By the way, the vote runs until 2018-8-16, Beijing time. Best Wishes! Baisen __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Mon Aug 13 08:07:21 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Mon, 13 Aug 2018 16:07:21 +0800 Subject: [openstack-dev] [nova] about live-resize down the instance Message-ID: Hi all, I find it important to be able to live-resize an instance in a production environment, especially to live-downsize the disk. We have talked about it for many years, but I don't know why the bp[1] wasn't approved. Can you tell me more about this? Thank you very much. [1]https://review.openstack.org/#/c/141219/ Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfrancoa at redhat.com Mon Aug 13 08:24:13 2018 From: jfrancoa at redhat.com (Jose Luis Franco Arza) Date: Mon, 13 Aug 2018 10:24:13 +0200 Subject: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO In-Reply-To: References: <9bd08898-b667-47c0-4b18-2e50de5ea406@redhat.com> Message-ID: +1 of course! On Mon, Aug 6, 2018 at 5:50 PM, Dougal Matthews wrote: > +1 > > On 6 August 2018 at 16:28, Alex Schultz wrote: >> +1 >> >> On Mon, Aug 6, 2018 at 7:19 AM, Bogdan Dobrelya >> wrote: >> > +1 >> > >> > On 8/1/18 1:31 PM, Giulio Fidente wrote: >> >> >> >> Hi, >> >> >> >> I would like to propose Lukas Bezdicka core on TripleO.
>> >> >> >> Lukas did a lot of work in our tripleoclient, tripleo-common and >> >> tripleo-heat-templates repos to make FFU possible. >> >> >> >> FFU, which is meant to permit upgrades from Newton to Queens, requires >> >> in depth understanding of many TripleO components (for example Heat, >> >> Mistral and the TripleO client) but also of specific TripleO features >> >> which were added during the course of the three releases (for example >> >> config-download and upgrade tasks). I believe his FFU work to have been >> >> very challenging. >> >> >> >> Given his broad understanding, more recently Lukas started helping with >> >> reviews in other areas. >> >> >> >> I am so sure he'll be a great addition to our group that I am not even >> >> looking for comments, just votes :D >> >> >> > >> > >> > -- >> > Best regards, >> > Bogdan Dobrelya, >> > Irc #bogdando >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bcafarel at redhat.com Mon Aug 13 08:24:15 2018 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 13 Aug 2018 10:24:15 +0200 Subject: [openstack-dev] [neutron] Bug deputy report, week August 5th - August 12th Message-ID: I was the bug deputy for the August 6th - August 12th week (6th exclusive, it was handled by Miguel, thanks to him). A mostly quiet week; most issues came with a related fix (and some already merged). Interesting topics: an API performance improvement (1786226), a few fixes for the metering agent, a DVR bug/discussion (1786169), and some test fixes.
High:
https://bugs.launchpad.net/neutron/+bug/1785848 - Neutron server producing tracebacks with 'L3RouterPlugin' object has no attribute 'is_distributed_router' when DVR is enabled - Fix merged https://review.openstack.org/#/c/589573
https://bugs.launchpad.net/neutron/+bug/1786213 - Metering agent: failed to run ip netns command - Fix merged https://review.openstack.org/#/c/590215/
Medium:
https://bugs.launchpad.net/neutron/+bug/1786347 - Incorrect entry point of metering iptables driver - Fix merged https://review.openstack.org/#/c/590479
https://bugs.launchpad.net/neutron/+bug/1786169 - DVR: Missing fixed_ips info for IPv6 subnets - Proposed fix at https://review.openstack.org/#/c/590157
https://bugs.launchpad.net/neutron/+bug/1786413 - Cannot load neutron_fwaas.conf by neutron-api - Proposed fix at https://review.openstack.org/590656
https://bugs.launchpad.net/neutron/+bug/1786272 - Connection between two virtual routers does not work with DVR - In discussion (limitation or real issue)
Low:
https://bugs.launchpad.net/neutron/+bug/1786472 - Scenario test_connectivity_min_max_mtu fails when cirros is used - Fix merged https://review.openstack.org/#/c/590763
Wishlist:
https://bugs.launchpad.net/neutron/+bug/1786226 - Use sqlalchemy baked query - Sample baked query at https://review.openstack.org/#/c/430973/2 already shows impressive performance improvements
Incomplete:
https://bugs.launchpad.net/neutron/+bug/1786047 - neutron-dhcp-agent is unable to set network namespaces - Issue happens on an openstack-ansible deployment with a manual agent install on top. Additional details were requested, but this is probably more a deployment issue than a real neutron bug
https://bugs.launchpad.net/neutron/+bug/1786408 - IPsec shutdown and re-up the external-interface, routing missing - a vpnaas bug, but on the Kilo release. To confirm on a supported release?
-- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbooth at redhat.com Mon Aug 13 08:50:18 2018 From: mbooth at redhat.com (Matthew Booth) Date: Mon, 13 Aug 2018 09:50:18 +0100 Subject: [openstack-dev] [nova] CI job running functional against a mysql DB Message-ID: I was reviewing https://review.openstack.org/#/c/504885/ . The change looks good to me and I believe the test included exercises the root cause of the problem. However, I'd like to be certain that the test has been executed against MySQL rather than, eg, SQLite. Zuul has voted +1 on the change. Can anybody tell me if any of those jobs ran the included functional test against a MySQL DB? Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From skaplons at redhat.com Mon Aug 13 08:53:55 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 13 Aug 2018 10:53:55 +0200 Subject: [openstack-dev] Neutron team meeting cancelled Message-ID: <38428FDF-C0C5-4F16-ACB6-DF5BA38D1A4D@redhat.com> Hi, As Miguel is not available this week, Tuesday’s Neutron team meeting is cancelled.
The next meeting should take place as normal on Monday, 20.08 — Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Mon Aug 13 08:57:56 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 13 Aug 2018 10:57:56 +0200 Subject: [openstack-dev] Neutron CI meeting on 14.08 cancelled Message-ID: Hi, As a few of the Neutron core reviewers are not available this week, Tuesday’s CI meeting on 14.08 is cancelled. The next meeting will be on 21.08.2018 — Slawek Kaplonski Senior software engineer Red Hat From aplanas at suse.de Mon Aug 13 09:05:26 2018 From: aplanas at suse.de (Alberto Planas Dominguez) Date: Mon, 13 Aug 2018 11:05:26 +0200 Subject: [openstack-dev] [rpm-packaging] Step down as a reviewer Message-ID: <0a42dac71ee047ff9f4b1ef87114f019c617d6b8.camel@suse.de> Dear rpm-packaging team, I was lucky to help with reviews for the rpm-packaging OpenStack project for the last couple of release cycles. I learned a lot during this time. I will change my role at SUSE at the end of the month (August 2018), so I request to be removed from the core position on those projects. Also, a big thank you to the team for the help provided during this time. Saludos! -- SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) Maxfeldstraße 5, 90409 Nürnberg, Germany From mbooth at redhat.com Mon Aug 13 10:06:34 2018 From: mbooth at redhat.com (Matthew Booth) Date: Mon, 13 Aug 2018 11:06:34 +0100 Subject: [openstack-dev] [nova] Do we still want to lowercase metadata keys? Message-ID: Thanks mriedem for answering my previous question, and also pointing out the related previous spec around just forcing all metadata to be lowercase: (Spec: approved in Newton) https://review.openstack.org/#/c/311529/ (Online migration: not merged, abandoned) https://review.openstack.org/#/c/329737/ There are other code patches, but the above is representative.
What I had read was the original bug: https://bugs.launchpad.net/nova/+bug/1538011 The tl;dr is that the default collation used by MySQL results in a bug when creating 2 metadata keys which differ only in case. The proposal was obviously to simply make all metadata keys lower case. However, as melwitt pointed out in the bug at the time, that's a potentially user hostile change. After some lost IRC discussion it seems that folks believed at the time that to fix this properly would seriously compromise the performance of these queries. The agreed way forward was to allow existing keys to keep their case, but force new keys to be lower case (so I wonder how the above online migration came about?). Anyway, as Rajesh's patch shows, it's actually very easy just to fix the MySQL misconfiguration: https://review.openstack.org/#/c/504885/ So my question is, given that the previous series remains potentially user hostile, the fix isn't as complex as previously believed, and it doesn't involve a performance penalty, are there any other reasons why we might want to resurrect it rather than just go with Rajesh's patch? Or should we ask Rajesh to expand his patch into a series covering other metadata? Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From gkotton at vmware.com Mon Aug 13 10:11:30 2018 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 13 Aug 2018 10:11:30 +0000 Subject: [openstack-dev] Pycharm license Message-ID: <17BD2D92-AB8C-47B3-AD2E-300F5A1084B6@vmware.com> Hi, I understand that the community has an option to get licenses. Does anyone have any information regarding this? Thanks Gary -------------- next part -------------- An HTML attachment was scrubbed...
URL: From coolsvap at gmail.com Mon Aug 13 10:18:17 2018 From: coolsvap at gmail.com (Swapnil Kulkarni) Date: Mon, 13 Aug 2018 15:48:17 +0530 Subject: [openstack-dev] Pycharm license In-Reply-To: <17BD2D92-AB8C-47B3-AD2E-300F5A1084B6@vmware.com> References: <17BD2D92-AB8C-47B3-AD2E-300F5A1084B6@vmware.com> Message-ID: On Mon 13 Aug, 2018, 15:41 Gary Kotton, wrote: > Hi, > > I understand that the community has an option to get licenses. Anyone have > any information regarding this. > > Thanks > > Gary > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Hi Gary, I can give you one. Please follow https://wiki.openstack.org/wiki/Pycharm Best Regards, Swapnil > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Mon Aug 13 11:36:56 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Mon, 13 Aug 2018 19:36:56 +0800 Subject: [openstack-dev] [Openstack-operators][nova]about live-resize down the instance Message-ID: Hi all, I find live-resizing an instance important in production environments, especially live downsizing of the disk, and we have talked about it for many years. But I don't know why the bp[1] wasn't approved. Can you tell me more about this? Thank you very much. [1]https://review.openstack.org/#/c/141219/ Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From tenobreg at redhat.com Mon Aug 13 12:32:27 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Mon, 13 Aug 2018 09:32:27 -0300 Subject: [openstack-dev] [sahara] Sahara PTG Etherpad Message-ID: Hi folks, I have started working on the planning etherpad for the Stein PTG[1].
Please review it and add more topics so we can review and select what we can discuss in Denver. Thanks all, [1] https://etherpad.openstack.org/p/sahara-stein-ptg -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat is recognized as one of the best companies to work for in Brazil by Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Mon Aug 13 13:04:43 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 13 Aug 2018 09:04:43 -0400 Subject: [openstack-dev] [nova] Do we still want to lowercase metadata keys? In-Reply-To: References: Message-ID: <0af69e26-e73e-9257-1ca0-e2c43cde9a5d@gmail.com> On 08/13/2018 06:06 AM, Matthew Booth wrote: > Thanks mriedem for answering my previous question, and also pointing > out the related previous spec around just forcing all metadata to be > lowercase: > > (Spec: approved in Newton) https://review.openstack.org/#/c/311529/ > (Online migration: not merged, abandoned) > https://review.openstack.org/#/c/329737/ > > There are other code patches, but the above is representative. What I > had read was the original bug: > > https://bugs.launchpad.net/nova/+bug/1538011 > > The tl;dr is that the default collation used by MySQL results in a bug > when creating 2 metadata keys which differ only in case. The proposal > was obviously to simply make all metadata keys lower case. However, as > melwitt pointed out in the bug at the time that's a potentially user > hostile change. After some lost IRC discussion it seems that folks > believed at the time that to fix this properly would seriously > compromise the performance of these queries. The agreed way forward > was to allow existing keys to keep their case, but force new keys to > be lower case (so I wonder how the above online migration came > about?).
> > Anyway, as Rajesh's patch shows, it's actually very easy just to fix > the MySQL misconfiguration: > > https://review.openstack.org/#/c/504885/ > > So my question is, given that the previous series remains potentially > user hostile, the fix isn't as complex as previously believed, and it > doesn't involve a performance penalty, are there any other reasons why > we might want to resurrect it rather than just go with Rajesh's patch? > Or should we ask Rajesh to expand his patch into a series covering > other metadata? Keep in mind this patch is only related to *aggregate* metadata, AFAICT. Any patch series that tries to "fix" this issue needs to include all of the following:
* input automatically lower-cased [1]
* inline (note: not online, inline) data migration inside the InstanceMeta object's _from_db_object() method for existing non-lowercased keys
* change the collation of the aggregate_metadata.key column (note: this will require an entire rebuild of the table, since this column is part of a unique constraint [3])
* online data migration for migrating non-lowercased keys to their lowercased counterparts (essentially doing `UPDATE key = LOWER(key) WHERE LOWER(key) != key` once the collation has been changed)
None of the above touches the API layer. I suppose some might argue that the REST API should be microversion-bumped since the expected behaviour of the API will change (data will be transparently changed in one version of the API and not another). I don't personally think that's something I would require a microversion for, but who knows what others may say.
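The collation collision and the migration step discussed above can be reproduced in miniature. This is an illustrative sketch, not nova's actual schema or code: SQLite's NOCASE collation stands in for MySQL's default case-insensitive utf8_general_ci collation, and the table layout is simplified.

```python
import sqlite3

# SQLite's NOCASE collation approximates MySQL's default case-insensitive
# collation (utf8_general_ci): two keys differing only in case compare as
# equal, so they collide on the UNIQUE constraint -- the crux of bug 1538011.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE aggregate_metadata ("
    " id INTEGER PRIMARY KEY,"
    " key TEXT COLLATE NOCASE,"
    " value TEXT,"
    " UNIQUE (key))"
)
conn.execute("INSERT INTO aggregate_metadata (key, value) VALUES ('Foo', '1')")
try:
    conn.execute("INSERT INTO aggregate_metadata (key, value) VALUES ('foo', '2')")
    collided = False
except sqlite3.IntegrityError:
    collided = True
print(collided)  # True: 'Foo' and 'foo' collide under a case-insensitive collation

# With a case-sensitive (binary) collation -- the equivalent of the collation
# fix being discussed -- mixed-case keys can coexist, and the quoted online
# migration becomes a plain UPDATE over the rows that still need lowercasing:
conn.execute(
    "CREATE TABLE rebuilt ("
    " id INTEGER PRIMARY KEY,"
    " key TEXT,"  # SQLite's default BINARY collation is case-sensitive
    " value TEXT,"
    " UNIQUE (key))"
)
conn.execute("INSERT INTO rebuilt (key, value) VALUES ('Foo', '1'), ('bar', '2')")
cur = conn.execute("UPDATE rebuilt SET key = LOWER(key) WHERE LOWER(key) != key")
print(cur.rowcount)  # 1: only 'Foo' needed lowercasing
```

Note the ordering the list above implies: under the case-insensitive collation the `WHERE LOWER(key) != key` predicate would itself compare case-insensitively and match nothing, so the collation change has to land before the lowercasing migration can do anything.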
Best, -jay [1] https://github.com/openstack/nova/blob/16f89fd093217d22530570e8277b561ea79f46ff/nova/objects/aggregate.py#L295 and https://github.com/openstack/nova/blob/16f89fd093217d22530570e8277b561ea79f46ff/nova/objects/aggregate.py#L331 and https://github.com/openstack/nova/blob/16f89fd093217d22530570e8277b561ea79f46ff/nova/objects/aggregate.py#L356 [2] https://github.com/openstack/nova/blob/16f89fd093217d22530570e8277b561ea79f46ff/nova/objects/aggregate.py#L248 [3] https://github.com/openstack/nova/blob/16f89fd093217d22530570e8277b561ea79f46ff/nova/db/sqlalchemy/api_models.py#L64 From cbelu at cloudbasesolutions.com Mon Aug 13 13:16:18 2018 From: cbelu at cloudbasesolutions.com (Claudiu Belu) Date: Mon, 13 Aug 2018 13:16:18 +0000 Subject: [openstack-dev] [Openstack-operators][nova]about live-resize down the instance In-Reply-To: References: Message-ID: Hi, That's quite an old spec. :) It has quite a bit of history, and the general nova core opinion shifted from "No, we're probably not going to do that" in the beginning to "We should probably add that, if people are asking for it" during the last OpenStack PTG. It even had 2x +2s and ~11 +1s at one point. I'll repropose the blueprint again for Stein and rebase everything, if people want it. Implementation-wise, it's pretty much done; last time I was still doing some functional and unit tests for it. I had a tempest test for it too. I had some issues regarding notifications, but gibi helped me sort them out (thanks!). As far as functionality goes, I'm afraid that at the moment we're only going to add live-resize to a larger flavor (live-upsizing). There are a few concerns regarding live-downsizing, especially when it comes to the hypervisor support for something like this (not all of them support this). Additionally, when it comes to live-downsizing the disk, AFAIK, there's also a data loss concern associated with it, if the disk was not freed and / or if the partition table wasn't shrunk beforehand, etc.
Additionally, some nova drivers don't support downsizing the disks at all, not even for cold resize. There are a lot of discussions and debates on the spec you've linked, and you can also read the arguments regarding the question you've asked there. tl;dr version: during the last PTG, we've agreed to "approve" the blueprint with only live-upsizing the instance in-place as the first iteration of the feature, with any new additions to be discussed afterwards. Best regards, Claudiu Belu ________________________________ From: Rambo [lijie at unitedstack.com] Sent: Monday, August 13, 2018 2:36 PM To: OpenStack Development Subject: [openstack-dev] [Openstack-operators][nova]about live-resize down the instance Hi all, I find live-resizing an instance important in production environments, especially live downsizing of the disk, and we have talked about it for many years. But I don't know why the bp[1] wasn't approved. Can you tell me more about this? Thank you very much. [1]https://review.openstack.org/#/c/141219/ Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Mon Aug 13 13:30:01 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Mon, 13 Aug 2018 21:30:01 +0800 Subject: [openstack-dev] [Openstack-operators][nova] deployment question consultation Message-ID: Hi all, I have some questions about deploying a large-scale OpenStack cloud, such as: 1. In a single-region deployment, what will happen in the cloud as the cluster size expands, and how is that solved? Is there a limit on the number of physical nodes in one region? How many nodes would be best in one region? 2. When is CellsV2 most suitable in a cloud? 3. How can the time for batch creation of instances be shortened? Can you tell me more about these, combined with your own practice? Could you give me some resources to learn from, such as websites, blogs and so on? Thank you very much! Looking forward to hearing from you.
Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Mon Aug 13 13:35:23 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 13 Aug 2018 15:35:23 +0200 Subject: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release In-Reply-To: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com> References: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com> Message-ID: <90e7a800-9cf2-8c81-47f3-e16ba4def85e@redhat.com> Hi, The plugins are branchless and should stay so. Let us not dive into this madness again please. Dmitry On 08/12/2018 10:41 AM, Ghanshyam Mann wrote: > Hi All, > > Rocky release is few weeks away and we all agreed to release Tempest plugin with cycle-with-intermediary. Detail discussion are in ML [1] in case you missed. > > This is reminder to tag your project tempest plugins for Rocky release. You should be able to find your plugins deliverable file under rocky folder in releases repo[3]. You can refer cinder-tempest-plugin release as example. > > Feel free to reach to release/QA team for any help/query.
> > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html > [2] https://review.openstack.org/#/c/590025/ > [3] https://github.com/openstack/releases/tree/master/deliverables/rocky > > -gmann > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Mon Aug 13 13:46:42 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 13 Aug 2018 09:46:42 -0400 Subject: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release In-Reply-To: <90e7a800-9cf2-8c81-47f3-e16ba4def85e@redhat.com> References: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com> <90e7a800-9cf2-8c81-47f3-e16ba4def85e@redhat.com> Message-ID: <1534167743-sup-5403@lrrr.local> Excerpts from Dmitry Tantsur's message of 2018-08-13 15:35:23 +0200: > Hi, > > The plugins are branchless and should stay so. Let us not dive into this madness > again please. You are correct that we do not want to branch, because we want the same tests running against all branches of services in our CI system to help us avoid (or at least recognize) API-breaking changes across release boundaries. We *do* need to tag so that people consuming the plugins to certify their clouds know which version of the plugin works with the version of the software they are installing. Newer versions of plugins may rely on features or changes in newer versions of tempest, or other dependencies, that are not available in an environment that is running an older cloud. 
We will apply those tags in the series-specific deliverable files in openstack/releases so that the version numbers appear together on releases.openstack.org on the relevant release page so that users looking for the "rocky" version of a plugin can find it easily. Doug > > Dmitry > > On 08/12/2018 10:41 AM, Ghanshyam Mann wrote: > > Hi All, > > > > Rocky release is few weeks away and we all agreed to release Tempest plugin with cycle-with-intermediary. Detail discussion are in ML [1] in case you missed. > > > > This is reminder to tag your project tempest plugins for Rocky release. You should be able to find your plugins deliverable file under rocky folder in releases repo[3]. You can refer cinder-tempest-plugin release as example. > > > > Feel free to reach to release/QA team for any help/query. > > > > > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html > > [2] https://review.openstack.org/#/c/590025/ > > [3] https://github.com/openstack/releases/tree/master/deliverables/rocky > > > > -gmann > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From jim at jimrollenhagen.com Mon Aug 13 13:50:34 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 13 Aug 2018 09:50:34 -0400 Subject: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release In-Reply-To: <1534167743-sup-5403@lrrr.local> References: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com> <90e7a800-9cf2-8c81-47f3-e16ba4def85e@redhat.com> <1534167743-sup-5403@lrrr.local> Message-ID: // jim On Mon, Aug 13, 2018 at 9:46 AM, Doug Hellmann wrote: > Excerpts from Dmitry Tantsur's message of
2018-08-13 15:35:23 +0200: > > Hi, > > > > The plugins are branchless and should stay so. Let us not dive into this > madness > > again please. > > You are correct that we do not want to branch, because we want the > same tests running against all branches of services in our CI system > to help us avoid (or at least recognize) API-breaking changes across > release boundaries. > > We *do* need to tag so that people consuming the plugins to certify > their clouds know which version of the plugin works with the version > of the software they are installing. Newer versions of plugins may > rely on features or changes in newer versions of tempest, or other > dependencies, that are not available in an environment that is > running an older cloud. > > We will apply those tags in the series-specific deliverable files in > openstack/releases so that the version numbers appear together on > releases.openstack.org on the relevant release page so that users > looking for the "rocky" version of a plugin can find it easily. > Thanks Doug. My confusion was around the cycle-with-intermediary model, which I thought implied a stable branch. Tagging at end of cycle seems fine to me. :) // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Mon Aug 13 13:51:56 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 13 Aug 2018 15:51:56 +0200 Subject: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release In-Reply-To: <1534167743-sup-5403@lrrr.local> References: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com> <90e7a800-9cf2-8c81-47f3-e16ba4def85e@redhat.com> <1534167743-sup-5403@lrrr.local> Message-ID: On 08/13/2018 03:46 PM, Doug Hellmann wrote: > Excerpts from Dmitry Tantsur's message of 2018-08-13 15:35:23 +0200: >> Hi, >> >> The plugins are branchless and should stay so. Let us not dive into this madness >> again please. 
> > You are correct that we do not want to branch, because we want the > same tests running against all branches of services in our CI system > to help us avoid (or at least recognize) API-breaking changes across > release boundaries. Okay, thank you for clarification. I stand corrected and apologize if my frustration was expressed too loudly or harshly :) > > We *do* need to tag so that people consuming the plugins to certify > their clouds know which version of the plugin works with the version > of the software they are installing. Newer versions of plugins may > rely on features or changes in newer versions of tempest, or other > dependencies, that are not available in an environment that is > running an older cloud. ++ > > We will apply those tags in the series-specific deliverable files in > openstack/releases so that the version numbers appear together on > releases.openstack.org on the relevant release page so that users > looking for the "rocky" version of a plugin can find it easily. Okay, this makes sense now. > > Doug > >> >> Dmitry >> >> On 08/12/2018 10:41 AM, Ghanshyam Mann wrote: >>> Hi All, >>> >>> Rocky release is few weeks away and we all agreed to release Tempest plugin with cycle-with-intermediary. Detail discussion are in ML [1] in case you missed. >>> >>> This is reminder to tag your project tempest plugins for Rocky release. You should be able to find your plugins deliverable file under rocky folder in releases repo[3]. You can refer cinder-tempest-plugin release as example. >>> >>> Feel free to reach to release/QA team for any help/query.
>> >>> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html >>> [2] https://review.openstack.org/#/c/590025/ >>> [3] https://github.com/openstack/releases/tree/master/deliverables/rocky >>> >>> -gmann >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jimmy at openstack.org Mon Aug 13 13:53:03 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 13 Aug 2018 08:53:03 -0500 Subject: [openstack-dev] Speaker Selection Process: OpenStack Summit Berlin Message-ID: <5B718D3F.9030202@openstack.org> Greetings! The speakers for the OpenStack Summit Berlin will be announced August 14, at 4:00 AM UTC. Ahead of that, we want to take this opportunity to thank our Programming Committee! They have once again taken time out of their busy schedules to help create another round of outstanding content for the OpenStack Summit. The OpenStack Foundation relies on the community-nominated Programming Committee, along with your Community Votes to select the content of the summit. If you're curious about this process, you can read more about it here where we have also listed the Programming Committee members. If you'd like to nominate yourself or someone you know for the OpenStack Summit Denver Programming Committee, you can do so here: https://openstackfoundation.formstack.com/forms/openstackdenver2019_programmingcommitteenom Thanks a bunch and we look forward to seeing everyone in Berlin!
Cheers, Jimmy -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Mon Aug 13 13:56:44 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 13 Aug 2018 09:56:44 -0400 Subject: [openstack-dev] [barbican][ara][helm][tempest] Removal of fedora-27 nodes In-Reply-To: <20180803000146.GA23278@localhost.localdomain> References: <20180803000146.GA23278@localhost.localdomain> Message-ID: <20180813135644.GA29768@localhost.localdomain> On Thu, Aug 02, 2018 at 08:01:46PM -0400, Paul Belanger wrote: > Greetings, > > We've had fedora-28 nodes online for some time in openstack-infra, I'd like to > finish the migration process and remove fedora-27 images. > > Please take a moment to review and approve the following patches[1]. We'll be > using the fedora-latest nodeset now, which makes it a little easier for > openstack-infra to migrate to newer versions of fedora. Next time around, we'll > send out an email to the ML once fedora-29 is online to give projects some time > to test before we make the change. > > Thanks > - Paul > > [1] https://review.openstack.org/#/q/topic:fedora-latest > Thanks for the approval of the patches above, today we are blocked by the following backport for barbican[2]. If we can land this today, we can proceed with the removal from nodepool.
Thanks - Paul [2] https://review.openstack.org/590420/ From gmann at ghanshyammann.com Mon Aug 13 13:58:29 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 13 Aug 2018 22:58:29 +0900 Subject: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release In-Reply-To: <90e7a800-9cf2-8c81-47f3-e16ba4def85e@redhat.com> References: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com> <90e7a800-9cf2-8c81-47f3-e16ba4def85e@redhat.com> Message-ID: <1653394ba6d.e1f3f6c750214.7297904163901040186@ghanshyammann.com> ---- On Mon, 13 Aug 2018 22:35:23 +0900 Dmitry Tantsur wrote ---- > Hi, > > The plugins are branchless and should stay so. Let us not dive into this madness > again please. > > Dmitry > > On 08/12/2018 10:41 AM, Ghanshyam Mann wrote: > > Hi All, > > > > Rocky release is few weeks away and we all agreed to release Tempest plugin with cycle-with-intermediary. Detail discussion are in ML [1] in case you missed. > > > > This is reminder to tag your project tempest plugins for Rocky release. You should be able to find your plugins deliverable file under rocky folder in releases repo[3]. You can refer cinder-tempest-plugin release as example. > > > > Feel free to reach to release/QA team for any help/query. Not sure why this is being understood as cutting a branch for plugins. This thread is just to remind plugin owners to tag their plugins for the Rocky release. 'cycle-with-intermediary' does not always require cutting a branch; for plugins and tempest it just means releasing a tag for the current OpenStack release.
-gmann > > > > > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html > > [2] https://review.openstack.org/#/c/590025/ > > [3] https://github.com/openstack/releases/tree/master/deliverables/rocky > > > > -gmann > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From allison at openstack.org Mon Aug 13 13:59:31 2018 From: allison at openstack.org (Allison Price) Date: Mon, 13 Aug 2018 08:59:31 -0500 Subject: [openstack-dev] Speaker Selection Process: OpenStack Summit Berlin In-Reply-To: <5B718D3F.9030202@openstack.org> References: <5B718D3F.9030202@openstack.org> Message-ID: <5B515018-FDF4-49D8-89F0-DC3C8ED942CF@openstack.org> Hi everyone, One quick clarification. The speakers will be announced on August 14 at 1300 UTC / 4:00 AM PDT. Cheers, Allison > On Aug 13, 2018, at 8:53 AM, Jimmy McArthur wrote: > > Greetings! > > The speakers for the OpenStack Summit Berlin will be announced August 14, at 4:00 AM UTC. Ahead of that, we want to take this opportunity to thank our Programming Committee! They have once again taken time out of their busy schedules to help create another round of outstanding content for the OpenStack Summit. > > The OpenStack Foundation relies on the community-nominated Programming Committee, along with your Community Votes to select the content of the summit. 
If you're curious about this process, you can read more about it here where we have also listed the Programming Committee members. > > If you'd like to nominate yourself or someone you know for the OpenStack Summit Denver Programming Committee, you can do so here: > https://openstackfoundation.formstack.com/forms/openstackdenver2019_programmingcommitteenom > > Thanks a bunch and we look forward to seeing everyone in Berlin! > > Cheers, > Jimmy > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Aug 13 14:00:05 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 13 Aug 2018 10:00:05 -0400 Subject: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release In-Reply-To: References: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com> <90e7a800-9cf2-8c81-47f3-e16ba4def85e@redhat.com> <1534167743-sup-5403@lrrr.local> Message-ID: <1534168731-sup-4254@lrrr.local> Excerpts from Jim Rollenhagen's message of 2018-08-13 09:50:34 -0400: > // jim > > On Mon, Aug 13, 2018 at 9:46 AM, Doug Hellmann > wrote: > > > Excerpts from Dmitry Tantsur's message of 2018-08-13 15:35:23 +0200: > > > Hi, > > > > > > The plugins are branchless and should stay so. Let us not dive into this > > madness > > > again please. > > > > You are correct that we do not want to branch, because we want the > > same tests running against all branches of services in our CI system > > to help us avoid (or at least recognize) API-breaking changes across > > release boundaries. 
> > > > We *do* need to tag so that people consuming the plugins to certify > > their clouds know which version of the plugin works with the version > > of the software they are installing. Newer versions of plugins may > > rely on features or changes in newer versions of tempest, or other > > dependencies, that are not available in an environment that is > > running an older cloud. > > > > We will apply those tags in the series-specific deliverable files in > > openstack/releases so that the version numbers appear together on > > releases.openstack.org on the relevant release page so that users > > looking for the "rocky" version of a plugin can find it easily. > > > > Thanks Doug. My confusion was around the cycle-with-intermediary model, > which I thought implied a stable branch. Tagging at end of cycle seems > fine to me. :) > > // jim Normally cycle-with-intermediary would imply a branch, but it's really focused more around the releases than the branching. There's a separate flag that most deliverables don't need to use that controls the branching policy, and in this case we treat these repos as branchless. Doug From doug at doughellmann.com Mon Aug 13 14:01:33 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 13 Aug 2018 10:01:33 -0400 Subject: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release In-Reply-To: References: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com> <90e7a800-9cf2-8c81-47f3-e16ba4def85e@redhat.com> <1534167743-sup-5403@lrrr.local> Message-ID: <1534168812-sup-500@lrrr.local> Excerpts from Dmitry Tantsur's message of 2018-08-13 15:51:56 +0200: > On 08/13/2018 03:46 PM, Doug Hellmann wrote: > > Excerpts from Dmitry Tantsur's message of 2018-08-13 15:35:23 +0200: > >> Hi, > >> > >> The plugins are branchless and should stay so. Let us not dive into this madness > >> again please. 
> > > > You are correct that we do not want to branch, because we want the > > same tests running against all branches of services in our CI system > > to help us avoid (or at least recognize) API-breaking changes across > > release boundaries. > > Okay, thank you for clarification. I stand corrected and apologize if my > frustration was expressed too loudly or harshly :) Not at all. This is new territory, and we made a decision somewhat quickly, so I am not surprised that we need to do a little more work to communicate the results. > > > > > We *do* need to tag so that people consuming the plugins to certify > > their clouds know which version of the plugin works with the version > > of the software they are installing. Newer versions of plugins may > > rely on features or changes in newer versions of tempest, or other > > dependencies, that are not available in an environment that is > > running an older cloud. > > ++ > > > > > We will apply those tags in the series-specific deliverable files in > > openstack/releases so that the version numbers appear together on > > releases.openstack.org on the relevant release page so that users > > looking for the "rocky" version of a plugin can find it easily. > > Okay, this makes sense now. Good. Now, we just need someone to figure out where to write all of that down so we don't have to have the same conversation next cycle. :-) Doug > > > > > Doug > > > >> > >> Dmitry > >> > >> On 08/12/2018 10:41 AM, Ghanshyam Mann wrote: > >>> Hi All, > >>> > >>> Rocky release is few weeks away and we all agreed to release Tempest plugin with cycle-with-intermediary. Detail discussion are in ML [1] in case you missed. > >>> > >>> This is reminder to tag your project tempest plugins for Rocky release. You should be able to find your plugins deliverable file under rocky folder in releases repo[3]. You can refer cinder-tempest-plugin release as example. > >>> > >>> Feel free to reach to release/QA team for any help/query. 
> >> > >> Please make up your mind. Please. Please. Please. > >> > >>> > >>> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html > >>> [2] https://review.openstack.org/#/c/590025/ > >>> [3] https://github.com/openstack/releases/tree/master/deliverables/rocky > >>> > >>> -gmann > >>> > >>> > >>> > >>> __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From gmann at ghanshyammann.com Mon Aug 13 14:03:18 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 13 Aug 2018 23:03:18 +0900 Subject: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release In-Reply-To: <1534167743-sup-5403@lrrr.local> References: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com> <90e7a800-9cf2-8c81-47f3-e16ba4def85e@redhat.com> <1534167743-sup-5403@lrrr.local> Message-ID: <16533992115.12b0b7a10170.919690754664039341@ghanshyammann.com> ---- On Mon, 13 Aug 2018 22:46:42 +0900 Doug Hellmann wrote ---- > Excerpts from Dmitry Tantsur's message of 2018-08-13 15:35:23 +0200: > > Hi, > > > > The plugins are branchless and should stay so. Let us not dive into this madness > > again please. > > You are correct that we do not want to branch, because we want the > same tests running against all branches of services in our CI system > to help us avoid (or at least recognize) API-breaking changes across > release boundaries. 
> > We *do* need to tag so that people consuming the plugins to certify > their clouds know which version of the plugin works with the version > of the software they are installing. Newer versions of plugins may > rely on features or changes in newer versions of tempest, or other > dependencies, that are not available in an environment that is > running an older cloud. > > We will apply those tags in the series-specific deliverable files in > openstack/releases so that the version numbers appear together on > releases.openstack.org on the relevant release page so that users > looking for the "rocky" version of a plugin can find it easily. Thanks Doug for clarifying it again :). Details can be found on original ML[1] also about goal behind tagging the plugins. Next item pending on branchless testing is to setup the Plugin CI jobs for stable branches also like Tempest does. That is one item for QA team to help plugins in stein. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html -gmann > > Doug > > > > > Dmitry > > > > On 08/12/2018 10:41 AM, Ghanshyam Mann wrote: > > > Hi All, > > > > > > Rocky release is few weeks away and we all agreed to release Tempest plugin with cycle-with-intermediary. Detail discussion are in ML [1] in case you missed. > > > > > > This is reminder to tag your project tempest plugins for Rocky release. You should be able to find your plugins deliverable file under rocky folder in releases repo[3]. You can refer cinder-tempest-plugin release as example. > > > > > > Feel free to reach to release/QA team for any help/query. > > > > Please make up your mind. Please. Please. Please. 
> > > > > > > > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html > > > [2] https://review.openstack.org/#/c/590025/ > > > [3] https://github.com/openstack/releases/tree/master/deliverables/rocky > > > > > > -gmann > > > > > > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mbooth at redhat.com Mon Aug 13 14:10:44 2018 From: mbooth at redhat.com (Matthew Booth) Date: Mon, 13 Aug 2018 15:10:44 +0100 Subject: [openstack-dev] [nova] Do we still want to lowercase metadata keys? In-Reply-To: <0af69e26-e73e-9257-1ca0-e2c43cde9a5d@gmail.com> References: <0af69e26-e73e-9257-1ca0-e2c43cde9a5d@gmail.com> Message-ID: On Mon, 13 Aug 2018 at 14:05, Jay Pipes wrote: > > On 08/13/2018 06:06 AM, Matthew Booth wrote: > > Thanks mriedem for answering my previous question, and also pointing > > out the related previous spec around just forcing all metadata to be > > lowercase: > > > > (Spec: approved in Newton) https://review.openstack.org/#/c/311529/ > > (Online migration: not merged, abandoned) > > https://review.openstack.org/#/c/329737/ > > > > There are other code patches, but the above is representative. What I > > had read was the original bug: > > > > https://bugs.launchpad.net/nova/+bug/1538011 > > > > The tl;dr is that the default collation used by MySQL results in a bug > > when creating 2 metadata keys which differ only in case. 
The proposal > > was obviously to simply make all metadata keys lower case. However, as > > melwitt pointed out in the bug at the time that's a potentially user > > hostile change. After some lost IRC discussion it seems that folks > > believed at the time that to fix this properly would seriously > > compromise the performance of these queries. The agreed way forward > > was to allow existing keys to keep their case, but force new keys to > > be lower case (so I wonder how the above online migration came > > about?). > > > > Anyway, as Rajesh's patch shows, it's actually very easy just to fix > > the MySQL misconfiguration: > > > > https://review.openstack.org/#/c/504885/ > > > > So my question is, given that the previous series remains potentially > > user hostile, the fix isn't as complex as previously believed, and it > > doesn't involve a performance penalty, are there any other reasons why > > we might want to resurrect it rather than just go with Rajesh's patch? > > Or should we ask Rajesh to expand his patch into a series covering > > other metadata? > > Keep in mind this patch is only related to *aggregate* metadata, AFAICT. Right, but the original bug pointed out that the same problem applies equally to a bunch of different metadata stores. I haven't verified, but the provenance was good ;) There would have to be other patches for the other metadata stores. > > Any patch series that tries to "fix" this issue needs to include all of > the following: > > * input automatically lower-cased [1] > * inline (note: not online, inline) data migration inside the > InstanceMeta object's _from_db_object() method for existing > non-lowercased keys I suspect I've misunderstood, but I was arguing this is an anti-goal. There's no reason to do this if the db is working correctly, and it would violate the principal of least surprise in dbs with legacy datasets (being all current dbs). These values have always been mixed case, lets just leave them be and fix the db. 
> * change the collation of the aggregate_metadata.key column (note: this > will require an entire rebuild of the table, since this column is part > of a unique constraint [3] Rajesh's patch changes the collation of the table, which I would assume applies to its columns? I assume this is going to be a moderately expensive, but one-off, operation similar in cost to adding a new unique constraint. > * online data migration for migrating non-lowercased keys to their > lowercased counterpars (essentially doing `UPDATE key = LOWER(key) WHERE > LOWER(key) != key` once the collation has been changed) > None of the above touches the API layer. I suppose some might argue that > the REST API should be microversion-bumped since the expected behaviour > of the API will change (data will be transparently changed in one > version of the API and not another). I don't personally think that's > something I would require a microversion for, but who knows what others > may say. Again, I was considering this is an anti-goal. As I understand, Rajesh's patch removes the requirement to make this api change. What did I miss? 
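The collation behaviour under discussion is easy to reproduce in miniature. The sketch below uses SQLite's NOCASE collation as a stand-in for MySQL's default case-insensitive utf8_general_ci (an approximation only; the table and column names are invented for illustration, not taken from the nova schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# MySQL's default utf8 collation compares case-insensitively, so a UNIQUE
# constraint treats 'abc' and 'ABC' as the same key.  SQLite's NOCASE
# collation reproduces that behaviour here.
cur.execute("CREATE TABLE meta_ci (meta_key TEXT COLLATE NOCASE UNIQUE, value TEXT)")
cur.execute("INSERT INTO meta_ci VALUES ('abc', '1')")
try:
    cur.execute("INSERT INTO meta_ci VALUES ('ABC', '2')")
    collided = False
except sqlite3.IntegrityError:
    collided = True  # second key rejected as a duplicate of 'abc'

# With a binary (case-sensitive) collation -- which is what changing the
# collation of the MySQL column achieves -- both keys coexist.
cur.execute("CREATE TABLE meta_cs (meta_key TEXT UNIQUE, value TEXT)")
cur.execute("INSERT INTO meta_cs VALUES ('abc', '1')")
cur.execute("INSERT INTO meta_cs VALUES ('ABC', '2')")

print(collided)  # True
print(cur.execute("SELECT COUNT(*) FROM meta_cs").fetchone()[0])  # 2
```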
Thanks, Matt > > Best, > -jay > > [1] > https://github.com/openstack/nova/blob/16f89fd093217d22530570e8277b561ea79f46ff/nova/objects/aggregate.py#L295 > and > https://github.com/openstack/nova/blob/16f89fd093217d22530570e8277b561ea79f46ff/nova/objects/aggregate.py#L331 > and > https://github.com/openstack/nova/blob/16f89fd093217d22530570e8277b561ea79f46ff/nova/objects/aggregate.py#L356 > > > [2] > https://github.com/openstack/nova/blob/16f89fd093217d22530570e8277b561ea79f46ff/nova/objects/aggregate.py#L248 > > [3] > https://github.com/openstack/nova/blob/16f89fd093217d22530570e8277b561ea79f46ff/nova/db/sqlalchemy/api_models.py#L64 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From gmann at ghanshyammann.com Mon Aug 13 14:12:51 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 13 Aug 2018 23:12:51 +0900 Subject: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release In-Reply-To: <1534168812-sup-500@lrrr.local> References: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com> <90e7a800-9cf2-8c81-47f3-e16ba4def85e@redhat.com> <1534167743-sup-5403@lrrr.local> <1534168812-sup-500@lrrr.local> Message-ID: <16533a1e0e7.d47eeec4656.8022181314291607201@ghanshyammann.com> ---- On Mon, 13 Aug 2018 23:01:33 +0900 Doug Hellmann wrote ---- > Excerpts from Dmitry Tantsur's message of 2018-08-13 15:51:56 +0200: > > On 08/13/2018 03:46 PM, Doug Hellmann wrote: > > > Excerpts from Dmitry Tantsur's message of 2018-08-13 15:35:23 +0200: > > >> Hi, > > >> > > >> The plugins are branchless and should stay so. Let us not dive into this madness > > >> again please. 
> > > > > > You are correct that we do not want to branch, because we want the > > > same tests running against all branches of services in our CI system > > > to help us avoid (or at least recognize) API-breaking changes across > > > release boundaries. > > > > Okay, thank you for clarification. I stand corrected and apologize if my > > frustration was expressed too loudly or harshly :) > > Not at all. This is new territory, and we made a decision somewhat > quickly, so I am not surprised that we need to do a little more work to > communicate the results. > > > > > > > > > We *do* need to tag so that people consuming the plugins to certify > > > their clouds know which version of the plugin works with the version > > > of the software they are installing. Newer versions of plugins may > > > rely on features or changes in newer versions of tempest, or other > > > dependencies, that are not available in an environment that is > > > running an older cloud. > > > > ++ > > > > > > > > We will apply those tags in the series-specific deliverable files in > > > openstack/releases so that the version numbers appear together on > > > releases.openstack.org on the relevant release page so that users > > > looking for the "rocky" version of a plugin can find it easily. > > > > Okay, this makes sense now. > > Good. > > Now, we just need someone to figure out where to write all of that down > so we don't have to have the same conversation next cycle. :-) +1, this is very imp. I was discussing the same with amotoki today on QA channel. I have added a TODO for me to write the 1. "How Plugins should cover the stable branch testing with branchless repo" now i can add 2nd TODO also 2. "Release model & tagging clarification of Tempest Plugins". I do not know the best common place to add those doc but as start i can write those in Tempest doc and later we can refer/move the same on Plugins side too. 
I have added this TODO on qa stein ptg etherpad also for reminder/feedback- https://etherpad.openstack.org/p/qa-stein-ptg -gmann > > Doug > > > > > > > > > Doug > > > > > >> > > >> Dmitry > > >> > > >> On 08/12/2018 10:41 AM, Ghanshyam Mann wrote: > > >>> Hi All, > > >>> > > >>> Rocky release is few weeks away and we all agreed to release Tempest plugin with cycle-with-intermediary. Detail discussion are in ML [1] in case you missed. > > >>> > > >>> This is reminder to tag your project tempest plugins for Rocky release. You should be able to find your plugins deliverable file under rocky folder in releases repo[3]. You can refer cinder-tempest-plugin release as example. > > >>> > > >>> Feel free to reach to release/QA team for any help/query. > > >> > > >> Please make up your mind. Please. Please. Please. > > >> > > >>> > > >>> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html > > >>> [2] https://review.openstack.org/#/c/590025/ > > >>> [3] https://github.com/openstack/releases/tree/master/deliverables/rocky > > >>> > > >>> -gmann > > >>> > > >>> > > >>> > > >>> __________________________________________________________________________ > > >>> OpenStack Development Mailing List (not for usage questions) > > >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >>> > > >> > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From matt at nycresistor.com Mon Aug 13 14:20:05 2018 From: matt at nycresistor.com (Matt Joyce) Date: Mon, 13 Aug 2018 10:20:05 -0400 Subject: [openstack-dev] [Openstack-operators] Speaker Selection Process: OpenStack Summit Berlin In-Reply-To: <5B515018-FDF4-49D8-89F0-DC3C8ED942CF@openstack.org> References: <5B718D3F.9030202@openstack.org> <5B515018-FDF4-49D8-89F0-DC3C8ED942CF@openstack.org> Message-ID: CFP work is hard as hell. Much respect to the review panel members. It's a thankless difficult job. So, in lieu of being thankless, THANK YOU -Matt On Mon, Aug 13, 2018 at 9:59 AM, Allison Price wrote: > Hi everyone, > > One quick clarification. The speakers will be announced on* August 14 at > 1300 UTC / 4:00 AM PDT.* > > Cheers, > Allison > > > On Aug 13, 2018, at 8:53 AM, Jimmy McArthur wrote: > > Greetings! > > The speakers for the OpenStack Summit Berlin will be announced August 14, > at 4:00 AM UTC. Ahead of that, we want to take this opportunity to thank > our Programming Committee! They have once again taken time out of their > busy schedules to help create another round of outstanding content for the > OpenStack Summit. > > The OpenStack Foundation relies on the community-nominated Programming > Committee, along with your Community Votes to select the content of the > summit. If you're curious about this process, you can read more about it > here > > where we have also listed the Programming Committee members. > > If you'd like to nominate yourself or someone you know for the OpenStack > Summit Denver Programming Committee, you can do so here: > https://openstackfoundation.formstack.com/forms/openstackdenver2019_ > programmingcommitteenom > > Thanks a bunch and we look forward to seeing everyone in Berlin! 
> > Cheers, > Jimmy > > > > > * > * > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Mon Aug 13 14:26:09 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 13 Aug 2018 10:26:09 -0400 Subject: [openstack-dev] [nova] Do we still want to lowercase metadata keys? In-Reply-To: References: <0af69e26-e73e-9257-1ca0-e2c43cde9a5d@gmail.com> Message-ID: On 08/13/2018 10:10 AM, Matthew Booth wrote: > On Mon, 13 Aug 2018 at 14:05, Jay Pipes wrote: >> >> On 08/13/2018 06:06 AM, Matthew Booth wrote: >>> Thanks mriedem for answering my previous question, and also pointing >>> out the related previous spec around just forcing all metadata to be >>> lowercase: >>> >>> (Spec: approved in Newton) https://review.openstack.org/#/c/311529/ >>> (Online migration: not merged, abandoned) >>> https://review.openstack.org/#/c/329737/ >>> >>> There are other code patches, but the above is representative. What I >>> had read was the original bug: >>> >>> https://bugs.launchpad.net/nova/+bug/1538011 >>> >>> The tl;dr is that the default collation used by MySQL results in a bug >>> when creating 2 metadata keys which differ only in case. The proposal >>> was obviously to simply make all metadata keys lower case. However, as >>> melwitt pointed out in the bug at the time that's a potentially user >>> hostile change. 
After some lost IRC discussion it seems that folks >>> believed at the time that to fix this properly would seriously >>> compromise the performance of these queries. The agreed way forward >>> was to allow existing keys to keep their case, but force new keys to >>> be lower case (so I wonder how the above online migration came >>> about?). >>> >>> Anyway, as Rajesh's patch shows, it's actually very easy just to fix >>> the MySQL misconfiguration: >>> >>> https://review.openstack.org/#/c/504885/ >>> >>> So my question is, given that the previous series remains potentially >>> user hostile, the fix isn't as complex as previously believed, and it >>> doesn't involve a performance penalty, are there any other reasons why >>> we might want to resurrect it rather than just go with Rajesh's patch? >>> Or should we ask Rajesh to expand his patch into a series covering >>> other metadata? >> >> Keep in mind this patch is only related to *aggregate* metadata, AFAICT. > > Right, but the original bug pointed out that the same problem applies > equally to a bunch of different metadata stores. I haven't verified, > but the provenance was good ;) There would have to be other patches > for the other metadata stores. Yes, it is quite unfortunate that OpenStack has about 15 different ways of storing metadata key/value information. >> >> Any patch series that tries to "fix" this issue needs to include all of >> the following: >> >> * input automatically lower-cased [1] >> * inline (note: not online, inline) data migration inside the >> InstanceMeta object's _from_db_object() method for existing >> non-lowercased keys > > I suspect I've misunderstood, but I was arguing this is an anti-goal. > There's no reason to do this if the db is working correctly, and it > would violate the principal of least surprise in dbs with legacy > datasets (being all current dbs). These values have always been mixed > case, lets just leave them be and fix the db. 
Do you want case-insensitive keys or do you not want case-insensitive keys? It seems to me that people complain that MySQL is case-insensitive by default but actually *like* the concept that a metadata key of "abc" should be "equal to" a metadata key of "ABC". In other words, it seems to me that users actually expect that: > nova aggregate-create agg1 > nova aggregate-set-metadata agg1 abc=1 > nova aggregate-set-metadata agg1 ABC=2 should result in the original "abc" metadata item getting its value set to "2". If that isn't the case -- and I have a very different impression of what users *actually* expect from the CLI/UI -- then let me know. -jay From amy at demarco.com Mon Aug 13 14:27:13 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 13 Aug 2018 09:27:13 -0500 Subject: [openstack-dev] User Committee Election Nominations Reminder Message-ID: Just wanted to remind everyone that the nomination period for the User Committee elections are open until August 17, 05:59 UTC. If you are an AUC and thinking about running what's stopping you? If you know of someone who would make a great committee member nominate them! Help make a difference for Operators, Users and the Community! Thanks, Amy Marrich (spotz) User Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Aug 13 14:35:24 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 13 Aug 2018 10:35:24 -0400 Subject: [openstack-dev] [tc] Technical Committee update for 13 Aug Message-ID: <1534170829-sup-2598@lrrr.local> This is the weekly summary of work being done by the Technical Committee members. 
The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == Project updates: - add tripleo ansible repo: https://review.openstack.org/#/c/583416/ - add openstack-chef repo: https://review.openstack.org/#/c/585473/ Other changes: - Stein goal for upgrade check automation: https://review.openstack.org/#/c/585491/ - confirm the Stein election results: https://review.openstack.org/#/c/589691/ - confirm Darisz Krol PTL for Trove: https://review.openstack.org/#/c/588510/ - confirm Dirk Mueller PTL for rpm-packaging project: https://review.openstack.org/#/c/588617/ - remove expired extra ATCs: https://review.openstack.org/#/c/588586/ == Leaderless teams after PTL elections == As part of the discussion of how to deal with re-appointments of PTLs where no governance patch would be needed and hence we would have no place to vote, Chris has proposed a schema change that will let us handle the changes in the governance repo instead of updating the election repository after the election is over. - https://review.openstack.org/#/c/590790/ The RefStack team is being dissolved and the repositories moved to the interop working group. - https://review.openstack.org/#/c/590179/ Claudiu Belu volunteered to serve as PTL for the Winstackers team again for Stein. - https://review.openstack.org/#/c/590386/ We now have volunteers to serve as PTL for both Freezer and Searchlight, so we need to decide what action to take with those teams. 
- removing both from the rocky release: https://review.openstack.org/#/c/588605/ - removing freezer from governance: http://lists.openstack.org/pipermail/openstack-dev/2018-August/132873.html and https://review.openstack.org/#/c/588645/ - removing searchlight from governance: http://lists.openstack.org/pipermail/openstack-dev/2018-August/132874.html and https://review.openstack.org/#/c/588644/ - Trinh Nguyen has volunteered to be PTL for Searchlight: https://review.openstack.org/#/c/590601/ - Changcai Geng has volunteered to be PTL for Freezer: https://review.openstack.org/#/c/590071/ == Ongoing Discussions == Ian has updated his proposals to change the project testing interface to support PDF generation and documentation translation. These need to be reviewed by folks familiar with the tools and processes. - https://review.openstack.org/#/c/572559/ - https://review.openstack.org/#/c/588110/ The TC is planning 2 meetings during the week of the PTG. The proposed agendas are up for comment. - https://etherpad.openstack.org/p/tc-stein-ptg == TC member actions/focus/discussions for the coming week(s) == The PTG is approaching quickly. Please complete any remaining team health checks. Besides the items listed above as ongoing discussions, we have several other governance reviews open without sufficient votes to be approved. Please review. - https://review.openstack.org/#/q/project:openstack/governance+is:open == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. 
Office hour times in #openstack-tc: - 09:00 UTC on Tuesdays - 01:00 UTC on Wednesdays - 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. You will find channel logs with past conversations at http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From doug at doughellmann.com Mon Aug 13 14:39:18 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 13 Aug 2018 10:39:18 -0400 Subject: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release In-Reply-To: <16533a1e0e7.d47eeec4656.8022181314291607201@ghanshyammann.com> References: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com> <90e7a800-9cf2-8c81-47f3-e16ba4def85e@redhat.com> <1534167743-sup-5403@lrrr.local> <1534168812-sup-500@lrrr.local> <16533a1e0e7.d47eeec4656.8022181314291607201@ghanshyammann.com> Message-ID: <1534171097-sup-1733@lrrr.local> Excerpts from Ghanshyam Mann's message of 2018-08-13 23:12:51 +0900: > > > > ---- On Mon, 13 Aug 2018 23:01:33 +0900 Doug Hellmann wrote ---- > > Excerpts from Dmitry Tantsur's message of 2018-08-13 15:51:56 +0200: > > > On 08/13/2018 03:46 PM, Doug Hellmann wrote: > > > > Excerpts from Dmitry Tantsur's message of 2018-08-13 15:35:23 +0200: > > > >> Hi, > > > >> > > > >> The plugins are branchless 
and should stay so. Let us not dive into this madness > > > >> again please. > > > > > > > > You are correct that we do not want to branch, because we want the > > > > same tests running against all branches of services in our CI system > > > > to help us avoid (or at least recognize) API-breaking changes across > > > > release boundaries. > > > > > > Okay, thank you for clarification. I stand corrected and apologize if my > > > frustration was expressed too loudly or harshly :) > > > > Not at all. This is new territory, and we made a decision somewhat > > quickly, so I am not surprised that we need to do a little more work to > > communicate the results. > > > > > > > > > > > > > We *do* need to tag so that people consuming the plugins to certify > > > > their clouds know which version of the plugin works with the version > > > > of the software they are installing. Newer versions of plugins may > > > > rely on features or changes in newer versions of tempest, or other > > > > dependencies, that are not available in an environment that is > > > > running an older cloud. > > > > > > ++ > > > > > > > > > > > We will apply those tags in the series-specific deliverable files in > > > > openstack/releases so that the version numbers appear together on > > > > releases.openstack.org on the relevant release page so that users > > > > looking for the "rocky" version of a plugin can find it easily. > > > > > > Okay, this makes sense now. > > > > Good. > > > > Now, we just need someone to figure out where to write all of that down > > so we don't have to have the same conversation next cycle. :-) > > +1, this is very imp. I was discussing the same with amotoki today on QA channel. I have added a TODO for me to write the 1. "How Plugins should cover the stable branch testing with branchless repo" now i can add 2nd TODO also 2. "Release model & tagging clarification of Tempest Plugins". 
I do not know the best common place to add those doc but as start i can write those in Tempest doc and later we can refer/move the same on Plugins side too. > > I have added this TODO on qa stein ptg etherpad also for reminder/feedback- https://etherpad.openstack.org/p/qa-stein-ptg We have a reference page for deliverable types in the releases repository (https://releases.openstack.org/reference/deliverable_types.html). That could be a place to talk about the tagging and branching expectations. It doesn't cover tempest-plugins at all, yet. Doug From sean.mcginnis at gmx.com Mon Aug 13 15:22:48 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 13 Aug 2018 10:22:48 -0500 Subject: [openstack-dev] [PTL][TC] Stein Cycle Goals Message-ID: <20180813152247.GA25512@sm-workstation> We now have two cycle goals accepted for the Stein cycle. I think both are very beneficial goals to work towards, so personally I am very happy with where we landed on this. The two goals, with links to their full descriptions and nitty gritty details, can be found here: https://governance.openstack.org/tc/goals/stein/index.html Goals ===== Here are some high level details on the goals. Run under Python 3 by default (python3-first) --------------------------------------------- In Pike we had a goal for all projects to support Python 3.5. As a continuation of that effort, and in preparation for the EOL of Python 2, we now want to look at all of the ancillary things around projects and make sure that we are using Python 3 everywhere except those jobs explicitly intended for testing Python 2 support. This means all docs, linters, and other tools and utility jobs we use should be run using Python 3. https://governance.openstack.org/tc/goals/stein/python3-first.html Thanks to Doug Hellmann, Nguyễn Trí Hải, Ma Lei, and Huang Zhiping for championing this goal. 
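As a quick illustration of the kind of breakage those jobs exist to catch, the text/bytes split is the classic behavioural difference between the two interpreters (a generic Python example, not taken from any particular OpenStack job):

```python
# Python 3 separates text (str) from binary data (bytes); Python 2
# coerced between them implicitly, which let latent bugs slip through
# when docs/linter jobs were still run under the wrong interpreter.
data = "naïve".encode("utf-8")

assert isinstance(data, bytes)
assert data.decode("utf-8") == "naïve"

try:
    "prefix-" + data  # Python 2 would have silently "worked" here
except TypeError:
    print("Python 3 refuses to mix str and bytes implicitly")
```

Running linters and doc builds under Python 3 surfaces exactly this class of error before the Python 2 EOL forces the issue.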
Support Pre Upgrade Checks (upgrade-checkers)
---------------------------------------------

One of the hot topics we've been discussing for some time at Forum and PTG events has been making upgrades better. To that end, we want to add tooling for each service to provide an "upgrade checker" tool that can check for various known issues so we can either give operators some assurance that they are ready to upgrade, or to let them know if some step was overlooked that will need to be done before attempting the upgrade.

This goal follows the Nova `nova-status upgrade check` command precedent to make it a consistent capability for each service. The checks should look for things like missing or changed configuration options, incompatible object states, or other conditions that could lead to failures upgrading that project.

More details can be found in the goal: https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html

Thanks to Matt Riedemann for championing this goal.

Schedule
========

We hope to have all projects complete this goal by the week of March 4, 2019: https://releases.openstack.org/stein/schedule.html

This is the same week as the Stein-3 milestone, as well as Feature Freeze and client lib freeze.

Future Goals
============

We welcome any ideas for future cycle goals. Ideally these should be things that can actually be accomplished within one development cycle and would have a positive, and hopefully visible, impact for users and operators. Feel free to pitch any ideas here on the mailing list or drop by the #openstack-tc channel at any point.

Thanks!

-- Sean (smcginnis)

From mbooth at redhat.com Mon Aug 13 15:29:14 2018 From: mbooth at redhat.com (Matthew Booth) Date: Mon, 13 Aug 2018 16:29:14 +0100 Subject: [openstack-dev] [nova] Do we still want to lowercase metadata keys?
In-Reply-To: References: <0af69e26-e73e-9257-1ca0-e2c43cde9a5d@gmail.com> Message-ID: On Mon, 13 Aug 2018 at 15:27, Jay Pipes wrote: > > On 08/13/2018 10:10 AM, Matthew Booth wrote: > > On Mon, 13 Aug 2018 at 14:05, Jay Pipes wrote: > >> > >> On 08/13/2018 06:06 AM, Matthew Booth wrote: > >>> Thanks mriedem for answering my previous question, and also pointing > >>> out the related previous spec around just forcing all metadata to be > >>> lowercase: > >>> > >>> (Spec: approved in Newton) https://review.openstack.org/#/c/311529/ > >>> (Online migration: not merged, abandoned) > >>> https://review.openstack.org/#/c/329737/ > >>> > >>> There are other code patches, but the above is representative. What I > >>> had read was the original bug: > >>> > >>> https://bugs.launchpad.net/nova/+bug/1538011 > >>> > >>> The tl;dr is that the default collation used by MySQL results in a bug > >>> when creating 2 metadata keys which differ only in case. The proposal > >>> was obviously to simply make all metadata keys lower case. However, as > >>> melwitt pointed out in the bug at the time that's a potentially user > >>> hostile change. After some lost IRC discussion it seems that folks > >>> believed at the time that to fix this properly would seriously > >>> compromise the performance of these queries. The agreed way forward > >>> was to allow existing keys to keep their case, but force new keys to > >>> be lower case (so I wonder how the above online migration came > >>> about?). > >>> > >>> Anyway, as Rajesh's patch shows, it's actually very easy just to fix > >>> the MySQL misconfiguration: > >>> > >>> https://review.openstack.org/#/c/504885/ > >>> > >>> So my question is, given that the previous series remains potentially > >>> user hostile, the fix isn't as complex as previously believed, and it > >>> doesn't involve a performance penalty, are there any other reasons why > >>> we might want to resurrect it rather than just go with Rajesh's patch? 
> >>> Or should we ask Rajesh to expand his patch into a series covering > >>> other metadata? > >> > >> Keep in mind this patch is only related to *aggregate* metadata, AFAICT. > > > > Right, but the original bug pointed out that the same problem applies > > equally to a bunch of different metadata stores. I haven't verified, > > but the provenance was good ;) There would have to be other patches > > for the other metadata stores. > > Yes, it is quite unfortunate that OpenStack has about 15 different ways > of storing metadata key/value information. > > >> > >> Any patch series that tries to "fix" this issue needs to include all of > >> the following: > >> > >> * input automatically lower-cased [1] > >> * inline (note: not online, inline) data migration inside the > >> InstanceMeta object's _from_db_object() method for existing > >> non-lowercased keys > > > > I suspect I've misunderstood, but I was arguing this is an anti-goal. > > There's no reason to do this if the db is working correctly, and it > > would violate the principal of least surprise in dbs with legacy > > datasets (being all current dbs). These values have always been mixed > > case, lets just leave them be and fix the db. > > Do you want case-insensitive keys or do you not want case-insensitive keys? > > It seems to me that people complain that MySQL is case-insensitive by > default but actually *like* the concept that a metadata key of "abc" > should be "equal to" a metadata key of "ABC". > > In other words, it seems to me that users actually expect that: > > > nova aggregate-create agg1 > > nova aggregate-set-metadata agg1 abc=1 > > nova aggregate-set-metadata agg1 ABC=2 > > should result in the original "abc" metadata item getting its value set > to "2". > > If that isn't the case -- and I have a very different impression of what > users *actually* expect from the CLI/UI -- then let me know. 
I don't know what users want, tbh, I was simply coming from the POV of not breaking the current behaviour. Although I think you're pointing out that either solution breaks the current behaviour: 1. You lower case everything. This breaks users who query user metadata and don't expect keys to be modified. 2. You fix the case sensitivity. This breaks users who add 'Foo' and now expect to query 'foo'. You're saying that although (2) is an artifact of a bug, there could equally be people relying on it. Eurgh. Yeah, that sucks. Objectively though, I think I still like Rajesh's patch better because: * It's vastly simpler to implement correctly and verifiably, and therefore also less prone to future bugs. * It's how it was originally intended to work. * It's simpler to document. Of these, the first is by far the most persuasive. Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From mbooth at redhat.com Mon Aug 13 15:36:42 2018 From: mbooth at redhat.com (Matthew Booth) Date: Mon, 13 Aug 2018 16:36:42 +0100 Subject: [openstack-dev] [nova] Do we still want to lowercase metadata keys? In-Reply-To: References: <0af69e26-e73e-9257-1ca0-e2c43cde9a5d@gmail.com> Message-ID: On Mon, 13 Aug 2018 at 15:27, Jay Pipes wrote: > Do you want case-insensitive keys or do you not want case-insensitive keys? > > It seems to me that people complain that MySQL is case-insensitive by > default but actually *like* the concept that a metadata key of "abc" > should be "equal to" a metadata key of "ABC". > > In other words, it seems to me that users actually expect that: > > > nova aggregate-create agg1 > > nova aggregate-set-metadata agg1 abc=1 > > nova aggregate-set-metadata agg1 ABC=2 > > should result in the original "abc" metadata item getting its value set > to "2". Incidentally, this particular example won't work today: it will just throw an error. I believe the same would apply to user metadata on an instance. 
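The duplicate-key collision described here is easy to reproduce outside of Nova. Below is a minimal sketch using Python's sqlite3 module, with the NOCASE collation standing in for MySQL's default case-insensitive collation — Nova's real tables live in MySQL behind SQLAlchemy, so this is only an illustration of the collation behaviour, not of Nova's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Case-insensitive unique key, mimicking MySQL's default *_ci collation.
conn.execute("CREATE TABLE meta_ci (key TEXT COLLATE NOCASE UNIQUE)")
conn.execute("INSERT INTO meta_ci VALUES ('abc')")
try:
    conn.execute("INSERT INTO meta_ci VALUES ('ABC')")
except sqlite3.IntegrityError:
    # This is the error above: 'ABC' and 'abc' are equal to the index.
    print("case-insensitive collation rejects the second key")

# Binary (case-sensitive) collation, loosely analogous to fixing the
# collation on the MySQL tables instead.
conn.execute("CREATE TABLE meta_cs (key TEXT COLLATE BINARY UNIQUE)")
conn.execute("INSERT INTO meta_cs VALUES ('abc')")
conn.execute("INSERT INTO meta_cs VALUES ('ABC')")  # both rows coexist
```

Under the case-insensitive collation the second insert collides; under the binary collation both keys are stored as distinct rows.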
IOW this particular example doesn't regress if you fix the bug. The regression would be anything user-facing which queries by metadata key. What does that? Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From no-reply at openstack.org Mon Aug 13 15:42:39 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Mon, 13 Aug 2018 15:42:39 -0000 Subject: [openstack-dev] openstack-cyborg 1.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for openstack-cyborg for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/cyborg/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/openstack-cyborg/log/?h=stable/rocky Release notes for openstack-cyborg can be found at: https://docs.openstack.org/releasenotes/openstack-cyborg/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/openstack-cyborg and tag it *rocky-rc-potential* to bring it to the openstack-cyborg release crew's attention. From doug at doughellmann.com Mon Aug 13 15:42:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 13 Aug 2018 11:42:59 -0400 Subject: [openstack-dev] [goal][python3] week 1 update: here we go! Message-ID: <1534174674-sup-6906@lrrr.local> This is week 1 of the roll-out of the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). I intend to send a summary message like this roughly weekly to report progress on the Zuul setting migrations, because that portion of the goal needs to be coordinated. 
I will also be watching for changes to https://wiki.openstack.org/wiki/Python3 and will try to report on progress with functional or unit test jobs as teams log the information there. This email is a bit longer than I hope they will usually be, because it is the first and has some planning information. == The Plan == The completion criteria from the goal are: 1. The Zuul settings to attach jobs have been moved from the openstack-infra/project-config repository into each project repository. 2. Documentation build and publish jobs use python 3. 3. Release notes build and publish jobs use python 3. 4. Source code linter jobs (pep8, pylint, bandit, etc.) should use python 3. 5. Release artifact build and publish jobs use python 3 and publish to PyPI. 6. There are functional test jobs running under python 3. 7. The wiki tracking page is up to date for the project. 8. Projects are running python 3.6 unit test jobs. The goal champions will start by preparing the patches for steps 1-5, and 8. That will give project teams more time to focus on reviewing and on completing steps 6 and 7, which will need the expertise of someone on the team. PLEASE do not write the Zuul configuration migration patches yourselves. The rules for which things need to move ended up a bit complicated, it's all automated, and we do not want to start them before all of the stable/rocky branches are created for a team (otherwise we risk misconfiguring the stable branch). Sit tight and be ready to review the patches when we send them your way. We are waiting to start most teams until the final releases for Rocky are done, but we're going to start some of the teams less affected by that deadline early (like Oslo, Infra, and possibly Documentation). Project teams should be prepared to assist if there are any issues or questions about jobs with branch-specifiers. While the job migration is under way, changes to project-config for the team's repositories will be locked. 
As soon as the patches to import the job settings are landed for *all* of the repositories owned by a team we will update the project-config repo to remove those settings and then the job settings for the team will be unfrozen again. So, please review zuul configuration changes quickly, but carefully. :-) Step 6 involves adding more test jobs and may also require code changes or tox settings updates, so that work will be left to the project team. Step 7, updating the wiki tracking page to indicate what level of testing is being done and where any gaps might be, is the responsibility of the project team. Step 8 starts with a simple patch to add the unit test jobs, which will come as part of the rest of the zuul migration patches. If the tests do not immediately work, We will need the project teams to fix things up in the code or tox.ini files so the tests pass. == Tracking Progress == Because teams potentially need to do something in each repository, I have created separate stories for each team in storyboard, with one task per repo. Those stories are all tagged with 'goal-python3-first' and appear on the tracking board at https://storyboard.openstack.org/#!/board/104 The tasks on these stories are for teams to track their work. Feel free to add more tasks or otherwise elaborate on the stories as needed. If you add more stories, please use the same tag so they will show up on the board. Because not all teams have migrated to storyboard, some of the tasks are associated with the openstack/governance repo instead of a team repository. If your team wants to track work using a different tool, please come back and update the storyboard tasks as part of completing your goal so anyone tracking progress can find it all in one place. PTLs should update the stories based on the instructions in the goal procedures (https://governance.openstack.org/tc/goals/index.html#team-acknowledgment-of-goals ). 
If you include the story and task information in commit messages, some of that updating will happen for free (this is one of the reasons we're using storyboard). The goal champions will use a separate story, with one task per team, to track the work we are going to do with the zuul configuration migration. See https://storyboard.openstack.org/#!/story/2002586 for details.

== Ongoing work ==

We are making good progress with migrating the zuul settings for the Oslo team as a final test of the full process. We will start with other regular teams after the final release candidates for Rocky are done (after 24 Aug), and the cycle-trailing projects after their release deadline has passed (after 31 Aug).

== Next Steps ==

If your team is interested in being one of the first to have your zuul settings migrated, please let us know by following up to this email. We will start with the volunteers, and then start working our way through the other teams.

After the final Rocky release, I will propose the change to project-config to change all of the packaging jobs to use the new publish-to-pypi-python3 template with the intent of having that change in place before the first milestone for Stein so that we have an opportunity to test it. Besides standardizing all of the Python package building jobs, this change will add a new check job to try to build a package and verify that it could be published by running "setup.py check". This should not cause any problems for projects that were released during Rocky, because we added the same check to the release validation job in openstack/releases. However, it may cause problems for projects that were not released during Rocky. Let the release team know if you run into trouble and we will help figure out what's wrong. See https://governance.openstack.org/tc/goals/stein/python3-first.html#release-artifact-publishing for more details.

== How can you help? ==

The best way for most people to help is to review the patches for the zuul changes and to work on the functional test jobs. We have scripts to prepare the zuul patches, but they're a bit finicky and it's easier if we have a small team using them and fixing them as we go. The champions cannot do the review work or the functional test job work, so that is *really* the best way to help.

Another way to help is to pick up a patch that has failing tests and help fix it. For example, the patches to change the documentation jobs or add python 3.6 unit test jobs may need some code changes before they will work. The champions are signed up to write those patches, but won't necessarily be able to fix everything that is wrong. You'll find a list of goal-related patches failing tests at https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+)

== How can you ask for help? ==

If you have any questions, please post them here to the -dev list with the topic tag [python3] in the subject line. Posting questions here will give the widest audience the chance to see the answers.
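For anyone who would rather script that review query than keep a browser tab open, Gerrit exposes the same search through its REST API. A rough sketch — it assumes the standard Gerrit `/changes/` endpoint, and strips the `)]}'` anti-XSSI prefix Gerrit prepends to all JSON responses:

```python
import json
import urllib.request

GERRIT = "https://review.openstack.org"

def parse_gerrit_json(raw):
    # Gerrit prefixes JSON responses with ")]}'" on its own line to
    # defeat cross-site script inclusion; drop it before parsing.
    return json.loads(raw.split("\n", 1)[1])

def open_changes(topic="python3-first"):
    """Fetch the open changes for a review topic."""
    url = "%s/changes/?q=topic:%s+status:open" % (GERRIT, topic)
    with urllib.request.urlopen(url) as resp:
        return parse_gerrit_json(resp.read().decode("utf-8"))

# e.g. print subjects of all open goal patches:
#   for change in open_changes():
#       print(change["subject"])
```

Adding `+label:Verified-1` etc. to the query string narrows it to the failing patches, exactly as in the URL above.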
== Reference Material == Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open Storyboard: https://storyboard.openstack.org/#!/board/104 Zuul migration notes: https://etherpad.openstack.org/p/python3-first Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 From chris.friesen at windriver.com Mon Aug 13 15:46:41 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 13 Aug 2018 09:46:41 -0600 Subject: [openstack-dev] [nova] about live-resize down the instance In-Reply-To: References: Message-ID: <5B71A7E1.1090709@windriver.com> On 08/13/2018 02:07 AM, Rambo wrote: > Hi,all > > I find it is important that live-resize the instance in production > environment,especially live downsize the disk.And we have talked it many > years.But I don't know why the bp[1] didn't approved.Can you tell me more about > this ?Thank you very much. > > [1]https://review.openstack.org/#/c/141219/ It's been reviewed a number of times...I thought it was going to get approved for Rocky, but I think it didn't quite make it in...you'd have to ask the nova cores why not. It should be noted though that the above live-resize spec explicitly did not cover resizing smaller, only larger. Chris From aspiers at suse.com Mon Aug 13 15:47:30 2018 From: aspiers at suse.com (Adam Spiers) Date: Mon, 13 Aug 2018 16:47:30 +0100 Subject: [openstack-dev] [Openstack-sigs] [self-healing][heat][vitrage][mistral] Self-Healing with Vitrage, Heat, and Mistral In-Reply-To: References: Message-ID: <20180813154730.5w7lgrltggooqdow@pacific.linksys.moosehall> Hi Rico, Firstly sorry for the slow reply! I am finally catching up on my backlog. Rico Lin wrote: >Dear all > >Back to Vancouver Summit, Ifat brings out the idea of integrating Heat, >Vitrage, and Mistral to bring better self-healing scenario. 
>For previous works, There already works cross Heat, Mistral, and Zaqar for >self-healing [1]. >And there is works cross Vitrage, and Mistral [2]. >Now we plan to start working on integrating two works (as much as it >can/should be) and to make sure the scenario works and keep it working. >The integrated scenario flow will look something like this: >An existing monitor detect host/network failure and send an alarm to >Vitrage -> Vitrage deduces that the instance is down (based on the topology >and based on Vitrage templates [2]) -> Vitrage triggers Mistral to fix the >instance -> application is recovered >We created an Etherpad [3] to document all discussion/feedbacks/plans (and >will add more detail through time) >Also, create a story in self-healing SIG to track all task. > >The current plans are: > > - A spec for Vitrage resources in Heat [5] > - Create Vitrage resources in Heat > - Write Heat Template and Vitrage Template for this scenario > - A tempest task for above scenario > - Add periodic job for this scenario (with above task). The best place > to host this job (IMO) is under self-healing SIG This is great! It's a perfect example of the kind of cross-project collaboration which I always hoped the SIG would host. And I really love the idea of Heat making it even easier to deploy Vitrage templates automatically. Originally I thought that this would be too hard and that the SIG would initially need to focus on documenting how to manually deploy self-healing configurations, but supporting automation early on is a very nice bonus :-) So I expect that implementing this can make lives a lot easier for operators (and users) who need self-healing :-) And yes, I agree that the SIG would be the best place to host this job. >To create a periodic job for self-healing sig means we might also need a >place to manage those self-healing tempest test. 
For this scenario, I think >it will make sense if we use heat-tempest-plugin to store that scenario >test (since it will wrap as a Heat template) or use vitrage-tempest-plugin >(since most of the test scenario are actually already there). Sounds good. >Not sure what will happen if we create a new tempest plugin for >self-healing and no manager for it. Sorry for my ignorance - do you mean manager objects here[0], or some other kind of manager? [0] https://docs.openstack.org/tempest/latest/write_tests.html#manager-objects >We still got some uncertainty to clear during working on it, but the big >picture looks like all will works(if we doing all well on above tasks). >Please provide your feedback or question if you have any. >We do needs feedbacks and reviews on patches or any works. >If you're interested in this, please join us (we need users/ops/devs!). > >[1] https://github.com/openstack/heat-templates/tree/master/hot/autohealing >[2] >https://github.com/openstack/self-healing-sig/blob/master/specs/vitrage-mistral-integration.rst >[3] https://etherpad.openstack.org/p/self-healing-with-vitrage-mistral-heat >[4] https://storyboard.openstack.org/#!/story/2002684 >[5] https://review.openstack.org/#/c/578786 Thanks a lot for creating the story in Storyboard - this is really helpful :-) I'll try to help with reviews etc. and maybe even testing if I can find some extra time for it over the next few months. I can also try to help "market" this initiative in the community by promoting awareness and trying to get operators more involved. Thanks again! Excited about the direction this is heading in :-) Adam From chris.friesen at windriver.com Mon Aug 13 15:56:03 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 13 Aug 2018 09:56:03 -0600 Subject: [openstack-dev] [nova] Do we still want to lowercase metadata keys? 
In-Reply-To: References: <0af69e26-e73e-9257-1ca0-e2c43cde9a5d@gmail.com> Message-ID: <5B71AA13.8060008@windriver.com>

On 08/13/2018 08:26 AM, Jay Pipes wrote:
> On 08/13/2018 10:10 AM, Matthew Booth wrote:
>> I suspect I've misunderstood, but I was arguing this is an anti-goal.
>> There's no reason to do this if the db is working correctly, and it
>> would violate the principle of least surprise in dbs with legacy
>> datasets (being all current dbs). These values have always been mixed
>> case, lets just leave them be and fix the db.
>
> Do you want case-insensitive keys or do you not want case-insensitive keys?
>
> It seems to me that people complain that MySQL is case-insensitive by default
> but actually *like* the concept that a metadata key of "abc" should be "equal
> to" a metadata key of "ABC".

How do we behave on PostgreSQL? (I realize it's unsupported, but it still has users.) It's case-sensitive by default, do we override that?

Personally, I've worked on case-sensitive systems long enough that I'd actually be surprised if "abc" matched "ABC". :)

Chris

From cboylan at sapwetik.org Mon Aug 13 15:57:00 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 13 Aug 2018 08:57:00 -0700 Subject: [openstack-dev] [nova] CI job running functional against a mysql DB In-Reply-To: References: Message-ID: <1534175820.3227520.1472605512.3763A87A@webmail.messagingengine.com>

On Mon, Aug 13, 2018, at 1:50 AM, Matthew Booth wrote:
> I was reviewing https://review.openstack.org/#/c/504885/ . The change
> looks good to me and I believe the test included exercises the root
> cause of the problem. However, I'd like to be certain that the test
> has been executed against MySQL rather than, eg, SQLite.
>
> Zuul has voted +1 on the change. Can anybody tell me if any of those
> jobs ran the included functional test against a MySQL DB?

Both functional jobs configured a MySQL and PostgreSQL database for use by the test suite [0][1].
Looking at Nova's tests, the migration tests (nova/tests/functional/db/api/test_migrations.py and nova/tests/unit/db/test_migrations.py) use the oslo.db ModelsMigrationsSync class which should use these real databases. I'm not finding evidence that any other tests classes will use the real databases. [0] http://logs.openstack.org/85/504885/9/check/nova-tox-functional/fa3327b/job-output.txt.gz#_2018-08-13_10_32_09_943951 [1] http://logs.openstack.org/85/504885/9/check/nova-tox-functional-py35/1f04657/job-output.txt.gz#_2018-08-13_10_31_00_289802 Hope this helps, Clark From jaypipes at gmail.com Mon Aug 13 16:10:22 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 13 Aug 2018 12:10:22 -0400 Subject: [openstack-dev] [nova] Do we still want to lowercase metadata keys? In-Reply-To: <5B71AA13.8060008@windriver.com> References: <0af69e26-e73e-9257-1ca0-e2c43cde9a5d@gmail.com> <5B71AA13.8060008@windriver.com> Message-ID: <25491ca8-4b5a-ccb9-173e-1b7f391e07ea@gmail.com> On 08/13/2018 11:56 AM, Chris Friesen wrote: > On 08/13/2018 08:26 AM, Jay Pipes wrote: >> On 08/13/2018 10:10 AM, Matthew Booth wrote: > >>> I suspect I've misunderstood, but I was arguing this is an anti-goal. >>> There's no reason to do this if the db is working correctly, and it >>> would violate the principal of least surprise in dbs with legacy >>> datasets (being all current dbs). These values have always been mixed >>> case, lets just leave them be and fix the db. >> >> Do you want case-insensitive keys or do you not want case-insensitive >> keys? >> >> It seems to me that people complain that MySQL is case-insensitive by >> default >> but actually *like* the concept that a metadata key of "abc" should be >> "equal >> to" a metadata key of "ABC". > > How do we behave on PostgreSQL?  (I realize it's unsupported, but it > still has users.)  It's case-sensitive by default, do we override that? 
> > Personally, I've worked on case-sensitive systems long enough that I'd > actually be surprised if "abc" matched "ABC". :) You have worked with case-insensitive systems for as long or longer, maybe without realizing it: All URLs are case-insensitive. If a user types in http://google.com they go to the same place as http://Google.com because DNS is case-insensitive [1] and has been since its beginning. Users -- of HTTP APIs in particular -- have tended to become accustomed to case-insensitivity in their HTTP API calls. This case is no different, IMHO. Best, -jay [1] https://tools.ietf.org/html/rfc4343#section-4 From mbooth at redhat.com Mon Aug 13 16:16:19 2018 From: mbooth at redhat.com (Matthew Booth) Date: Mon, 13 Aug 2018 17:16:19 +0100 Subject: [openstack-dev] [nova] Do we still want to lowercase metadata keys? In-Reply-To: <5B71AA13.8060008@windriver.com> References: <0af69e26-e73e-9257-1ca0-e2c43cde9a5d@gmail.com> <5B71AA13.8060008@windriver.com> Message-ID: On Mon, 13 Aug 2018 at 16:56, Chris Friesen wrote: > > On 08/13/2018 08:26 AM, Jay Pipes wrote: > > On 08/13/2018 10:10 AM, Matthew Booth wrote: > > >> I suspect I've misunderstood, but I was arguing this is an anti-goal. > >> There's no reason to do this if the db is working correctly, and it > >> would violate the principal of least surprise in dbs with legacy > >> datasets (being all current dbs). These values have always been mixed > >> case, lets just leave them be and fix the db. > > > > Do you want case-insensitive keys or do you not want case-insensitive keys? > > > > It seems to me that people complain that MySQL is case-insensitive by default > > but actually *like* the concept that a metadata key of "abc" should be "equal > > to" a metadata key of "ABC". > > How do we behave on PostgreSQL? (I realize it's unsupported, but it still has > users.) It's case-sensitive by default, do we override that? 
> > Personally, I've worked on case-sensitive systems long enough that I'd actually > be surprised if "abc" matched "ABC". :) To the best of my knowledge, the hypothetical PostgreSQL db works exactly how you, me, and pretty much any developer would expect :) Honestly, though, SQLite is probably more interesting as it's at least used for testing. SQLite's default collation is binary, which is obviously case sensitive as you'd expect. As a developer I'm heavily biased in favour of implementing the simplest fix with the simplest and most obvious behaviour, which is to change the default collation to do what everybody expected it did in the first place (which is what Rajesh's patch does). As Jay points out, though, I do concede that those pesky users may be impacted by fixing this bug if they've come to rely on accidental buggy behaviour. The question really comes down to how we can determine what the user impact is for each solution. And here I'm talking about all the various forms of metadata, assuming that whatever solution we picked we'd apply to all. So: - What API calls allow a user to query a thing by metadata key? I believe these API calls would be the only things affected by fixing the collation of metadata keys. If we know what they are we can ask what the impact of changing the behaviour would be. Setting metadata keys isn't subject to regression, as this was previously broken. Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From pkovar at redhat.com Mon Aug 13 16:40:55 2018 From: pkovar at redhat.com (Petr Kovar) Date: Mon, 13 Aug 2018 18:40:55 +0200 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 Message-ID: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> Hi all, This is a request for an FFE to release openstackdocstheme 1.21.2. This mostly fixes usability issues in rendering docs content, so we would like to update the theme across all project team docs on docs.o.o. 
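To make the SQLite collation point from the metadata-keys thread above concrete, the stdlib sqlite3 module can demonstrate the difference between the default BINARY collation (case-sensitive matching) and a NOCASE collation (case-insensitive matching, roughly what MySQL does by default). This is only a sketch; the table and column names are invented for illustration and are not taken from Nova's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Default (BINARY) collation: comparisons are case-sensitive.
cur.execute("CREATE TABLE metadata_binary (key TEXT)")
cur.execute("INSERT INTO metadata_binary VALUES ('abc')")
cur.execute("SELECT count(*) FROM metadata_binary WHERE key = 'ABC'")
print(cur.fetchone()[0])  # 0 -- 'ABC' does not match 'abc'

# With NOCASE collation on the column, matching is case-insensitive,
# mimicking MySQL's default behaviour.
cur.execute("CREATE TABLE metadata_nocase (key TEXT COLLATE NOCASE)")
cur.execute("INSERT INTO metadata_nocase VALUES ('abc')")
cur.execute("SELECT count(*) FROM metadata_nocase WHERE key = 'ABC'")
print(cur.fetchone()[0])  # 1 -- 'ABC' matches 'abc'
```

As I read the thread, MySQL's default today behaves like the NOCASE case, and the proposed collation fix would move metadata key matching toward the binary behaviour.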
See also the update constraint request at https://review.openstack.org/#/c/591020/. Thanks, pk From Kevin.Fox at pnnl.gov Mon Aug 13 16:50:40 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Mon, 13 Aug 2018 16:50:40 +0000 Subject: [openstack-dev] [PTL][TC] Stein Cycle Goals In-Reply-To: <20180813152247.GA25512@sm-workstation> References: <20180813152247.GA25512@sm-workstation> Message-ID: <1A3C52DFCD06494D8528644858247BF01C170074@EX10MBOX03.pnnl.gov> Since the upgrade checking has not been written yet, now would be a good time to unify them, so you upgrade check your openstack upgrade, not status check nova, status check neutron, status check glance, status check cinder ..... ad nauseam. Thanks, Kevin ________________________________________ From: Sean McGinnis [sean.mcginnis at gmx.com] Sent: Monday, August 13, 2018 8:22 AM To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [PTL][TC] Stein Cycle Goals We now have two cycle goals accepted for the Stein cycle. I think both are very beneficial goals to work towards, so personally I am very happy with where we landed on this. The two goals, with links to their full descriptions and nitty gritty details, can be found here: https://governance.openstack.org/tc/goals/stein/index.html Goals ===== Here are some high level details on the goals. Run under Python 3 by default (python3-first) --------------------------------------------- In Pike we had a goal for all projects to support Python 3.5. As a continuation of that effort, and in preparation for the EOL of Python 2, we now want to look at all of the ancillary things around projects and make sure that we are using Python 3 everywhere except those jobs explicitly intended for testing Python 2 support. This means all docs, linters, and other tools and utility jobs we use should be run using Python 3. 
https://governance.openstack.org/tc/goals/stein/python3-first.html Thanks to Doug Hellmann, Nguyễn Trí Hải, Ma Lei, and Huang Zhiping for championing this goal. Support Pre Upgrade Checks (upgrade-checkers) --------------------------------------------- One of the hot topics we've been discussing for some time at Forum and PTG events has been making upgrades better. To that end, we want to add tooling for each service to provide an "upgrade checker" tool that can check for various known issues so we can either give operators some assurance that they are ready to upgrade, or to let them know if some step was overlooked that will need to be done before attempting the upgrade. This goal follows the Nova `nova-status upgrade check` command precedent to make it a consistent capability for each service. The checks should look for things like missing or changed configuration options, incompatible object states, or other conditions that could lead to failures upgrading that project. More details can be found in the goal: https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html Thanks to Matt Riedemann for championing this goal. Schedule ======== We hope to have all projects complete this goal by the week of March 4, 2019: https://releases.openstack.org/stein/schedule.html This is the same week as the Stein-3 milestone, as well as Feature Freeze and client lib freeze. Future Goals ============ We welcome any ideas for future cycle goals. Ideally these should be things that can actually be accomplished within one development cycle and would have a positive, and hopefully visible, impact for users and operators. Feel free to pitch any ideas here on the mailing list or drop by the #openstack-tc channel at any point. Thanks! 
-- Sean (smcginnis) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aj at suse.com Mon Aug 13 17:16:50 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 13 Aug 2018 19:16:50 +0200 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> Message-ID: <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> On 2018-08-13 18:40, Petr Kovar wrote: > Hi all, > > This is a request for an FFE to release openstackdocstheme 1.21.2. > This mostly fixes usability issues in rendering docs content, so we would > like to update the theme across all project team docs on docs.o.o. I suggest to release quickly a 1.21.3 with https://review.openstack.org/#/c/585517/ - and use that one instead. > See also the update constraint request at > https://review.openstack.org/#/c/591020/. Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From jaypipes at gmail.com Mon Aug 13 17:21:56 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 13 Aug 2018 13:21:56 -0400 Subject: [openstack-dev] [oslo][tooz][etcd] need help debugging tooz test failure In-Reply-To: <1534104610-sup-3898@lrrr.local> References: <1534104610-sup-3898@lrrr.local> Message-ID: On 08/12/2018 04:11 PM, Doug Hellmann wrote: > The tooz tests on master and stable/rocky are failing with an error: > > UnicodeDecodeError: 'utf8' codec can't decode byte 0xc4 in position 0: > invalid continuation byte > > This is unrelated to the change, which is simply importing test job > settings or updating the .gitreview file. I need someone familiar with > the library to help debug the issue. > > Can we get a volunteer? Looking into it. Seems to be related to this upstream patch to python-etcd3gw: https://github.com/dims/etcd3-gateway/commit/224f40972b42c4ff16234c0e78ea765e3fe1af95 Best, -jay From sean.mcginnis at gmx.com Mon Aug 13 18:00:45 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 13 Aug 2018 13:00:45 -0500 Subject: [openstack-dev] [PTL][TC] Stein Cycle Goals In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C170074@EX10MBOX03.pnnl.gov> References: <20180813152247.GA25512@sm-workstation> <1A3C52DFCD06494D8528644858247BF01C170074@EX10MBOX03.pnnl.gov> Message-ID: <20180813180044.GA3691@sm-workstation> On Mon, Aug 13, 2018 at 04:50:40PM +0000, Fox, Kevin M wrote: > Since the upgrade checking has not been written yet, now would be a good time to unify them, so you upgrade check your openstack upgrade, not status check nova, status check neutron, status check glance, status check cinder ..... ad nauseam. > > Thanks, > Kevin > ________________________________________ That would be a good outcome of this. 
I think before we can have an overall upgrade check, each service in use needs to have that mechanism in place to perform their specific checks. As long as the pattern is followed as $project-status check upgrade, then it should be very feasible to write a higher level tool that iterates through all services and runs the checks. From aj at suse.com Mon Aug 13 18:28:23 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 13 Aug 2018 20:28:23 +0200 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> Message-ID: On 2018-08-13 19:16, Andreas Jaeger wrote: > On 2018-08-13 18:40, Petr Kovar wrote: >> Hi all, >> >> This is a request for an FFE to release openstackdocstheme 1.21.2. > > >> This mostly fixes usability issues in rendering docs content, so we would >> like to update the theme across all project team docs on docs.o.o. > > I suggest to release quickly a 1.21.3 with > https://review.openstack.org/#/c/585517/ - and use that one instead. Release request: https://review.openstack.org/591485 Andreas > >> See also the update constraint request at >> https://review.openstack.org/#/c/591020/. > > Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From juliaashleykreger at gmail.com Mon Aug 13 18:40:38 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 13 Aug 2018 14:40:38 -0400 Subject: [openstack-dev] [ironic] ironic-staging-drivers: what to do? Message-ID: Greetings fellow ironicans! As many of you might know an openstack/ironic-staging-drivers[1] repository exists. 
What most might not know is that it was intentionally created outside of ironic's governance[2]. At the time it was created ironic was moving towards removing drivers that did not meet our third-party CI requirement[3] to be in-tree. The repository was an attempt to give a home to what some might find useful or where third party CI is impractical or cost-prohibitive and thus could not be officially part of Ironic the service. There was hope that drivers could land in ironic-staging-drivers and possibly graduate to being moved in-tree with third-party CI. As our community has evolved we've not stopped and revisited the questions. With our most recent release over, I believe we need to ask ourselves if we should consider moving ironic-staging-drivers into our governance. Over the last couple of releases several contributors have found themselves trying to seek out two available reviewers to merge even trivial fixes[4]. Due to the team being so small this was no easy task. As a result, I'm wondering why not move the repository into governance, grant ironic-core review privileges upon the repository, and maintain the purpose and meaning of the repository. This would also result in the repository's release becoming managed via the release management process which is a plus. We could then propose an actual graduation process and help alleviate some of the issues where driver code is iterated upon for long periods of time before landing. At the same time I can see at least one issue which is if we were to do that, then we would also need to manage removal through the same path. I know there are concerns over responsibility in terms of code ownership and quality, but I feel like we already hit such issues[5], like those encountered when Dmitry removed classic drivers[6] from the repository and also encountered issues just prior to the latest release[7][8]. 
This topic has come up in passing at PTGs and most recently on IRC[9], and I think we ought to discuss it during our next weekly meeting[10]. I've gone ahead and added an item to the agenda, but we can also discuss via email. -Julia [1]: http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/projects.yaml#n4571 [2]: http://git.openstack.org/cgit/openstack/ironic-staging-drivers/tree/README.rst#n16 [3]: https://specs.openstack.org/openstack/ironic-specs/specs/approved/third-party-ci.html [4]: https://review.openstack.org/#/c/548943/ [5]: https://review.openstack.org/#/c/541916/ [6]: https://review.openstack.org/567902 [7]: https://review.openstack.org/590352 [8]: https://review.openstack.org/590401 [9]: http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2018-08-09.log.html#t2018-08-09T11:55:27 [10]: https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting From chris.friesen at windriver.com Mon Aug 13 19:10:37 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 13 Aug 2018 13:10:37 -0600 Subject: [openstack-dev] [Openstack-operators] [nova] StarlingX diff analysis In-Reply-To: <45bd7236-b9f8-026d-620b-7356d4effa49@gmail.com> References: <45bd7236-b9f8-026d-620b-7356d4effa49@gmail.com> Message-ID: <5B71D7AD.9060508@windriver.com> On 08/07/2018 07:29 AM, Matt Riedemann wrote: > On 8/7/2018 1:10 AM, Flint WALRUS wrote: >> I didn’t had time to check StarlingX code quality, how did you feel it while >> you were doing your analysis? > > I didn't dig into the test diffs themselves, but it was my impression that from > what I was poking around in the local git repo, there were several changes which > didn't have any test coverage. Full disclosure, I'm on the StarlingX team. Certainly some changes didn't have unit/functional test coverage, generally due to the perceived cost of writing useful tests. (And when you don't have a lot of experience writing tests this becomes a self-fulfilling prophecy.) 
On the other hand, we had fairly robust periodic integration testing including multi-node testing with physical hardware. > For the really big full stack changes (L3 CAT, CPU scaling and shared/pinned > CPUs on same host), toward the end I just started glossing over a lot of that > because it's so much code in so many places, so I can't really speak very well > to how it was written or how well it is tested (maybe WindRiver had a more > robust CI system running integration tests, I don't know). We didn't have a per-commit CI system, though that's starting to change. We do have a QA team running regression and targeted tests. > There were also some things which would have been caught in code review > upstream. For example, they ignore the "force" parameter for live migration so > that live migration requests always go through the scheduler. However, the > "force" parameter is only on newer microversions. Before that, if you specified > a host at all it would bypass the scheduler, but the change didn't take that > into account, so they still have gaps in some of the things they were trying to > essentially disable in the API. Agreed, that's not up to upstream quality. In this case we made some simplifying assumptions because our customers were expected to use the matching modified clients and to use the "current" microversion rather than explicitly specifying older microversions. 
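The kind of behavioural gap described above can be sketched as a toy decision function. This is entirely illustrative: the microversion boundary for the `force` flag and all names here are assumptions for the sake of the example, not taken from the Nova source:

```python
def bypasses_scheduler(host, force, microversion, force_added_in=(2, 30)):
    """Toy model of live-migration target-host handling.

    The (2, 30) boundary is an assumed value for when the `force`
    parameter was introduced, used only to illustrate the gap.
    """
    if host is None:
        return False          # no target host: the scheduler always picks
    if microversion >= force_added_in:
        return bool(force)    # newer API: bypass only on an explicit force
    # Older microversions: specifying any host at all meant bypassing the
    # scheduler, so a change that only ignores `force` misses this path.
    return True

print(bypasses_scheduler("compute-1", False, (2, 25)))  # True: the old-API gap
print(bypasses_scheduler("compute-1", False, (2, 40)))  # False
```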
Chris From gfidente at redhat.com Mon Aug 13 19:47:21 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Mon, 13 Aug 2018 21:47:21 +0200 Subject: [openstack-dev] [tripleo] Edge clouds and controlplane updates In-Reply-To: References: Message-ID: <41de3af6-5f7e-94e5-cfe3-a9090fb8218f@redhat.com> Hello, I'd like to get some feedback regarding the remaining work for the split controlplane spec implementation [1] Specifically, while for some services like nova-compute it is not necessary to update the controlplane nodes after an edge cloud is deployed, for other services, like cinder (or glance, probably others), it is necessary to do an update of the config files on the controlplane when a new edge cloud is deployed. In fact for services like cinder or glance, which are hosted in the controlplane, we need to pull data from the edge clouds (for example the newly deployed ceph cluster keyrings and fsid) to configure cinder (or glance) with a new backend. It looks like this demands for some architectural changes to solve the following two: - how do we trigger/drive updates of the controlplane nodes after the edge cloud is deployed? - how do we scale the controlplane parameters to accomodate for N backends of the same type? A very rough approach to the latter could be to use jinja to scale up the CephClient service so that we can have multiple copies of it in the controlplane. Each instance of CephClient should provide the ceph config file and keyring necessary for each cinder (or glance) backend. Also note that Ceph is only a particular example but we'd need a similar workflow for any backend type. The etherpad for the PTG session [2] touches this, but it'd be good to start this conversation before then. 1. https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html 2. 
https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane -- Giulio Fidente GPG KEY: 08D733BA From davanum at gmail.com Mon Aug 13 19:50:54 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Tue, 14 Aug 2018 03:50:54 +0800 Subject: [openstack-dev] [oslo][tooz][etcd] need help debugging tooz test failure In-Reply-To: References: <1534104610-sup-3898@lrrr.local> Message-ID: Thanks Jay. Pushed out a 0.2.4 with a revert to the offending PR On Tue, Aug 14, 2018 at 1:22 AM Jay Pipes wrote: > On 08/12/2018 04:11 PM, Doug Hellmann wrote: > > The tooz tests on master and stable/rocky are failing with an error: > > > > UnicodeDecodeError: 'utf8' codec can't decode byte 0xc4 in position > 0: > > invalid continuation byte > > > > This is unrelated to the change, which is simply importing test job > > settings or updating the .gitreview file. I need someone familiar with > > the library to help debug the issue. > > > > Can we get a volunteer? > > Looking into it. Seems to be related to this upstream patch to > python-etcd3gw: > > > https://github.com/dims/etcd3-gateway/commit/224f40972b42c4ff16234c0e78ea765e3fe1af95 > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Mon Aug 13 19:52:46 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 13 Aug 2018 15:52:46 -0400 Subject: [openstack-dev] [oslo][tooz][etcd] need help debugging tooz test failure In-Reply-To: References: <1534104610-sup-3898@lrrr.local> Message-ID: <1534189935-sup-4609@lrrr.local> Excerpts from Jay Pipes's message of 2018-08-13 13:21:56 -0400: > On 08/12/2018 04:11 PM, Doug Hellmann wrote: > > The tooz tests on master and stable/rocky are failing with an error: > > > > UnicodeDecodeError: 'utf8' codec can't decode byte 0xc4 in position 0: > > invalid continuation byte > > > > This is unrelated to the change, which is simply importing test job > > settings or updating the .gitreview file. I need someone familiar with > > the library to help debug the issue. > > > > Can we get a volunteer? > > Looking into it. Seems to be related to this upstream patch to > python-etcd3gw: > > https://github.com/dims/etcd3-gateway/commit/224f40972b42c4ff16234c0e78ea765e3fe1af95 > > Best, > -jay > Thanks, Jay! I see that Dims says he pushed a release. Is that something we need to update in the constraints list, then? Doug From doug at doughellmann.com Mon Aug 13 20:00:55 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 13 Aug 2018 16:00:55 -0400 Subject: [openstack-dev] [oslo][tooz][etcd] need help debugging tooz test failure In-Reply-To: References: <1534104610-sup-3898@lrrr.local> Message-ID: <1534190389-sup-6383@lrrr.local> Excerpts from Davanum Srinivas (dims)'s message of 2018-08-14 03:50:54 +0800: > Thanks Jay. 
Pushed out a 0.2.4 with a revert to the offending PR > > On Tue, Aug 14, 2018 at 1:22 AM Jay Pipes wrote: > > > On 08/12/2018 04:11 PM, Doug Hellmann wrote: > > > The tooz tests on master and stable/rocky are failing with an error: > > > > > > UnicodeDecodeError: 'utf8' codec can't decode byte 0xc4 in position > > 0: > > > invalid continuation byte > > > > > > This is unrelated to the change, which is simply importing test job > > > settings or updating the .gitreview file. I need someone familiar with > > > the library to help debug the issue. > > > > > > Can we get a volunteer? > > > > Looking into it. Seems to be related to this upstream patch to > > python-etcd3gw: > > > > > > https://github.com/dims/etcd3-gateway/commit/224f40972b42c4ff16234c0e78ea765e3fe1af95 > > > > Best, > > -jay > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > I filed the constraint update: https://review.openstack.org/591498 I also set the tooz patches to depend on that as a test: https://review.openstack.org/588720 Doug From jaypipes at gmail.com Mon Aug 13 20:03:20 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 13 Aug 2018 16:03:20 -0400 Subject: [openstack-dev] [oslo][tooz][etcd] need help debugging tooz test failure In-Reply-To: <1534189935-sup-4609@lrrr.local> References: <1534104610-sup-3898@lrrr.local> <1534189935-sup-4609@lrrr.local> Message-ID: <25b902f6-e167-cf19-8a6f-9502851cc43b@gmail.com> On 08/13/2018 03:52 PM, Doug Hellmann wrote: > Excerpts from Jay Pipes's message of 2018-08-13 13:21:56 -0400: >> On 08/12/2018 04:11 PM, Doug Hellmann wrote: >>> The tooz tests on master and stable/rocky are failing with an error: >>> >>> UnicodeDecodeError: 'utf8' codec can't decode byte 0xc4 in position 0: >>> invalid 
continuation byte >>> >>> This is unrelated to the change, which is simply importing test job >>> settings or updating the .gitreview file. I need someone familiar with >>> the library to help debug the issue. >>> >>> Can we get a volunteer? >> >> Looking into it. Seems to be related to this upstream patch to >> python-etcd3gw: >> >> https://github.com/dims/etcd3-gateway/commit/224f40972b42c4ff16234c0e78ea765e3fe1af95 >> >> Best, >> -jay >> > > Thanks, Jay! > > I see that Dims says he pushed a release. Is that something we need to > update in the constraints list, then? Yeah, likely. We'll need to blacklist the 0.2.3 release of etcd3-gateway library in the openstack/tooz requirements file. I think? :) -jay From davanum at gmail.com Mon Aug 13 20:05:23 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Tue, 14 Aug 2018 04:05:23 +0800 Subject: [openstack-dev] [oslo][tooz][etcd] need help debugging tooz test failure In-Reply-To: <25b902f6-e167-cf19-8a6f-9502851cc43b@gmail.com> References: <1534104610-sup-3898@lrrr.local> <1534189935-sup-4609@lrrr.local> <25b902f6-e167-cf19-8a6f-9502851cc43b@gmail.com> Message-ID: Jay, Doug, We need to blacklist 0.2.2 and 0.2.3 (looking at changelog in) https://github.com/dims/etcd3-gateway/releases Thanks! -- Dims On Tue, Aug 14, 2018 at 4:03 AM Jay Pipes wrote: > On 08/13/2018 03:52 PM, Doug Hellmann wrote: > > Excerpts from Jay Pipes's message of 2018-08-13 13:21:56 -0400: > >> On 08/12/2018 04:11 PM, Doug Hellmann wrote: > >>> The tooz tests on master and stable/rocky are failing with an error: > >>> > >>> UnicodeDecodeError: 'utf8' codec can't decode byte 0xc4 in > position 0: > >>> invalid continuation byte > >>> > >>> This is unrelated to the change, which is simply importing test job > >>> settings or updating the .gitreview file. I need someone familiar with > >>> the library to help debug the issue. > >>> > >>> Can we get a volunteer? > >> > >> Looking into it. 
Seems to be related to this upstream patch to > >> python-etcd3gw: > >> > >> > https://github.com/dims/etcd3-gateway/commit/224f40972b42c4ff16234c0e78ea765e3fe1af95 > >> > >> Best, > >> -jay > >> > > > > Thanks, Jay! > > > > I see that Dims says he pushed a release. Is that something we need to > > update in the constraints list, then? > > Yeah, likely. We'll need to blacklist the 0.2.3 release of etcd3-gateway > library in the openstack/tooz requirements file. > > I think? :) > > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Aug 13 20:38:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 13 Aug 2018 16:38:09 -0400 Subject: [openstack-dev] [oslo][tooz][etcd] need help debugging tooz test failure In-Reply-To: References: <1534104610-sup-3898@lrrr.local> <1534189935-sup-4609@lrrr.local> <25b902f6-e167-cf19-8a6f-9502851cc43b@gmail.com> Message-ID: <1534192673-sup-3024@lrrr.local> Excerpts from Davanum Srinivas (dims)'s message of 2018-08-14 04:05:23 +0800: > Jay, Doug, > > We need to blacklist 0.2.2 and 0.2.3 (looking at changelog in) > https://github.com/dims/etcd3-gateway/releases Done in https://review.openstack.org/591517 Thanks, Doug From no-reply at openstack.org Mon Aug 13 20:42:26 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Mon, 13 Aug 2018 20:42:26 -0000 Subject: [openstack-dev] barbican 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for barbican for the end of the Rocky cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/barbican/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/barbican/log/?h=stable/rocky Release notes for barbican can be found at: https://docs.openstack.org/releasenotes/barbican/ From openstack at nemebean.com Mon Aug 13 21:06:52 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 13 Aug 2018 16:06:52 -0500 Subject: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ... In-Reply-To: <9b0850aa-2c4d-57c6-5a65-746c28607122@gmail.com> References: <9b0850aa-2c4d-57c6-5a65-746c28607122@gmail.com> Message-ID: On 08/08/2018 12:04 PM, Jay S Bryant wrote: > Team, > > A reminder that we have our weekly Cinder meeting on Wednesdays at 16:00 > UTC.  I bring this up as I can no longer send the courtesy pings without > being kicked from IRC.  So, if you wish to join the meeting please add a > reminder to your calendar of choice. Do you have any idea why you're being kicked? I'm wondering how to avoid getting into this situation with the Oslo pings. From prometheanfire at gentoo.org Mon Aug 13 21:17:30 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 13 Aug 2018 16:17:30 -0500 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> Message-ID: <20180813211730.cze4vpknwncpqg3b@gentoo.org> On 18-08-13 20:28:23, Andreas Jaeger wrote: > On 2018-08-13 19:16, Andreas Jaeger wrote: > > On 2018-08-13 18:40, Petr Kovar wrote: > > > Hi all, > > > > > > This is a request for an FFE to release openstackdocstheme 1.21.2. 
> > > > > > > This mostly fixes usability issues in rendering docs content, so we would > > > like to update the theme across all project team docs on docs.o.o. > > > > I suggest to release quickly a 1.21.3 with > > https://review.openstack.org/#/c/585517/ - and use that one instead. > > Release request: > https://review.openstack.org/591485 > Would this be a upper-constraint only bump? If so reqs acks it -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From openstack at fried.cc Mon Aug 13 21:25:43 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 13 Aug 2018 16:25:43 -0500 Subject: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ... In-Reply-To: References: <9b0850aa-2c4d-57c6-5a65-746c28607122@gmail.com> Message-ID: Are you talking about the nastygram from "Sigyn" saying: "Your actions in # tripped automated anti-spam measures (nicks/hilight spam), but were ignored based on your time in channel; stop now, or automated action will still be taken. If you have any questions, please don't hesitate to contact a member of staff" I'm getting this too, and (despite the implication to the contrary) it sometimes cuts off my messages in an unpredictable spot. I'm contacting "a member of staff" to see if there's any way to get "whitelisted" for big messages. In the meantime, the only solution I'm aware of is to chop your pasteypaste up into smaller chunks, and wait a couple seconds between pastes. -efried On 08/13/2018 04:06 PM, Ben Nemec wrote: > > > On 08/08/2018 12:04 PM, Jay S Bryant wrote: >> Team, >> >> A reminder that we have our weekly Cinder meeting on Wednesdays at >> 16:00 UTC.  I bring this up as I can no longer send the courtesy pings >> without being kicked from IRC.  So, if you wish to join the meeting >> please add a reminder to your calendar of choice. 
> > Do you have any idea why you're being kicked?  I'm wondering how to > avoid getting into this situation with the Oslo pings. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amy at demarco.com Mon Aug 13 21:29:27 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 13 Aug 2018 16:29:27 -0500 Subject: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ... In-Reply-To: References: <9b0850aa-2c4d-57c6-5a65-746c28607122@gmail.com> Message-ID: I know we did a ping last week in #openstack-ansible for our meeting no issue. I wonder if it's a length of names thing or a channel setting. Amy (spotz) On Mon, Aug 13, 2018 at 4:25 PM, Eric Fried wrote: > Are you talking about the nastygram from "Sigyn" saying: > > "Your actions in # tripped automated anti-spam measures > (nicks/hilight spam), but were ignored based on your time in channel; > stop now, or automated action will still be taken. If you have any > questions, please don't hesitate to contact a member of staff" > > I'm getting this too, and (despite the implication to the contrary) it > sometimes cuts off my messages in an unpredictable spot. > > I'm contacting "a member of staff" to see if there's any way to get > "whitelisted" for big messages. In the meantime, the only solution I'm > aware of is to chop your pasteypaste up into smaller chunks, and wait a > couple seconds between pastes. > > -efried > > On 08/13/2018 04:06 PM, Ben Nemec wrote: > > > > > > On 08/08/2018 12:04 PM, Jay S Bryant wrote: > >> Team, > >> > >> A reminder that we have our weekly Cinder meeting on Wednesdays at > >> 16:00 UTC. I bring this up as I can no longer send the courtesy pings > >> without being kicked from IRC. 
So, if you wish to join the meeting > >> please add a reminder to your calendar of choice. > > > > Do you have any idea why you're being kicked? I'm wondering how to > > avoid getting into this situation with the Oslo pings. > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Mon Aug 13 21:42:08 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 13 Aug 2018 14:42:08 -0700 Subject: [openstack-dev] [nova] about live-resize down the instance In-Reply-To: <5B71A7E1.1090709@windriver.com> References: <5B71A7E1.1090709@windriver.com> Message-ID: <2ba474cf-5439-b293-be2b-e72d5325e07d@gmail.com> On Mon, 13 Aug 2018 09:46:41 -0600, Chris Friesen wrote: > On 08/13/2018 02:07 AM, Rambo wrote: >> Hi,all >> >> I find it is important that live-resize the instance in production >> environment,especially live downsize the disk.And we have talked it many >> years.But I don't know why the bp[1] didn't approved.Can you tell me more about >> this ?Thank you very much. >> >> [1]https://review.openstack.org/#/c/141219/ > > > It's been reviewed a number of times...I thought it was going to get approved > for Rocky, but I think it didn't quite make it in...you'd have to ask the nova > cores why not. > > It should be noted though that the above live-resize spec explicitly did not > cover resizing smaller, only larger. 
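melanie's reply below spells out how resize-down support differs per virt driver. As a rough paraphrase of the libvirt-driver behaviour she links to (invented names and plain dicts in place of Nova flavor objects — this is not Nova's actual code):

```python
class ResizeError(Exception):
    pass

def check_libvirt_resize_down(old_flavor, new_flavor, booted_from_volume):
    """Rough paraphrase of the libvirt-driver rules described below:
    ephemeral disk may never shrink, and the root disk may only shrink
    when the instance is boot-from-volume (root_gb is unused then).

    Flavors are plain dicts here; real Nova objects differ.
    """
    if new_flavor["ephemeral_gb"] < old_flavor["ephemeral_gb"]:
        raise ResizeError("Resize of ephemeral disk down is not allowed")
    root_shrinks = new_flavor["root_gb"] < old_flavor["root_gb"]
    if root_shrinks and not booted_from_volume:
        raise ResizeError("Resize of root disk down is not allowed")
```

Each driver encodes a slightly different variant of this check, which is why allowing only resize-up would behave consistently everywhere.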
From what I find in the PTG notes [1] and the spec, it looks like this didn't go forward for lack of general interest. We have a lot of work to review every cycle and we generally focus on functionality that impact operators the most and look for +1s on specs from operators who are interested in the features. From what I can tell from the comments/votes, there isn't much/any operator interest about live-resize. As has been mentioned, resize down is hypervisor-specific whether or not it's supported. For example, in the libvirt driver, resize down of ephemeral disk is not allowed at all and resize down of root disk is only allowed if the instance is boot-from-volume [2]. The xenapi driver disallows resize down of ephemeral disk [3], the vmware driver disallows resize down of root disk [4], the hyperv driver disallows resize down of root disk [5]. So, allowing only live-resize up would be a way to behave consistently across virt drivers. -melanie [1] https://etherpad.openstack.org/p/nova-ptg-rocky L690 [2] https://github.com/openstack/nova/blob/afe4512bf66c89a061b1a7ccd3e7ac8e3b1b284d/nova/virt/libvirt/driver.py#L8243-L8246 [3] https://github.com/openstack/nova/blob/afe4512bf66c89a061b1a7ccd3e7ac8e3b1b284d/nova/virt/xenapi/vmops.py#L1357-L1359 [4] https://github.com/openstack/nova/blob/afe4512bf66c89a061b1a7ccd3e7ac8e3b1b284d/nova/virt/vmwareapi/vmops.py#L1421-L1427 [5] https://github.com/openstack/nova/blob/afe4512bf66c89a061b1a7ccd3e7ac8e3b1b284d/nova/virt/hyperv/migrationops.py#L107-L114 From pabelanger at redhat.com Mon Aug 13 22:14:57 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 13 Aug 2018 18:14:57 -0400 Subject: [openstack-dev] [barbican][ara][helm][tempest] Removal of fedora-27 nodes In-Reply-To: <20180813135644.GA29768@localhost.localdomain> References: <20180803000146.GA23278@localhost.localdomain> <20180813135644.GA29768@localhost.localdomain> Message-ID: <20180813221211.GA9417@localhost.localdomain> On Mon, Aug 13, 2018 at 09:56:44AM 
-0400, Paul Belanger wrote: > On Thu, Aug 02, 2018 at 08:01:46PM -0400, Paul Belanger wrote: > > Greetings, > > > > We've had fedora-28 nodes online for some time in openstack-infra, I'd like to > > finish the migration process and remove fedora-27 images. > > > > Please take a moment to review and approve the following patches[1]. We'll be > > using the fedora-latest nodeset now, which makes it a little easier for > > openstack-infra to migrate to newer versions of fedora. Next time around, we'll > > send out an email to the ML once fedora-29 is online to give projects some time > > to test before we make the change. > > > > Thanks > > - Paul > > > > [1] https://review.openstack.org/#/q/topic:fedora-latest > > > Thanks for the approval of the patches above, today we are blocked by the > following backport for barbican[2]. If we can land this today, we can proceed > with the removal from nodepool. > > Thanks > - Paul > > [2] https://review.openstack.org/590420/ > Thanks to the fast approvals today, we've been able to fully remove fedora-27 from nodepool. All jobs will now use fedora-latest, which is currently fedora-28. We'll send out an email once we are ready to bring fedora-29 online, and promote it to fedora-latest. Thanks - Paul From openstack at nemebean.com Mon Aug 13 22:39:28 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 13 Aug 2018 17:39:28 -0500 Subject: [openstack-dev] =?utf-8?q?=5Boslo=5D_proposing_Mois=C3=A9s_Guimar?= =?utf-8?q?=C3=A3es_for_oslo=2Econfig_core?= In-Reply-To: <1533733971-sup-7865@lrrr.local> References: <1533129742-sup-2007@lrrr.local> <1533733971-sup-7865@lrrr.local> Message-ID: <54361874-8077-a0a6-188f-21001b806740@nemebean.com> On 08/08/2018 08:18 AM, Doug Hellmann wrote: > Excerpts from Doug Hellmann's message of 2018-08-01 09:27:09 -0400: >> Moisés Guimarães (moguimar) did quite a bit of work on oslo.config >> during the Rocky cycle to add driver support.
Based on that work, >> and a discussion we have had since then about general cleanup needed >> in oslo.config, I think he would make a good addition to the >> oslo.config review team. >> >> Please indicate your approval or concerns with +1/-1. >> >> Doug > > Normally I would have added moguimar to the oslo-config-core team > today, after a week's wait. Funny story, though. There is no > oslo-config-core team. > > oslo.config is one of a few of our libraries that we never set up with a > separate review team. It is managed by oslo-core. We could set up a new > review team for that library, but after giving it some thought I > realized that *most* of the libraries are fairly stable, our team is > pretty small, and Moisés is a good guy so maybe we don't need to worry > about that. > > I spoke with Moisés, and he agreed to be part of the larger core team. > He pointed out that the next phase of the driver work is going to happen > in castellan, so it would be useful to have another reviewer there. And > I'm sure we can trust him to be careful with reviews in other repos > until he learns his way around. > > So, I would like to amend my original proposal and suggest that we add > Moisés to the oslo-core team. > > Please indicate support with +1 or present any concerns you have. I > apologize for the confusion on my part. I'm good with this reasoning, so +1 from me. From fungi at yuggoth.org Mon Aug 13 22:44:33 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 13 Aug 2018 22:44:33 +0000 Subject: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ... In-Reply-To: References: <9b0850aa-2c4d-57c6-5a65-746c28607122@gmail.com> Message-ID: <20180813224432.3az2nrzesubvocql@yuggoth.org> On 2018-08-13 16:29:27 -0500 (-0500), Amy Marrich wrote: > I know we did a ping last week in #openstack-ansible for our meeting no > issue. I wonder if it's a length of names thing or a channel setting. [...] 
Freenode's Sigyn bot may not have been invited to #openstack-ansible. We might want to consider kicking it from channels while they have nick registration enforced. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From d.lake at surrey.ac.uk Mon Aug 13 23:08:33 2018 From: d.lake at surrey.ac.uk (d.lake at surrey.ac.uk) Date: Mon, 13 Aug 2018 23:08:33 +0000 Subject: [openstack-dev] OVS-DPDK with NetVirt In-Reply-To: References: , Message-ID: I'm really getting nowhere fast with this. The latest in set of issues appears to be related to the "Permission denied" on the socket for qemu. Just to reprise - this is OVS with DPDK, All-In-One with Intel NICs and ODL NetVirt. Can ANYONE shed any light on this please - I can't believe that this isn't a very standard deployment and given that it works without DPDK on OVS I can't believe that it hasn't been seen hundreds of times beore. Thanks David From: Lake D Mr (PG/R - Elec Electronic Eng) Sent: 13 August 2018 16:35 To: 'Venkatrangan G - ERS, HCL Tech' ; dayavanti.gopal.kamath at ericsson.com; netvirt-dev at lists.opendaylight.org Subject: RE: OVS-DPDK with NetVirt Hi OK - I found some more guides which told me I needed to add: [ovs] datapath_type=netdev to ML2_conf which I have done with an extra line in local.conf. Now I am seeing the ports trying to be added as vhost-user ports. BUT. I am seeing this issue in the log: qemu-kvm: -chardev socket,id=charnet0,path=/var/run/openvswitch/vhuab608c58-ae: Failed to connect socket /var/run/openvswitch/vhuab608c58-ae: Permission denied\n']#033[00m Any ideas? 
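On the vhost-user "Permission denied" error: a common first check is whether the qemu process user can traverse to and open the socket path at all. A hedged diagnostic sketch follows — it only inspects the classic mode bits, and on CentOS, SELinux can still deny access even when everything below looks fine.

```python
import os
import stat

def world_accessible(path):
    """Walk from the root down to `path` and report the first component
    a non-owner process (e.g. the qemu user) could not traverse or open.

    Mode-bit check only -- SELinux or ACLs can still deny access even
    when these bits look fine.  The root directory itself is assumed
    traversable.
    """
    parts = []
    p = os.path.abspath(path)
    while p != os.path.dirname(p):
        parts.append(p)
        p = os.path.dirname(p)
    for component in reversed(parts):
        if not os.path.exists(component):
            return (False, "%s does not exist" % component)
        mode = os.stat(component).st_mode
        if stat.S_ISDIR(mode) and not mode & stat.S_IXOTH:
            return (False, "%s is not world-traversable" % component)
    if not os.stat(os.path.abspath(path)).st_mode & stat.S_IRWXO:
        return (False, "%s has no world permissions" % path)
    return (True, "mode bits look fine")
```

Running this against `/var/run/openvswitch/vhu...` on the affected host would show whether the problem is plain file permissions; if the bits look fine, the usual suspects are SELinux policy or the user/group qemu runs as versus the user OVS creates the socket as.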
This is on an all-in-one system using CentOS 7.5 Thanks David From: Venkatrangan G - ERS, HCL Tech > Sent: 13 August 2018 10:36 To: Lake D Mr (PG/R - Elec Electronic Eng) >; dayavanti.gopal.kamath at ericsson.com; netvirt-dev at lists.opendaylight.org Subject: RE: OVS-DPDK with NetVirt Hi David, I think you can run this command on your control node sudo neutron-odl-ovs-hostconfig --config-file=/etc/neutron/neutron.conf --debug --ovs_dpdk --bridge_mappings=physnet1:br-physnet1 (Not exactly sure of all the arguments, Please run this command in the control node with dpdk option, I think that should help) Regards, Venkat G (When there is no wind....row!!!) From: netvirt-dev-bounces at lists.opendaylight.org > On Behalf Of d.lake at surrey.ac.uk Sent: 13 August 2018 14:01 To: dayavanti.gopal.kamath at ericsson.com; netvirt-dev at lists.opendaylight.org Subject: Re: [netvirt-dev] OVS-DPDK with NetVirt Good morning all I wonder if someone could help with this please. I don't know whether I need to add anything into ML2 to have the br-int installed in netdev mode or whether something else is wrong. Thank you in advance David Sent from my iPhone ________________________________ From: Lake D Mr (PG/R - Elec Electronic Eng) Sent: Friday, August 10, 2018 10:57:02 PM To: Dayavanti Gopal Kamath; netvirt-dev at lists.opendaylight.org Subject: RE: OVS-DPDK with NetVirt Hi The first link you sent doesn't work? I've no idea what a pseudoagent binding driver is.... All I've done is to follow the instructions for moving to DPDK on my existing ODL+OpenStack system which uses Devstack to install. My understanding is that I needed to enable DPDK in OVS. I do that with the following command: ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true I then unbound the DPDK NICs from the kernel mode driver and bound them to vfio-pci using "dpdk-devbind."
Once that is done, I created 4 bridges in OVS which all use the netdev datapath: ovs-vsctl add-br br-dpdk1 -- set bridge br-dpdk1 datapath_type=netdev ovs-vsctl add-br br-dpdk2 -- set bridge br-dpdk2 datapath_type=netdev ovs-vsctl add-br br-dpdk3 -- set bridge br-dpdk3 datapath_type=netdev ovs-vsctl add-br br-dpdk4 -- set bridge br-dpdk4 datapath_type=netdev Then I added the ports for the NICs to each bridge: sudo ovs-vsctl add-port br-dpdk1 dpdk-p1 -- set Interface dpdk-p1 type=dpdk options:dpdk-devargs=0000:04:00.0 sudo ovs-vsctl add-port br-dpdk2 dpdk-p2 -- set Interface dpdk-p2 type=dpdk options:dpdk-devargs=0000:04:00.1 sudo ovs-vsctl add-port br-dpdk3 dpdk-p3 -- set Interface dpdk-p3 type=dpdk options:dpdk-devargs=0000:05:00.0 sudo ovs-vsctl add-port br-dpdk4 dpdk-p4 -- set Interface dpdk-p4 type=dpdk options:dpdk-devargs=0000:05:00.1 Having done that, I can verify that I can see traffic in the bridge using ovs-tcpdump so I know that the data is reaching OVS from the wire. Then I run Devstack stack.sh and I get a working system with four physical networks. However, this blog - https://joshhershberg.wordpress.com/2017/03/07/opendaylight-netvirt-dpdk-plumbing-how-it-all-works-together/ seems to indicate that the br-int should be automatically created by ODL as part of the installation process in netdev mode by virtue of the fact that it has read the datapath type from OVSDB and would therefore ensure that all ports are created with netdev. But this doesn't appear to be happening because I see messages in karaf.log telling me that the ports are NOT in dpdk mode. The symptom is that when I create a VM, a TAP interface is built and I can see traffic into OVS and to/from the netns qdhcp, but traffic is not crossing between the br-dpdk ports and the ports associated with the VMs. 
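Dayavanti's reply further down suggests populating the OVSDB Open_vSwitch table with a pseudo-agentdb host-config entry so that ports get bound as vhostuser. Assembling that JSON in code avoids shell-quoting mistakes; this sketch simply mirrors the sample quoted in this thread (field names and values are taken from that sample — verify them against networking-odl's hostconfig devref before relying on them):

```python
import json

def build_odl_l2_hostconfig(bridge_mappings,
                            socket_dir="/var/run/openvswitch"):
    """Assemble the odl_os_hostconfig_config_odl_l2 JSON string.

    Field names and values mirror the sample quoted in this thread;
    check networking-odl's hostconfig devref before relying on them.
    """
    config = {
        "supported_vnic_types": [{
            "vnic_type": ["normal"],
            "vif_type": "vhostuser",
            "vif_details": {
                "uuid": "TEST_UUID",
                "has_datapath_type_netdev": True,
                "support_vhost_user": True,
                "port_prefix": "vhu",
                "vhostuser_socket_dir": socket_dir,
                "vhostuser_ovs_plug": True,
                "vhostuser_mode": "server",
                # $PORT_ID is substituted by networking-odl, not here.
                "vhostuser_socket": socket_dir + "/vhu$PORT_ID",
            },
        }],
        "allowed_network_types": ["vlan", "vxlan"],
        "bridge_mappings": bridge_mappings,
    }
    return json.dumps(config)
```

The resulting string can then be set with `ovs-vsctl set Open_vSwitch . external_ids:odl_os_hostconfig_config_odl_l2='...'` along with the `odl_os_hostconfig_hostid` and `host_type` entries shown later in the thread.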
I've also read this note https://software.intel.com/en-us/forums/networking/topic/704506 which seems to indicate some additional ML2 configuration is required but that would seem to run counter to the instructions given in the blog referenced earlier! I'm loath to start manually changing anything in the OVS table because last time I asked a question about adding OVS rules to do routing across OVS I was told that really one should not touch the OVS tables manually if integrated with ODL and NetVirt. This is all rather confusing. David From: Dayavanti Gopal Kamath > Sent: 10 August 2018 19:03 To: Lake D Mr (PG/R - Elec Electronic Eng) >; netvirt-dev at lists.opendaylight.org Subject: RE: OVS-DPDK with NetVirt Hi david, Are you using the pseudoagent binding driver for binding the vif? In that case, ovsdb openvswitch table needs to be populated with host config information- https://github.com/openstack/networking-odl/blob/master/doc/source/devref/hostconfig.rst https://blueprints.launchpad.net/networking-odl/+spec/pseudo-agentdb-binding for netdev, your openvswitch table could look like this - external_ids: odl_os_hostconfig_hostid= external_ids: host_type= ODL_L2 external_ids: odl_os_hostconfig_config_odl_l2 = "{"supported_vnic_types": [{"vnic_type": ["normal"], "vif_type": "vhostuser", "vif_details": {"uuid": "TEST_UUID", "has_datapath_type_netdev": True, "support_vhost_user": True, "port_prefix": "vhu", "vhostuser_socket_dir": "/var/run/openvswitch", "vhostuser_ovs_plug": True, "vhostuser_mode": "server", "vhostuser_socket": "/var/run/openvswitch/vhu$PORT_ID"} }], "allowed_network_types": ["vlan", "vxlan"], "bridge_mappings": {"physnet1":"br-ex"}}" From: netvirt-dev-bounces at lists.opendaylight.org [mailto:netvirt-dev-bounces at lists.opendaylight.org] On Behalf Of d.lake at surrey.ac.uk Sent: Friday, August 10, 2018 7:59 PM To: netvirt-dev at lists.opendaylight.org Subject: [netvirt-dev] OVS-DPDK with NetVirt Hello I have installed OVS with DPDK support and
created bridges to map my DPDK-mode interfaces to provider networks as below: ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true ovs-vsctl add-br br-dpdk1 -- set bridge br-dpdk1 datapath_type=netdev ovs-vsctl add-br br-dpdk2 -- set bridge br-dpdk2 datapath_type=netdev ovs-vsctl add-br br-dpdk3 -- set bridge br-dpdk3 datapath_type=netdev ovs-vsctl add-br br-dpdk4 -- set bridge br-dpdk4 datapath_type=netdev sudo ovs-vsctl add-port br-dpdk1 dpdk-p1 -- set Interface dpdk-p1 type=dpdk options:dpdk-devargs=0000:04:00.0 sudo ovs-vsctl add-port br-dpdk2 dpdk-p2 -- set Interface dpdk-p2 type=dpdk options:dpdk-devargs=0000:04:00.1 sudo ovs-vsctl add-port br-dpdk3 dpdk-p3 -- set Interface dpdk-p3 type=dpdk options:dpdk-devargs=0000:05:00.0 sudo ovs-vsctl add-port br-dpdk4 dpdk-p4 -- set Interface dpdk-p4 type=dpdk options:dpdk-devargs=0000:05:00.1 I have ODL provider mappings between physnet1:br-dpdk1 etc and I can create flat networks using the provider network names. BUT. I am still seeing the tap interfaces in the ovs-vsctl show and in karaf.log it appears that the VM interfaces are NOT being created as type vhostuser. This blog - https://joshhershberg.wordpress.com/2017/03/07/opendaylight-netvirt-dpdk-plumbing-how-it-all-works-together/ - seems to suggest that the br-int should be created as a netdev but I don't think this is happening. Is there any config change I need to make to ML2 to make br-int into a netdev datapath? Thanks David ::DISCLAIMER:: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only. 
E-mail transmission is not guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or may contain viruses in transmission. The e-mail and its contents (with or without referred errors) shall therefore not attach any liability on the originator or HCL or its affiliates. Views or opinions, if any, presented in this email are solely those of the author and may not necessarily reflect the views or opinions of HCL or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of this message without the prior written consent of authorized representative of HCL is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately. Before opening any email and/or attachments, please check them for viruses and other defects. -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Mon Aug 13 23:10:30 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Mon, 13 Aug 2018 16:10:30 -0700 Subject: [openstack-dev] [requirements][heat][congress] gabbi<1.42.1 causing error in queens dsvm Message-ID: It appears that gabbi<1.42.1 is causing an error with heat tempest plugin in congress stable/queens dsvm job [1][2][3]. The issue was addressed in heat tempest plugin [4], but the problem remains for stable/queens jobs because the queens upper-constraint is still at 1.40.0 [5]. Any suggestions on how to proceed? Thank you!
[1] https://bugs.launchpad.net/heat-tempest-plugin/+bug/1749218 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1609361 [3] http://logs.openstack.org/41/567941/2/check/congress-devstack-api-mysql/c232d8a/job-output.txt.gz#_2018-08-13_11_46_28_441837 [4] https://review.openstack.org/#/c/544025/ [5] https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L245 From amy at demarco.com Mon Aug 13 23:10:33 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 13 Aug 2018 18:10:33 -0500 Subject: [openstack-dev] =?utf-8?q?=28no_subject=29?= Message-ID: Hi everyone, If you’re running OpenStack, please participate in the User Survey to share more about the technology you are using and provide feedback for the community by *August 21 - hurry, it’s next week!!* By completing a deployment, you will qualify as an AUC and receive a $300 USD ticket to the two upcoming Summits. Please help us spread the word, as we're trying to gather as much real-world deployment data as possible to share back with both the operator and developer communities. We are only conducting one survey this year, and the report will be published at the Berlin Summit. If you would like OpenStack user data in the meantime, check out the analytics dashboard, which updates in real time throughout the year. The information provided is confidential and will only be presented in aggregate unless you consent to make it public. The deadline to complete the survey and be part of the next report is next *Tuesday, August 21 at 23:59 UTC.* - You can login and complete the OpenStack User Survey here: http://www.openstack.org/user-survey - If you’re interested in joining the OpenStack User Survey Working Group to help with the survey analysis, please complete this form: https://openstackfoundation.formstack.com/forms/user_survey_working_group - Help us promote the User Survey: https://twitter.com/OpenStack/status/993589356312088577 Please let me know if you have any questions.
Thanks, Amy Amy Marrich (spotz) OpenStack User Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Mon Aug 13 23:14:31 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 13 Aug 2018 18:14:31 -0500 Subject: [openstack-dev] [requirements][heat][congress] gabbi<1.42.1 causing error in queens dsvm In-Reply-To: References: Message-ID: <20180813231431.xxv7nqnf6m54qd2o@gentoo.org> On 18-08-13 16:10:30, Eric K wrote: > It appears that gabbi<1.42.1 is causing an error with heat tempest > plugin in congress stable/queens dsvm job [1][2][3]. The issue was > addressed in heat tempest plugin [4], but the problem remains for > stable/queens jobs because the queens upper-constraint is still at > 1.40.0 [5]. > > Any suggestions on how to proceed? Thank you! > > [1] https://bugs.launchpad.net/heat-tempest-plugin/+bug/1749218 > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1609361 > [3] http://logs.openstack.org/41/567941/2/check/congress-devstack-api-mysql/c232d8a/job-output.txt.gz#_2018-08-13_11_46_28_441837 > [4] https://review.openstack.org/#/c/544025/ > [5] https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L245 > iirc, a UC bump is allowed if it fixes gating, so that's alright by me (reqs) -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From amotoki at gmail.com Tue Aug 14 04:19:27 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 14 Aug 2018 13:19:27 +0900 Subject: [openstack-dev] [requirements][release][osc] FFE osc-lib 1.11.1 release Message-ID: Hi, I would like to request FFE for osc-lib 1.11.1 release. https://review.openstack.org/591556 osc-lib commit e3d772050f3f4de6369b3dd1ba1269e2903666f7 replaced issubclass() with isinstance() unexpectedly.
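The regression Akihiro describes is easy to reproduce in plain Python: code that passes command *classes* through an `issubclass()` check stops matching once the check becomes `isinstance()`, because a class object is not an instance of its own base. A generic illustration with stand-in classes — not osc-lib's actual code:

```python
class Command:               # stand-in for the base command class
    pass

class ShowNetwork(Command):  # stand-in for an OSC plugin command class
    pass

# The original check: a plugin command class is a subclass of the base.
assert issubclass(ShowNetwork, Command)

# The accidental replacement: the class object itself is an instance
# of `type`, not of `Command`, so the same check silently fails...
assert not isinstance(ShowNetwork, Command)

# ...while an actual instance still passes, which is why the bug only
# bites code paths that pass classes around (e.g. command discovery).
assert isinstance(ShowNetwork(), Command)
```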
As a result, osc-lib 1.11.0 breaks existing OSC plugins and the neutronclient OSC plugin gate is now broken. To fix the gate, osc-lib 1.11.1 release would be appreciated. upper-constraints is bumped to osc-lib 1.11.1. It is better to block osc-lib 1.11.0 but I am not sure whether we need to block it or not. Thanks, Akihiro Motoki (amotoki) From prometheanfire at gentoo.org Tue Aug 14 04:38:01 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 13 Aug 2018 23:38:01 -0500 Subject: [openstack-dev] [requirements][release][osc] FFE osc-lib 1.11.1 release In-Reply-To: References: Message-ID: <20180814043801.wfvtnj3nctyzarnb@gentoo.org> On 18-08-14 13:19:27, Akihiro Motoki wrote: > Hi, > > I would like to request FFE for osc-lib 1.11.1 release. > > https://review.openstack.org/591556 > > osc-lib commit e3d772050f3f4de6369b3dd1ba1269e2903666f7 replaced > issubclass() with isinstance() unexpectedly. > As a result, osc-lib 1.11.0 breaks existing OSC plugins and > the neutronclient OSC plugin gate is now broken. > To fix the gate, osc-lib 1.11.1 release would be appreciated. > > upper-constraints is bumped to osc-lib 1.11.1. > It is better to block osc-lib 1.11.0 but I am not sure whether we need > to block it or not. > > What libs (further down the dep tree) would need the exclusion? They'd likely also need a FFE for at least a UC bump. You have my (and requirements) ack for a UC only bump at least. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From amotoki at gmail.com Tue Aug 14 04:56:28 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 14 Aug 2018 13:56:28 +0900 Subject: [openstack-dev] [requirements][release][osc] FFE osc-lib 1.11.1 release In-Reply-To: <20180814043801.wfvtnj3nctyzarnb@gentoo.org> References: <20180814043801.wfvtnj3nctyzarnb@gentoo.org> Message-ID: On Tue, Aug 14, 2018 at 13:38, Matthew Thode wrote: > > On 18-08-14 13:19:27, Akihiro Motoki wrote: > > Hi, > > > > I would like to request FFE for osc-lib 1.11.1 release. > > > > https://review.openstack.org/591556 > > > > osc-lib commit e3d772050f3f4de6369b3dd1ba1269e2903666f7 replaced > > issubclass() with isinstance() unexpectedly. > > As a result, osc-lib 1.11.0 breaks existing OSC plugins and > > the neutronclient OSC plugin gate is now broken. > > To fix the gate, osc-lib 1.11.1 release would be appreciated. > > > > upper-constraints is bumped to osc-lib 1.11.1. > > It is better to block osc-lib 1.11.0 but I am not sure whether we need > > to block it or not. > > > > What libs (further down the dep tree) would need the exclusion? They'd > likely also need a FFE for at least a UC bump. > You have my (and requirements) ack for a UC only bump at least. AFAIK all OSC plugins and OSC directly consume osc-lib and there are no libs that consume osc-lib. In this case, we don't need to block a specific version of osc-lib, right? Perhaps it is just because I don't understand the current policy well. From neutronclient and other OSC plugin perspective, it is fine to bump UC only.
Thanks, Akihiro Motoki (amotoki) > > -- > Matthew Thode (prometheanfire) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ramishra at redhat.com Tue Aug 14 05:10:55 2018 From: ramishra at redhat.com (Rabi Mishra) Date: Tue, 14 Aug 2018 10:40:55 +0530 Subject: [openstack-dev] [requirements][heat][congress] gabbi<1.42.1 causing error in queens dsvm In-Reply-To: References: Message-ID: On Tue, Aug 14, 2018 at 4:40 AM, Eric K wrote: > It appears that gabbi<1.42.1 is causing on error with heat tempest > plugin in congress stable/queens dsvm job [1][2][3]. I wonder why you're enabling heat-tempest-plugin in the first place? I see a number of tempest plugins enabled. However, you don't seem to gate on the tests in those plugins[1]. [1] https://github.com/openstack/congress/blob/master/playbooks/legacy/congress-devstack-api-base/run.yaml#L61 > The issue was > addressed in heat tempest plugin [4], but the problem remains for > stable/queens jobs because the queens upper-constraint is still at > 1.40.0 [5]. > > Yeah, a uc bump to 1.42.1 is required if you're enabling it. > Any suggestions on how to proceed? Thank you! 
> > [1] https://bugs.launchpad.net/heat-tempest-plugin/+bug/1749218 > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1609361 > [3] http://logs.openstack.org/41/567941/2/check/congress- > devstack-api-mysql/c232d8a/job-output.txt.gz#_2018-08-13_11_46_28_441837 > [4] https://review.openstack.org/#/c/544025/ > [5] https://github.com/openstack/requirements/blob/stable/ > queens/upper-constraints.txt#L245 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Tue Aug 14 05:11:53 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 14 Aug 2018 00:11:53 -0500 Subject: [openstack-dev] [requirements][release][osc] FFE osc-lib 1.11.1 release In-Reply-To: References: <20180814043801.wfvtnj3nctyzarnb@gentoo.org> Message-ID: <20180814051153.q7jecuy23ljlg2aw@gentoo.org> On 18-08-14 13:56:28, Akihiro Motoki wrote: > 2018年8月14日(火) 13:38 Matthew Thode : > > > > On 18-08-14 13:19:27, Akihiro Motoki wrote: > > > Hi, > > > > > > I would like to request FFE for osc-lib 1.11.1 release. > > > > > > https://review.openstack.org/591556 > > > > > > osc-iib commit e3d772050f3f4de6369b3dd1ba1269e2903666f7 replaced > > > issubclass() with isinstance() unexpectedly. > > > As a result, osc-lib 1.11.0 breaks existing OSC plugins and > > > the neutronclient OSC plugin gate is now broken. > > > To fix the gate, osc-lib 1.11.1 release would be appreciated. > > > > > > upper-constraints is bumped to osc-lib 1.11.1. > > > It is better to block osc-lib 1.11.0 but I am familiar whether we need > > > to block it or not. > > > > > > > What libs (further down the dep tree) would need the exclusion? 
> > > They'd likely also need a FFE for at least a UC bump.
> >
> > You have my (and requirements) ack for a UC only bump at least.
>
> AFAIK all OSC plugins and OSC directly consume osc-lib and there are no
> libs that consume osc-lib.
> In this case, we don't need to block a specific version of osc-lib, right?
> Perhaps it is just because I don't understand the current policy well.
>
> From the neutronclient and other OSC plugin perspective, it is fine to bump UC only.
> The current list is the following.

+----------------------------------------+---------------------------------------------------------------------------------------------+------+-----------------------------------------------+
| Repository | Filename | Line | Text |
+----------------------------------------+---------------------------------------------------------------------------------------------+------+-----------------------------------------------+
| openstack-zuul-jobs | playbooks/legacy/requirements-integration-dsvm/run.yaml | 76 | export PROJECTS="openstack/osc-lib $PROJECTS" |
| openstack-zuul-jobs | playbooks/legacy/requirements-integration-dsvm-ubuntu-trusty/run.yaml | 77 | export PROJECTS="openstack/osc-lib $PROJECTS" |
| osc-placement | requirements.txt | 8 | osc-lib>=1.2.0 # Apache-2.0 |
| osops-tools-contrib | ansible_requirements.txt | 31 | osc-lib==1.1.0 |
| python-adjutantclient | requirements.txt | 9 | osc-lib>=1.5.1 # Apache-2.0 |
| python-aodhclient | requirements.txt | 7 | osc-lib>=1.0.1 # Apache-2.0 |
| python-cyborgclient | requirements.txt | 16 | osc-lib>=1.8.0 # Apache-2.0 |
| python-designateclient | requirements.txt | 6 | osc-lib>=1.8.0 # Apache-2.0 |
| python-distilclient | requirements.txt | 4 | osc-lib>=1.7.0 # Apache-2.0 |
| python-glareclient | requirements.txt | 13 | osc-lib>=1.7.0 # Apache-2.0 |
| python-heatclient | requirements.txt | 9 | osc-lib>=1.8.0 # Apache-2.0 |
| python-iotronicclient | requirements.txt | 9 | osc-lib>=1.2.0 # Apache-2.0 |
| python-ironic-inspector-client | requirements.txt | 5 | osc-lib>=1.8.0 # Apache-2.0 |
| python-ironicclient | requirements.txt | 9 | osc-lib>=1.10.0 # Apache-2.0 |
| python-karborclient | requirements.txt | 11 | osc-lib>=1.8.0 # Apache-2.0 |
| python-kingbirdclient | requirements.txt | 5 | osc-lib>=1.2.0 # Apache-2.0 |
| python-magnumclient | requirements.txt | 16 | osc-lib>=1.8.0 # Apache-2.0 |
| python-masakariclient | requirements.txt | 6 | osc-lib>=1.8.0 # Apache-2.0 |
| python-mistralclient | requirements.txt | 5 | osc-lib>=1.8.0 # Apache-2.0 |
| python-moganclient | requirements.txt | 6 | osc-lib>=1.8.0 # Apache-2.0 |
| python-monascaclient | requirements.txt | 5 | osc-lib>=1.8.0 # Apache-2.0 |
| python-muranoclient | requirements.txt | 15 | osc-lib>=1.8.0 # Apache-2.0 |
| python-neutronclient | requirements.txt | 9 | osc-lib>=1.8.0 # Apache-2.0 |
| python-octaviaclient | requirements.txt | 20 | osc-lib>=1.8.0 # Apache-2.0 |
| python-openstackclient | requirements.txt | 11 | osc-lib>=1.10.0 # Apache-2.0 |
| python-pankoclient | requirements.txt | 6 | osc-lib>=1.8.0 # Apache-2.0 |
| python-picassoclient | requirements.txt | 6 | osc-lib>=1.2.0 # Apache-2.0 |
| python-qinlingclient | requirements.txt | 13 | osc-lib>=1.8.0 # Apache-2.0 |
| python-rsdclient | requirements.txt | 7 | osc-lib>=1.7.0 # Apache-2.0 |
| python-saharaclient | requirements.txt | 9 | osc-lib>=1.11.0 # Apache-2.0 |
| python-searchlightclient | requirements.txt | 7 | osc-lib>=1.8.0 # Apache-2.0 |
| python-senlinclient | requirements.txt | 10 | osc-lib>=1.8.0 # Apache-2.0 |
| python-tackerclient | requirements.txt | 15 | osc-lib>=1.8.0 # Apache-2.0 |
| python-tatuclient | requirements.txt | 6 | osc-lib>=1.8.0 # Apache-2.0 |
| python-tricircleclient | requirements.txt | 7 | osc-lib>=1.8.0 # Apache-2.0 |
| python-tripleoclient | requirements.txt | 17 | osc-lib>=1.8.0 # Apache-2.0 |
| python-troveclient | requirements.txt | 15 | osc-lib>=1.8.0 # Apache-2.0 |
| python-valenceclient | requirements.txt | 9 | osc-lib>=1.8.0 # Apache-2.0 |
| python-vitrageclient | requirements.txt | 8 | osc-lib>=1.8.0 # Apache-2.0 |
| python-watcherclient | requirements.txt | 7 | osc-lib>=1.8.0 # Apache-2.0 |
| python-zaqarclient | requirements.txt | 16 | osc-lib>=1.8.0 # Apache-2.0 |
| python-zunclient | requirements.txt | 9 | osc-lib>=1.8.0 # Apache-2.0 |
| requirements | global-requirements.txt | 147 | osc-lib # Apache-2.0 |
| requirements | openstack_requirements/tests/files/upper-constraints.txt | 313 | osc-lib===1.3.0 |
| stx-upstream | openstack/python-openstackclient/centos/meta_patches/1000-remove-version-requirements.patch | 19 | -Requires: python-osc-lib >= 1.7.0 |
| upstream-institute-virtual-environment | elements/upstream-training/static/tmp/requirements.txt | 120 | osc-lib==1.8.0 |
| vmware-nsx | requirements.txt | 14 | osc-lib>=1.8.0 # Apache-2.0 |
+----------------------------------------+---------------------------------------------------------------------------------------------+------+-----------------------------------------------+

Maybe it'd be better to figure out what's using that removed method and
those would need the update?

--
Matthew Thode (prometheanfire)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From tony at bakeyournoodle.com Tue Aug 14 05:28:07 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Tue, 14 Aug 2018 15:28:07 +1000
Subject: [openstack-dev] [requirements][release][osc] FFE osc-lib 1.11.1 release
In-Reply-To: <20180814051153.q7jecuy23ljlg2aw@gentoo.org>
References: <20180814043801.wfvtnj3nctyzarnb@gentoo.org>
 <20180814051153.q7jecuy23ljlg2aw@gentoo.org>
Message-ID: <20180814052807.GB6001@thor.bakeyournoodle.com>

On Tue, Aug 14, 2018 at 12:11:53AM -0500, Matthew Thode wrote:
> Maybe it'd be better to figure out what's using that removed method and
> those would need the update?
Given we have per-project deps in rocky only those that *need* the
exclusion will need to apply it.

I think it's fair to accept the U-c bump and block 1.11.0 in
global-requirements. Then any project that finds they're broken next
week can just add the exclusion themselves and move on.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From tony at bakeyournoodle.com Tue Aug 14 06:03:11 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Tue, 14 Aug 2018 16:03:11 +1000
Subject: [openstack-dev] [requirements][heat][congress] gabbi<1.42.1 causing error in queens dsvm
In-Reply-To: 
References: 
Message-ID: <20180814060311.GC6001@thor.bakeyournoodle.com>

On Mon, Aug 13, 2018 at 04:10:30PM -0700, Eric K wrote:
> It appears that gabbi<1.42.1 is causing an error with the heat tempest
> plugin in the congress stable/queens dsvm job [1][2][3]. The issue was
> addressed in the heat tempest plugin [4], but the problem remains for
> stable/queens jobs because the queens upper-constraint is still at
> 1.40.0 [5].
>
> Any suggestions on how to proceed? Thank you!

https://review.openstack.org/591561

Should fix it. You can create a no-op test that:

Depends-On: https://review.openstack.org/591561

to verify it works. Doing so and reporting the change ID would be
really helpful.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From gergely.csatari at nokia.com Tue Aug 14 07:34:17 2018
From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest))
Date: Tue, 14 Aug 2018 07:34:17 +0000
Subject: [openstack-dev] [edge][glance][nova][starlingx]: Edge sessions at the PTG
Message-ID: 

Hi,

We will have a whole-day discussion on edge on the Tuesday of the PTG [1],
where we plan [2] to have discussions on image handling, Nova, Keystone and
StarlingX. If you are interested in any of these, please add your name to
the name lists.

Also, if you have ideas to discuss during the day, please add them to the
list.

Thanks,
Gerg0

[1]: https://www.openstack.org/ptg#tab_schedule
[2]: https://etherpad.openstack.org/p/EdgeComputingGroupPTG4
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jpena at redhat.com Tue Aug 14 08:22:47 2018
From: jpena at redhat.com (Javier Pena)
Date: Tue, 14 Aug 2018 04:22:47 -0400 (EDT)
Subject: [openstack-dev] [rpm-packaging] Step down as a reviewer
In-Reply-To: <0a42dac71ee047ff9f4b1ef87114f019c617d6b8.camel@suse.de>
References: <0a42dac71ee047ff9f4b1ef87114f019c617d6b8.camel@suse.de>
Message-ID: <105826055.24955099.1534234967799.JavaMail.zimbra@redhat.com>

----- Original Message -----
> Dear rpm-packaging team,
>
> I was lucky to help doing reviews for the rpm-packaging OpenStack
> project for the last couple of release cycles. I learned a lot during
> this time.
>
> I will change my role at SUSE at the end of the month (August 2018), so
> I request to be removed from the core position on those projects.
>
> Also, a big thank you to the team for the provided help during this time.

Alberto, thank you very much for your contributions. I wish you the best
in your new position.

Regards,
Javier

> Saludos!
> > -- > SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > Maxfeldstraße 5, 90409 Nürnberg, Germany > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ashlee at openstack.org Tue Aug 14 11:04:56 2018 From: ashlee at openstack.org (Ashlee Ferguson) Date: Tue, 14 Aug 2018 06:04:56 -0500 Subject: [openstack-dev] Berlin Summit Schedule Live! Message-ID: <4008CE47-25A0-43B1-8FEF-D2CD52330C7C@openstack.org> The schedule for the Berlin Summit is live! Check out 100+ sessions, demos, and workshops covering 35+ open source projects in the following Tracks: • CI/CD • Container Infrastructure • Edge Computing • HPC / GPU / AI • Private & Hybrid Cloud • Public Cloud • Telecom & NFV Log in with your OpenStackID and start building your schedule now! Register for the Summit - Get your Summit ticket for USD $699 before the price increases on August 21 at 11:59pm PT (August 22 at 6:59 UTC) For speakers with accepted sessions, look for an email from speakersupport at openstack.org for next steps on registration. Thank you to our Programming Committee! They have once again taken time out of their busy schedules to help create another round of outstanding content for the OpenStack Summit. The OpenStack Foundation relies on the community-nominated Programming Committee, along with your Community Votes to select the content of the summit. If you're curious about this process, you can read more about it here where we have also listed the Programming Committee members. Interested in sponsoring the Berlin Summit? Learn more here Cheers, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dtantsur at redhat.com Tue Aug 14 12:22:12 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 14 Aug 2018 14:22:12 +0200 Subject: [openstack-dev] [ironic] ironic-staging-drivers: what to do? In-Reply-To: References: Message-ID: <39838cdb-0888-68d0-7f13-815e75c1890a@redhat.com> Inline On 08/13/2018 08:40 PM, Julia Kreger wrote: > Greetings fellow ironicans! > > As many of you might know an openstack/ironic-staging-drivers[1] > repository exists. What most might not know is that it was > intentionally created outside of ironic's governance[2]. > > At the time it was created ironic was moving towards removing drivers > that did not meet our third-party CI requirement[3] to be in-tree. The > repository was an attempt to give a home to what some might find > useful or where third party CI is impractical or cost-prohibitive and > thus could not be officially part of Ironic the service. There was > hope that drivers could land in ironic-staging-drivers and possibly > graduate to being moved in-tree with third-party CI. As our community > has evolved we've not stopped and revisited the questions. > > With our most recent release over, I believe we need to ask ourselves > if we should consider moving ironic-staging-drivers into our > governance. Not voting on this, since I'm obviously biased. Consider me +0 :) > > Over the last couple of releases several contributors have found > themselves trying to seek out two available reviewers to merge even > trivial fixes[4]. Due to the team being so small this was no easy > task. As a result, I'm wondering why not move the repository into > governance, grant ironic-core review privileges upon the repository, > and maintain the purpose and meaning of the repository. This would > also result in the repository's release becoming managed via the > release management process which is a plus. Strictly speaking, nothing prevents us from granting ironic-core +2 on it right now. 
It's a different question whether they'll actually review it. We need a commitment from >50% cores to review it more or less regularly, otherwise we'll end up in the same situation. > > We could then propose an actual graduation process and help alleviate > some of the issues where driver code is iterated upon for long periods > of time before landing. At the same time I can see at least one issue > which is if we were to do that, then we would also need to manage > removal through the same path. I don't think we really "need to", but we certainly can. Now that I think that we could use ironic-staging-drivers as an *actual* staging area for new drivers, I'm starting to lean towards +1 on this whole idea. This still leaves some drivers that will never get a CI. > > I know there are concerns over responsibility in terms of code > ownership and quality, but I feel like we already hit such issues[5], > like those encountered when Dmitry removed classic drivers[6] from the > repository and also encountered issues just prior to the latest > release[7][8]. Well, yes, I personally have to care about this repository anyway. Dmitry > > This topic has come up in passing at PTGs and most recently on IRC[9], > and I think we ought to discuss it during our next weekly meeting[10]. > I've gone ahead and added an item to the agenda, but we can also > discuss via email. 
> > -Julia > > [1]: http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/projects.yaml#n4571 > [2]: http://git.openstack.org/cgit/openstack/ironic-staging-drivers/tree/README.rst#n16 > [3]: https://specs.openstack.org/openstack/ironic-specs/specs/approved/third-party-ci.html > [4]: https://review.openstack.org/#/c/548943/ > [5]: https://review.openstack.org/#/c/541916/ > [6]: https://review.openstack.org/567902 > [7]: https://review.openstack.org/590352 > [8]: https://review.openstack.org/590401 > [9]: http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2018-08-09.log.html#t2018-08-09T11:55:27 > [10]: https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From cdent+os at anticdent.org Tue Aug 14 12:35:05 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 14 Aug 2018 13:35:05 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-33 Message-ID: HTML: https://anticdent.org/tc-report-18-33.html ## Dead, Gone or Stable [Last week](https://anticdent.org/tc-report-18-32.html) saw plenty of discussion about how to deal with projects for which no PTL was found by election or acclaim. That discussion continued this week, but also stimulated discussion on the differences between a project being dead, gone from OpenStack, or stable. 
* [Needing a point person](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-07.log.html#t2018-08-07T13:05:48) * [Risks (or lack) of being leaderless](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-13.log.html#t2018-08-13T08:09:23) Mixed in with that are some dribbles of a theme which has become increasingly common of late: * [Contribution from foundation member corps](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-09.log.html#t2018-08-09T12:49:27) * [The need for janitors, and board members not being the people able to provide](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-09.log.html#t2018-08-09T13:30:12) As a group, the TC has very mixed feelings on these issues. On the one hand everyone would like to keep projects healthy and within OpenStack, where possible. On the other hand, it is important that people who are upstream contributors stop over-committing to compensate for lack of commitment from a downstream that benefits hugely from their labor. Letting projects "die" or become unofficial is one way to clearly signal that there are resourcing problems. In fact, doing so with both [Searchlight](https://review.openstack.org/#/c/588644/) and [Freezer](https://review.openstack.org/#/c/588645/) raised some volunteers to help out as PTLs. However, both of those projects have been languishing for many months. How many second chances do projects get? ## IRC for PTLs Within all the discussion about the health of projects, there was some [discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-08.log.html#t2018-08-08T17:25:20) of whether it was appropriate to expect PTLs to have and use IRC nicks. As the character of the pool of potential PTLs evolves, it might not fit. See the log for a bit more nuance on the issues. ## TC Elections Soon Six seats on the TC will be up for election. 
The nomination period will start at the end of this month. If you're considering running and have any questions, please feel free to ask me. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jim at jimrollenhagen.com Tue Aug 14 12:38:10 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 14 Aug 2018 08:38:10 -0400 Subject: [openstack-dev] [ironic] ironic-staging-drivers: what to do? In-Reply-To: References: Message-ID: On Mon, Aug 13, 2018 at 2:40 PM, Julia Kreger wrote: > Greetings fellow ironicans! > > As many of you might know an openstack/ironic-staging-drivers[1] > repository exists. What most might not know is that it was > intentionally created outside of ironic's governance[2]. > > At the time it was created ironic was moving towards removing drivers > that did not meet our third-party CI requirement[3] to be in-tree. The > repository was an attempt to give a home to what some might find > useful or where third party CI is impractical or cost-prohibitive and > thus could not be officially part of Ironic the service. There was > hope that drivers could land in ironic-staging-drivers and possibly > graduate to being moved in-tree with third-party CI. As our community > has evolved we've not stopped and revisited the questions. > Which questions? With our most recent release over, I believe we need to ask ourselves > if we should consider moving ironic-staging-drivers into our > governance. > > Over the last couple of releases several contributors have found > themselves trying to seek out two available reviewers to merge even > trivial fixes[4]. Due to the team being so small this was no easy > task. As a result, I'm wondering why not move the repository into > governance, grant ironic-core review privileges upon the repository, > and maintain the purpose and meaning of the repository. This would > also result in the repository's release becoming managed via the > release management process which is a plus. 
> Agree with Dmitry, just because ironic-core has +2 doesn't mean they will look at it. I'm not opposed to adding more of the ironic-core folks to it outside of governance, though. While I probably wouldn't review ironic-staging-drivers often whether it is in our governance or not, I'd be happy to review trivial changes to help move things along. We could then propose an actual graduation process and help alleviate > some of the issues where driver code is iterated upon for long periods > of time before landing. At the same time I can see at least one issue > which is if we were to do that, then we would also need to manage > removal through the same path. > Is there any reason we can't have the same graduation process without bringing it into ironic's governance? I know there are concerns over responsibility in terms of code > ownership and quality, but I feel like we already hit such issues[5], > like those encountered when Dmitry removed classic drivers[6] from the > repository and also encountered issues just prior to the latest > release[7][8]. > Yes, this is my primary concern. I don't believe that the ironic team should be responsible for this code, when we can't validate it (manually or via CI). Any code is going to have quality issues from time to time. The difference is who is responsible for taking care of those. [5] and [6] are examples of where we knew we were going to break out-of-tree drivers, and helped fix them because we're kind people - not where we were taking ownership of the code. I suspect if we were aware of other out-of-tree drivers we would have been happy to fix those as well. [7] and [8] are just general code maintenance, and aren't really an argument to me for having the ironic team take over this project. Besides, as Dmitry notes, he "has to" care about ironic-staging-drivers anyway, and 3 out of those 4 commits are his. :) Overall I'm -1, but will live by whatever decision we come to. 
// jim This topic has come up in passing at PTGs and most recently on IRC[9], > and I think we ought to discuss it during our next weekly meeting[10]. > I've gone ahead and added an item to the agenda, but we can also > discuss via email. > -Julia > > [1]: http://git.openstack.org/cgit/openstack-infra/project- > config/tree/gerrit/projects.yaml#n4571 > [2]: http://git.openstack.org/cgit/openstack/ironic-staging- > drivers/tree/README.rst#n16 > [3]: https://specs.openstack.org/openstack/ironic-specs/specs/ > approved/third-party-ci.html > [4]: https://review.openstack.org/#/c/548943/ > [5]: https://review.openstack.org/#/c/541916/ > [6]: https://review.openstack.org/567902 > [7]: https://review.openstack.org/590352 > [8]: https://review.openstack.org/590401 > [9]: http://eavesdrop.openstack.org/irclogs/%23openstack- > ironic/%23openstack-ironic.2018-08-09.log.html#t2018-08-09T11:55:27 > [10]: https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_ > for_next_meeting > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From bdobreli at redhat.com Tue Aug 14 13:19:31 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Tue, 14 Aug 2018 15:19:31 +0200
Subject: [openstack-dev] [tripleo][Edge][FEMDC] Edge clouds and controlplane updates
In-Reply-To: <41de3af6-5f7e-94e5-cfe3-a9090fb8218f@redhat.com>
References: <41de3af6-5f7e-94e5-cfe3-a9090fb8218f@redhat.com>
Message-ID: <0a519cf3-41b6-d040-3759-0c036a44f869@redhat.com>

On 8/13/18 9:47 PM, Giulio Fidente wrote:
> Hello,
>
> I'd like to get some feedback regarding the remaining
> work for the split controlplane spec implementation [1]
>
> Specifically, while for some services like nova-compute it is not
> necessary to update the controlplane nodes after an edge cloud is
> deployed, for other services, like cinder (or glance, probably
> others), it is necessary to do an update of the config files on the
> controlplane when a new edge cloud is deployed.
>
> In fact for services like cinder or glance, which are hosted in the
> controlplane, we need to pull data from the edge clouds (for example
> the newly deployed ceph cluster keyrings and fsid) to configure cinder
> (or glance) with a new backend.
>
> It looks like this demands some architectural changes to solve the
> following two:
>
> - how do we trigger/drive updates of the controlplane nodes after the
> edge cloud is deployed?

Note that there is also a strict(?) requirement for local management
capabilities for edge clouds temporarily disconnected from the central
controlplane. That complicates triggering updates even more. We'll need
at least a notification-and-triggering system to perform the required
state synchronizations, including conflict resolution. If that's the
case, architectural changes to the TripleO deployment framework are
inevitable AFAICT.

> - how do we scale the controlplane parameters to accommodate N
> backends of the same type?
> A very rough approach to the latter could be to use jinja to scale up
> the CephClient service so that we can have multiple copies of it in the
> controlplane.
>
> Each instance of CephClient should provide the ceph config file and
> keyring necessary for each cinder (or glance) backend.
>
> Also note that Ceph is only a particular example but we'd need a similar
> workflow for any backend type.
>
> The etherpad for the PTG session [2] touches this, but it'd be good to
> start this conversation before then.
>
> 1. https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html
> 2. https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane

--
Best regards,
Bogdan Dobrelya,
IRC #bogdando

From sean.mcginnis at gmx.com Tue Aug 14 13:56:27 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Tue, 14 Aug 2018 08:56:27 -0500
Subject: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release
In-Reply-To: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com>
References: <1652d4c0292.bc627e0c30615.4963641156798511337@ghanshyammann.com>
Message-ID: <20180814135626.GA24601@sm-workstation>

On Sun, Aug 12, 2018 at 05:41:20PM +0900, Ghanshyam Mann wrote:
> Hi All,
>
> The Rocky release is a few weeks away, and we all agreed to release
> Tempest plugins with cycle-with-intermediary. Detailed discussion is in
> the ML [1] in case you missed it.
>
> This is a reminder to tag your project's tempest plugins for the Rocky
> release. You should be able to find your plugin's deliverable file under
> the rocky folder in the releases repo[3]. You can refer to the
> cinder-tempest-plugin release as an example.
>
> Feel free to reach out to the release/QA team for any help/query.
> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html
> [2] https://review.openstack.org/#/c/590025/
> [3] https://github.com/openstack/releases/tree/master/deliverables/rocky
>
> -gmann

It should also be noted that these repos will need to be set up with a
release job if they do not already have one. They will need either
publish-to-pypi or publish-to-pypi-python3, similar to what was done here:

https://review.openstack.org/#/c/587623/

And before that is done, please make sure the package is registered and
set up correctly on pypi following these instructions:

https://docs.openstack.org/infra/manual/creators.html#pypi

Requests to create a tempest plugin release will fail validation if this
has not been set up ahead of time.

Thanks,
Sean

From balazs.gibizer at ericsson.com Tue Aug 14 14:01:32 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Tue, 14 Aug 2018 16:01:32 +0200
Subject: [openstack-dev] [nova] Notification subteam meeting cancelled
Message-ID: <1534255292.758.1@smtp.office365.com>

Hi,

There won't be a notification subteam meeting this week.

Cheers,
gibi

From doug at doughellmann.com Mon Aug 13 22:30:02 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 13 Aug 2018 18:30:02 -0400
Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2
In-Reply-To: <20180813211730.cze4vpknwncpqg3b@gentoo.org>
References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com>
 <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com>
 <20180813211730.cze4vpknwncpqg3b@gentoo.org>
Message-ID: <1534199243-sup-5500@lrrr.local>

Excerpts from Matthew Thode's message of 2018-08-13 16:17:30 -0500:
> On 18-08-13 20:28:23, Andreas Jaeger wrote:
> > On 2018-08-13 19:16, Andreas Jaeger wrote:
> > > On 2018-08-13 18:40, Petr Kovar wrote:
> > > > Hi all,
> > > >
> > > > This is a request for an FFE to release openstackdocstheme 1.21.2.
> > > > > > > > > > This mostly fixes usability issues in rendering docs content, so we would > > > > like to update the theme across all project team docs on docs.o.o. > > > > > > I suggest to release quickly a 1.21.3 with > > > https://review.openstack.org/#/c/585517/ - and use that one instead. > > > > Release request: > > https://review.openstack.org/591485 > > > > Would this be a upper-constraint only bump? > > If so reqs acks it We need, eventually, to retrigger documentation builds in all of the open branches using this new version of the theme so we can inject the status info into those old pages. We will have an opportunity to trigger those builds when we move the zuul configuration into each repo as part of the python3 goal this cycle. So, for now we need to update the constraints list on master. But we also need to work quickly to update the constraints lists in the open branches so that is done before we approve the goal-related changes in those branches. Doug From doug at doughellmann.com Tue Aug 14 14:46:24 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 14 Aug 2018 10:46:24 -0400 Subject: [openstack-dev] [requirements][release][osc] FFE osc-lib 1.11.1 release In-Reply-To: <20180814052807.GB6001@thor.bakeyournoodle.com> References: <20180814043801.wfvtnj3nctyzarnb@gentoo.org> <20180814051153.q7jecuy23ljlg2aw@gentoo.org> <20180814052807.GB6001@thor.bakeyournoodle.com> Message-ID: <1534257706-sup-6791@lrrr.local> Excerpts from Tony Breeds's message of 2018-08-14 15:28:07 +1000: > On Tue, Aug 14, 2018 at 12:11:53AM -0500, Matthew Thode wrote: > > > Maybe it'd be better to figure out what's using that removed method and > > those would need the update? > > Given we have per-project deps in rocky only those that *need* the > exclusion will need to apply it. Right. Now that we no longer sync dependencies, releases no longer automatically trigger updates in the consuming projects. 
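To make the fixes being discussed in this thread concrete, the two changes look roughly like the following. This is a sketch only: the broken and fixed version numbers (1.11.0 and 1.11.1) are assumed from the thread subject, and the `>=1.8.0` minimum is taken from the consumer table earlier in the thread.

```
# In an affected project's requirements.txt: keep the existing minimum,
# but exclude the release that removed the method (assumed to be 1.11.0).
osc-lib>=1.8.0,!=1.11.0  # Apache-2.0

# In upper-constraints.txt: the U-c bump pins CI to the fixed release.
osc-lib===1.11.1
```

The `!=` and `===` specifiers are standard pip/PEP 440 version matching; only projects that actually hit the removed method need to carry the exclusion.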
We could exclude the bad version in the global list, but I don't think
we need to make that a prerequisite for anything else.

> I think it's fair to accept the U-c bump and block 1.11.0 in
> global-requirements. Then any project that finds they're broken next
> week can just add the exclusion themselves and move on.

Exactly. Our main concern should be about the potential breadth of impact
to our own CI systems, which we can mitigate with the constraints list. In
this case we know we have a version of something we manage that broke
several other things we manage. We should be able to go ahead with the
release and keep an eye on things in case we introduce any new breakages.

Doug

From jungleboyj at gmail.com Tue Aug 14 15:37:59 2018
From: jungleboyj at gmail.com (Jay S Bryant)
Date: Tue, 14 Aug 2018 10:37:59 -0500
Subject: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...
In-Reply-To: 
References: <9b0850aa-2c4d-57c6-5a65-746c28607122@gmail.com>
Message-ID: 

Ben,

I don't fully understand why it was kicking me. I guess one of the
behaviors that is considered suspicious is trying to message a bunch of
nicks at once. I had tried reducing the number of people in my ping but
it still kicked me, so I decided not to risk it again.

Sounds like the moral of the story is: if Sigyn is in the channel, be
careful. :-)

Jay

On 8/13/2018 4:06 PM, Ben Nemec wrote:
>
> On 08/08/2018 12:04 PM, Jay S Bryant wrote:
>> Team,
>>
>> A reminder that we have our weekly Cinder meeting on Wednesdays at
>> 16:00 UTC. I bring this up as I can no longer send the courtesy
>> pings without being kicked from IRC. So, if you wish to join the
>> meeting please add a reminder to your calendar of choice.
>
> Do you have any idea why you're being kicked? I'm wondering how to
> avoid getting into this situation with the Oslo pings.
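For teams hitting the same problem with courtesy pings, one low-tech workaround is to split the nick list across several smaller messages so that no single IRC line mentions a suspicious number of nicks. A sketch in Python — the nick list, the batch size, and the assumption that smaller batches avoid the bot's heuristics are all illustrative:

```python
def ping_batches(nicks, batch_size=5):
    """Yield 'courtesy ping' lines containing at most batch_size nicks each."""
    for i in range(0, len(nicks), batch_size):
        # Join a small slice of the nick list into one ping line.
        yield "courtesy ping: " + ", ".join(nicks[i:i + batch_size])


if __name__ == "__main__":
    # Hypothetical nick list; paste each resulting line separately into IRC.
    nicks = ["alice", "bob", "carol", "dave", "erin", "frank", "grace"]
    for line in ping_batches(nicks, batch_size=3):
        print(line)
```

Sending the batches with a short pause between them may also help, since rate of messages (not just nick count) can factor into anti-spam heuristics.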
From jungleboyj at gmail.com Tue Aug 14 15:38:47 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 14 Aug 2018 10:38:47 -0500 Subject: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ... In-Reply-To: <20180813224432.3az2nrzesubvocql@yuggoth.org> References: <9b0850aa-2c4d-57c6-5a65-746c28607122@gmail.com> <20180813224432.3az2nrzesubvocql@yuggoth.org> Message-ID: <548e6d35-dfd3-5a41-7d01-c30de2342cab@gmail.com> On 8/13/2018 5:44 PM, Jeremy Stanley wrote: > On 2018-08-13 16:29:27 -0500 (-0500), Amy Marrich wrote: >> I know we did a ping last week in #openstack-ansible for our meeting no >> issue. I wonder if it's a length of names thing or a channel setting. > [...] > > Freenode's Sigyn bot may not have been invited to > #openstack-ansible. We might want to consider kicking it from > channels while they have nick registration enforced. > It does seem that we don't really need the monitoring if registration is enforced.  I would be up for doing this. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Aug 14 15:43:21 2018 From: amy at demarco.com (Amy Marrich) Date: Tue, 14 Aug 2018 10:43:21 -0500 Subject: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ... 
In-Reply-To: <20180813224432.3az2nrzesubvocql@yuggoth.org>
References: <9b0850aa-2c4d-57c6-5a65-746c28607122@gmail.com>
 <20180813224432.3az2nrzesubvocql@yuggoth.org>
Message-ID: 

That bot is indeed missing from the channel.

Amy (spotz)

On Mon, Aug 13, 2018 at 5:44 PM, Jeremy Stanley wrote:

> On 2018-08-13 16:29:27 -0500 (-0500), Amy Marrich wrote:
> > I know we did a ping last week in #openstack-ansible for our meeting,
> > no issue. I wonder if it's a length-of-names thing or a channel setting.
> [...]
>
> Freenode's Sigyn bot may not have been invited to
> #openstack-ansible. We might want to consider kicking it from
> channels while they have nick registration enforced.
> --
> Jeremy Stanley
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From prometheanfire at gentoo.org Tue Aug 14 16:13:13 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Tue, 14 Aug 2018 11:13:13 -0500
Subject: [openstack-dev] [releases][requirements][cycle-with-intermediary][cycle-trailing] requirements is going to branch stable/rocky at ~08-15-2018 2100Z
Message-ID: <20180814161313.tvtdg6ife7q3anyf@gentoo.org>

This is to warn and call out all those projects that do not have a
stable/rocky branch yet. If you are in the following list, be aware that
your master branch is testing against the requirements/constraints from
stein, not rocky. Any branching / tests you do will need to keep that
in mind.
ansible-role-container-registry ansible-role-redhat-subscription ansible-role-tripleo-modify-image barbican-tempest-plugin blazar-tempest-plugin cinder-tempest-plugin cloudkitty-dashboard cloudkitty-tempest-plugin cloudkitty congress-tempest-plugin designate-tempest-plugin ec2api-tempest-plugin heat-agents heat-dashboard heat-tempest-plugin instack-undercloud ironic-tempest-plugin karbor-dashboard karbor keystone-tempest-plugin kuryr-kubernetes kuryr-libnetwork kuryr-tempest-plugin magnum-tempest-plugin magnum-ui manila-tempest-plugin mistral-tempest-plugin monasca-kibana-plugin monasca-tempest-plugin murano-tempest-plugin networking-generic-switch-tempest-plugin networking-hyperv neutron-tempest-plugin octavia-tempest-plugin os-apply-config os-collect-config os-net-config os-refresh-config oswin-tempest-plugin paunch python-tricircleclient sahara-tests senlin-tempest-plugin solum-tempest-plugin swift tacker-horizon tacker telemetry-tempest-plugin tempest-tripleo-ui tempest tripleo-common-tempest-plugin tripleo-ipsec tripleo-ui tripleo-validations trove-tempest-plugin vitrage-tempest-plugin watcher-tempest-plugin zaqar-tempest-plugin zun-tempest-plugin zun-ui zun kolla-ansible kolla puppet-aodh puppet-barbican puppet-ceilometer puppet-cinder puppet-cloudkitty puppet-congress puppet-designate puppet-ec2api puppet-freezer puppet-glance puppet-glare puppet-gnocchi puppet-heat puppet-horizon puppet-ironic puppet-keystone puppet-magnum puppet-manila puppet-mistral puppet-monasca puppet-murano puppet-neutron puppet-nova puppet-octavia puppet-openstack_extras puppet-openstacklib puppet-oslo puppet-ovn puppet-panko puppet-qdr puppet-rally puppet-sahara puppet-swift puppet-tacker puppet-tempest puppet-tripleo puppet-trove puppet-vitrage puppet-vswitch puppet-watcher puppet-zaqar python-tripleoclient tripleo-common tripleo-heat-templates tripleo-image-elements tripleo-puppet-elements So please branch :D -- Matthew Thode (prometheanfire) -------------- next part 
-------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jean-philippe at evrard.me Tue Aug 14 16:20:04 2018 From: jean-philippe at evrard.me (jean-philippe at evrard.me) Date: Tue, 14 Aug 2018 18:20:04 +0200 Subject: [openstack-dev] [openstack-ansible][ptg] PTG etherpad planning. Message-ID: <1a9f-5b730100-1f-61dbb300@35945938> Dear community, We are approaching the PTG in Denver, and I want to remind everybody to make sure you are ready for it! I've created the etherpad [1] where you can add any topic you want to discuss, if you haven't done so already. If you are not able to attend the PTG, please mention it on the etherpad, and we'll run the session remotely if we can. Note: There will be the now traditional team photos and dinner too :) Regards, Jean-Philippe Evrard (evrardjp) [1]: https://etherpad.openstack.org/p/osa-stein-ptg From openstack at fried.cc Tue Aug 14 16:29:13 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 14 Aug 2018 11:29:13 -0500 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> Message-ID: Folks- The patch mentioned below [1] has undergone several rounds of review and collaborative revision, and we'd really like to get your feedback on it.
From the commit message: Here are some examples of the debug output: - A request for three resources with no aggregate or trait filters: found 7 providers with available 5 VCPU found 9 providers with available 1024 MEMORY_MB 5 after filtering by previous result found 8 providers with available 1500 DISK_GB 2 after filtering by previous result - The same request, but with a required trait that nobody has, shorts out quickly: found 0 providers after applying required traits filter ({'HW_CPU_X86_AVX2': 65}) - A request for one resource with aggregates and forbidden (but no required) traits: found 2 providers after applying aggregates filter ([['3ed8fb2f-4793-46ee-a55b-fdf42cb392ca']]) found 1 providers after applying forbidden traits filter ({u'CUSTOM_TWO': 201, u'CUSTOM_THREE': 202}) found 3 providers with available 4 VCPU 1 after applying initial aggregate and trait filters Thanks, efried [1] https://review.openstack.org/#/c/590041 > I've created a patch that (hopefully) will address some of the > difficulty that folks have had in diagnosing which parts of a request > caused all providers to be filtered out from the return of GET > /allocation_candidates: > > https://review.openstack.org/#/c/590041 > > This patch changes two primary things: > > 1) Query-splitting > > The patch splits the existing monster SQL query that was being used for > querying for all providers that matched all requested resources, > required traits, forbidden traits and required aggregate associations > into doing multiple queries, one for each requested resource. While this > does increase the number of database queries executed for each call to > GET /allocation_candidates, the changes allow better visibility into > what parts of the request cause an exhaustion of matching providers. 
> We've benchmarked the new patch and have shown the performance impact of > doing 3 queries versus 1 (when there is a request for 3 resources -- > VCPU, RAM and disk) is minimal (a few extra milliseconds for execution > against a DB with 1K providers having inventory of all three resource > classes). > > 2) Diagnostic logging output > > The patch adds debug log output within each loop iteration, so there is > now logging output that shows how many matching providers were found for > each resource class involved in the request. The output looks like this > in the logs: > > [req-2d30faa8-4190-4490-a91e-610045530140] inside VCPU request loop. > before applying trait and aggregate filters, found 12 matching providers > [req-2d30faa8-4190-4490-a91e-610045530140] found 12 providers with > capacity for the requested 1 VCPU. > [req-2d30faa8-4190-4490-a91e-610045530140] inside MEMORY_MB request > loop. before applying trait and aggregate filters, found 9 matching > providers [req-2d30faa8-4190-4490-a91e-610045530140] found 9 providers > with capacity for the requested 64 MEMORY_MB. before loop iteration we > had 12 matches. [req-2d30faa8-4190-4490-a91e-610045530140] > RequestGroup(use_same_provider=False, resources={MEMORY_MB:64, VCPU:1}, > traits=[], aggregates=[]) (suffix '') returned 9 matches > > If a request includes required traits, forbidden traits or required > aggregate associations, there are additional log messages showing how > many matching providers were found after applying the trait or aggregate > filtering set operation (in other words, the log output shows the impact > of the trait filter or aggregate filter in much the same way that the > existing FilterScheduler logging shows the "before and after" impact > that a particular filter had on a request process). > > Have a look at the patch in question and please feel free to add your > feedback and comments on ways this can be improved to meet your needs.
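The per-resource "filter by previous result" flow described above can be sketched in plain Python. This is illustrative only: `INVENTORY`, `find_providers`, and `allocation_candidates` are hypothetical stand-ins, since the real placement code does this filtering with SQL queries against the providers database.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")
LOG = logging.getLogger("placement-sketch")

# Hypothetical data: resource class -> {provider: available amount}.
INVENTORY = {
    "VCPU": {"rp1": 8, "rp2": 4, "rp3": 16},
    "MEMORY_MB": {"rp1": 2048, "rp3": 512, "rp4": 4096},
    "DISK_GB": {"rp1": 100, "rp3": 2000},
}

def find_providers(resource_class, amount):
    """One 'query' per resource class: providers with enough capacity."""
    return {rp for rp, avail in INVENTORY.get(resource_class, {}).items()
            if avail >= amount}

def allocation_candidates(requested):
    """Intersect the per-resource results, logging each step so the log
    shows exactly which resource class exhausted the candidate set."""
    matching = None
    for rc, amount in requested.items():
        found = find_providers(rc, amount)
        LOG.debug("found %d providers with available %d %s",
                  len(found), amount, rc)
        if matching is None:
            matching = found
        else:
            matching &= found
            LOG.debug("%d after filtering by previous result", len(matching))
        if not matching:
            break  # short out: nothing left, later queries are pointless
    return matching or set()

print(sorted(allocation_candidates(
    {"VCPU": 4, "MEMORY_MB": 1024, "DISK_GB": 50})))
```

With the sample inventory this prints `['rp1']`, preceded by debug lines in the same spirit as the patch's output; the intersection order is what lets a reader see where candidates were exhausted.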
> > Best, > -jay > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tobias.urdin at binero.se Tue Aug 14 16:33:11 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Tue, 14 Aug 2018 18:33:11 +0200 Subject: [openstack-dev] [puppet] migrating to storyboard Message-ID: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> Hello all incredible Puppeters, I've tested setting up a Storyboard instance and test-migrated puppet-ceph; it went smoothly using the documentation [1] [2], with just one minor issue during the SB setup [3]. My goal is that we will be able to swap to Storyboard during the Stein cycle, but considering that we have low activity on bugs, my opinion is that we could do this swap very easily anytime soon as long as everybody is in favor of it. Please let me know what you think about moving to Storyboard? If everybody is in favor of it we can request a migration to infra according to documentation [2].
I will continue to test the import of all our projects while people are collecting their thoughts and feedback :) Best regards Tobias [1] https://docs.openstack.org/infra/storyboard/install/development.html [2] https://docs.openstack.org/infra/storyboard/migration.html [3] It failed with an error about launchpadlib not being installed, solved with `tox -e venv pip install launchpadlib` From aschultz at redhat.com Tue Aug 14 16:35:59 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 14 Aug 2018 10:35:59 -0600 Subject: [openstack-dev] [releases][requirements][cycle-with-intermediary][cycle-trailing] requirements is going to branch stable/rocky at ~08-15-2018 2100Z In-Reply-To: <20180814161313.tvtdg6ife7q3anyf@gentoo.org> References: <20180814161313.tvtdg6ife7q3anyf@gentoo.org> Message-ID: On Tue, Aug 14, 2018 at 10:13 AM, Matthew Thode wrote: .. snip .. > ansible-role-container-registry > ansible-role-redhat-subscription > ansible-role-tripleo-modify-image > instack-undercloud > os-apply-config > os-collect-config > os-net-config > os-refresh-config > paunch > python-tricircleclient > tripleo-common-tempest-plugin > tripleo-ipsec > tripleo-ui > tripleo-validations > puppet-tripleo > python-tripleoclient > tripleo-common > tripleo-heat-templates > tripleo-image-elements > tripleo-puppet-elements From a tripleo aspect, we're aware and will likely branch the client at the end of the week and others soonish, but we're dependent on packaging for the most part. We'll keep an eye on breakages due to any requirement changes.
Thanks for the heads up > puppet-aodh > puppet-barbican > puppet-ceilometer > puppet-cinder > puppet-cloudkitty > puppet-congress > puppet-designate > puppet-ec2api > puppet-freezer > puppet-glance > puppet-glare > puppet-gnocchi > puppet-heat > puppet-horizon > puppet-ironic > puppet-keystone > puppet-magnum > puppet-manila > puppet-mistral > puppet-monasca > puppet-murano > puppet-neutron > puppet-nova > puppet-octavia > puppet-openstack_extras > puppet-openstacklib > puppet-oslo > puppet-ovn > puppet-panko > puppet-qdr > puppet-rally > puppet-sahara > puppet-swift > puppet-tacker > puppet-tempest > puppet-trove > puppet-vitrage > puppet-vswitch > puppet-watcher > puppet-zaqar puppet-* are fine as we rely on packaging, and requirements bits are only for docs. > -- > Matthew Thode (prometheanfire) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at nemebean.com Tue Aug 14 16:44:44 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 14 Aug 2018 11:44:44 -0500 Subject: [openstack-dev] [barbican][oslo][release] FFE request for castellan In-Reply-To: <1533914109.23178.37.camel@redhat.com> References: <1533914109.23178.37.camel@redhat.com> Message-ID: On 08/10/2018 10:15 AM, Ade Lee wrote: > Hi all, > > I'd like to request a feature freeze exception to get the following > change in for castellan. > > https://review.openstack.org/#/c/575800/ > > This extends the functionality of the vault backend to provide > previously unimplemented functionality, so it should not break anyone. > > The castellan vault plugin is used behind barbican in the barbican- > vault plugin.
We'd like to get this change into Rocky so that we can > release Barbican with complete functionality on this backend (along > with a complete set of passing functional tests). This does seem fairly low risk since it's just implementing a function that previously raised a NotImplemented exception. However, with it being so late in the cycle I think we need the release team's input on whether this is possible. Most of the release FFE's I've seen have been for critical bugs, not actual new features. I've added that tag to this thread so hopefully they can weigh in. From openstack at nemebean.com Tue Aug 14 16:49:07 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 14 Aug 2018 11:49:07 -0500 Subject: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ... In-Reply-To: References: <9b0850aa-2c4d-57c6-5a65-746c28607122@gmail.com> Message-ID: Okay, thanks. There's no Sigyn in openstack-oslo so I think we're good. :-) On 08/14/2018 10:37 AM, Jay S Bryant wrote: > Ben, > > Don't fully understand why it was kicking me.  I guess one of the > behaviors that is considered suspicious is trying to message a bunch of > nicks at once.  I had tried reducing the number of people in my ping but > it still kicked me and so I decided to not risk it again. > > Sounds like the moral of the story is if sigyn is in the channel, be > careful.  :-) > > Jay > > > On 8/13/2018 4:06 PM, Ben Nemec wrote: >> >> >> On 08/08/2018 12:04 PM, Jay S Bryant wrote: >>> Team, >>> >>> A reminder that we have our weekly Cinder meeting on Wednesdays at >>> 16:00 UTC.  I bring this up as I can no longer send the courtesy >>> pings without being kicked from IRC.  So, if you wish to join the >>> meeting please add a reminder to your calendar of choice. >> >> Do you have any idea why you're being kicked?  I'm wondering how to >> avoid getting into this situation with the Oslo pings. 
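Until the bot behavior is sorted out, one low-tech workaround for the kicked-while-pinging problem is to split a courtesy ping into several smaller messages rather than one line naming everyone. This is a sketch only: the exact threshold that trips Sigyn is undocumented, so the batch size of 6 below is an arbitrary assumption, and the nicks are hypothetical.

```python
def chunk_nicks(nicks, size=6):
    """Split the nick list into batches so no single message pings
    more than `size` people at once."""
    return [nicks[i:i + size] for i in range(0, len(nicks), size)]

def courtesy_ping_lines(nicks, reminder="courtesy reminder: weekly meeting now"):
    """Render one IRC line per batch instead of one huge ping."""
    return ["%s: %s" % (", ".join(batch), reminder)
            for batch in chunk_nicks(nicks)]

# Hypothetical nick list; in practice you would also pause briefly
# between sending each line to avoid rate-limit heuristics.
for line in courtesy_ping_lines(
        ["alice", "bob", "carol", "dave", "erin",
         "frank", "grace", "heidi"]):
    print(line)
```

With eight nicks and a batch size of 6 this emits two lines instead of one ping naming everyone at once.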
> From jillr at redhat.com Tue Aug 14 17:51:53 2018 From: jillr at redhat.com (Jill Rouleau) Date: Tue, 14 Aug 2018 10:51:53 -0700 Subject: [openstack-dev] [tripleo] ansible roles in tripleo Message-ID: <1534269113.6400.11.camel@redhat.com> Hey folks, Like Alex mentioned[0] earlier, we've created a bunch of ansible roles for tripleo specific bits.  The idea is to start putting some basic cookiecutter type things in them to get things started, then move some low-hanging fruit out of tripleo-heat-templates and into the appropriate roles.  For example, docker/services/keystone.yaml could have upgrade_tasks and fast_forward_upgrade_tasks moved into ansible-role-tripleo-keystone/tasks/(upgrade.yml|fast_forward_upgrade.yml), and the t-h-t updated to  include_role: ansible-role-tripleo-keystone    tasks_from: upgrade.yml  without having to modify any puppet or heat directives. This would let us define some patterns for implementing these tripleo roles during Stein while looking at how we can make use of ansible for things like core config. t-h-t and config-download will still drive the vast majority of playbook creation for now, but for new playbooks (such as for operations tasks) tripleo-ansible[1] would be our project directory. So in addition to the larger conversation about how deployers can start to standardize how we're all using ansible, I'd like to also have a tripleo-specific conversation at PTG on how we can break out some of our ansible that's currently embedded in t-h-t into more modular and flexible roles. Cheers, Jill [0] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133119.html [1] https://git.openstack.org/cgit/openstack/tripleo-ansible/tree/ -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From ekcs.openstack at gmail.com Tue Aug 14 17:56:19 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 14 Aug 2018 10:56:19 -0700 Subject: [openstack-dev] [requirements][heat][congress] gabbi<1.42.1 causing error in queens dsvm In-Reply-To: <20180814060311.GC6001@thor.bakeyournoodle.com> References: <20180814060311.GC6001@thor.bakeyournoodle.com> Message-ID: On 8/13/18, 11:03 PM, "Tony Breeds" wrote: >On Mon, Aug 13, 2018 at 04:10:30PM -0700, Eric K wrote: >> It appears that gabbi<1.42.1 is causing an error with heat tempest >> plugin in congress stable/queens dsvm job [1][2][3]. The issue was >> addressed in heat tempest plugin [4], but the problem remains for >> stable/queens jobs because the queens upper-constraint is still at >> 1.40.0 [5]. >> >> Any suggestions on how to proceed? Thank you! > >https://review.openstack.org/591561 should fix it. You can create a >no-op test that: > >Depends-On: https://review.openstack.org/591561 > >to verify it works. Doing so and reporting the change ID would be >really helpful. > >Yours Tony.
Here's a no-op test: https://review.openstack.org/#/c/591805/ >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ekcs.openstack at gmail.com Tue Aug 14 18:20:44 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 14 Aug 2018 11:20:44 -0700 Subject: [openstack-dev] [requirements][heat][congress] gabbi<1.42.1 causing error in queens dsvm In-Reply-To: References: Message-ID: From: Rabi Mishra Date: Monday, August 13, 2018 at 10:10 PM > On Tue, Aug 14, 2018 at 4:40 AM, Eric K wrote: >> It appears that gabbi<1.42.1 is causing an error with heat tempest >> plugin in congress stable/queens dsvm job [1][2][3]. > I wonder why you're enabling heat-tempest-plugin in the first place? I see a > number of tempest plugins enabled. However, you don't seem to gate on the > tests in those plugins[1]. > > [1] > https://github.com/openstack/congress/blob/master/playbooks/legacy/congress-devstack-api-base/run.yaml#L61 Hi Rabi, When folks worked on transitioning from in-tree tempest tests to a separate plugin, it seemed as though enabling these plugins was necessary to get some of the skip checks like this to succeed: https://github.com/openstack/congress-tempest-plugin/blob/master/congress_tempest_plugin/tests/scenario/congress_datasources/test_aodh.py#L32 But maybe that's not the case and there's a better way. Would be great to shed these extra tempest plugins actually.
>> >> [1] https://bugs.launchpad.net/heat-tempest-plugin/+bug/1749218 >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1609361 >> [3] >> http://logs.openstack.org/41/567941/2/check/congress-devstack-api-mysql/c232d8a/job-output.txt.gz#_2018-08-13_11_46_28_441837 >> [4] https://review.openstack.org/#/c/544025/ >> [5] >> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L245 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Regards, > Rabi Mishra > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Aug 14 18:56:34 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 14 Aug 2018 13:56:34 -0500 Subject: [openstack-dev] [barbican][oslo][release] FFE request for castellan In-Reply-To: References: <1533914109.23178.37.camel@redhat.com> Message-ID: <20180814185634.GA26658@sm-workstation> > On 08/10/2018 10:15 AM, Ade Lee wrote: > > Hi all, > > > > I'd like to request a feature freeze exception to get the following > > change in for castellan. > > > > https://review.openstack.org/#/c/575800/ > > > > This extends the functionality of the vault backend to provide > > previously unimplemented functionality, so it should not break anyone. > > > > The castellan vault plugin is used behind barbican in the barbican- > > vault plugin.
We'd like to get this change into Rocky so that we can > > release Barbican with complete functionality on this backend (along > > with a complete set of passing functional tests). > > This does seem fairly low risk since it's just implementing a function that > previously raised a NotImplemented exception. However, with it being so > late in the cycle I think we need the release team's input on whether this > is possible. Most of the release FFEs I've seen have been for critical > bugs, not actual new features. I've added that tag to this thread so > hopefully they can weigh in. > As far as releases go, this should be fine. If this doesn't affect any other projects and would just be a late-merging feature, as long as the castellan team has considered the risk of adding code so late and is comfortable with that, this is OK. Castellan follows the cycle-with-intermediary release model, so the final Rocky release just needs to be done by next Thursday. I do see the stable/rocky branch has already been created for this repo, so it would need to merge to master first (technically stein), then get cherry-picked to stable/rocky. From sean.mcginnis at gmx.com Tue Aug 14 18:59:08 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 14 Aug 2018 13:59:08 -0500 Subject: [openstack-dev] [releases][requirements][cycle-with-intermediary][cycle-trailing] requirements is going to branch stable/rocky at ~08-15-2018 2100Z In-Reply-To: <20180814161313.tvtdg6ife7q3anyf@gentoo.org> References: <20180814161313.tvtdg6ife7q3anyf@gentoo.org> Message-ID: <20180814185907.GB26658@sm-workstation> On Tue, Aug 14, 2018 at 11:13:13AM -0500, Matthew Thode wrote: > This is to warn and call out all those projects that do not have a > stable/rocky branch yet. > > If you are in the following list, your project will need to realize that > your master is testing against the requirements/constraints from stein, > not rocky. Any branching / tests you do will need to keep that in mind.
> > ansible-role-container-registry > ansible-role-redhat-subscription > ansible-role-tripleo-modify-image > barbican-tempest-plugin > blazar-tempest-plugin > cinder-tempest-plugin Point of clarification - *-tempest-plugin repos do not get branched. Releases should be done to track the version for rocky, but stable branches are not created. From kennelson11 at gmail.com Tue Aug 14 19:03:41 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 14 Aug 2018 12:03:41 -0700 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> Message-ID: Hello! The error you hit can be resolved by adding launchpadlib to your tox.ini if I recall correctly. Also, if you'd like, I can run a test migration of puppet's launchpad projects into our storyboard-dev db (where I've done a ton of other test migrations) if you want to see how it looks/works with a larger db. Just let me know and I can kick it off. As for a time to migrate, if you all are good with it, we usually schedule for Fridays so there is even less activity. It's a small project config change and then we just need an infra core to kick off the script once the change merges. -Kendall (diablo_rojo) On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin wrote: > Hello all incredible Puppeters, > > I've tested setting up a Storyboard instance and test-migrated > puppet-ceph; it went smoothly using the documentation > [1] [2], > with just one minor issue during the SB setup [3]. > > My goal is that we will be able to swap to Storyboard during the Stein > cycle, but considering that we have low activity on > bugs, my opinion is that we could do this swap very easily anytime soon > as long as everybody is in favor of it. > > Please let me know what you think about moving to Storyboard? > If everybody is in favor of it we can request a migration to infra > according to documentation [2].
> > I will continue to test the import of all our projects while people are > collecting their thoughts and feedback :) > > Best regards > Tobias > > [1] https://docs.openstack.org/infra/storyboard/install/development.html > [2] https://docs.openstack.org/infra/storyboard/migration.html > [3] It failed with an error about launchpadlib not being installed, > solved with `tox -e venv pip install launchpadlib` > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Tue Aug 14 19:10:00 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Tue, 14 Aug 2018 15:10:00 -0400 Subject: [openstack-dev] Speaker Selection Process: OpenStack Summit Berlin In-Reply-To: <5B718D3F.9030202@openstack.org> References: <5B718D3F.9030202@openstack.org> Message-ID: On Mon, Aug 13, 2018 at 9:53 AM Jimmy McArthur wrote: > Greetings! > > The speakers for the OpenStack Summit Berlin will be announced August 14, > at 4:00 AM UTC. Ahead of that, we want to take this opportunity to thank > our Programming Committee! They have once again taken time out of their > busy schedules to help create another round of outstanding content for the > OpenStack Summit. > > The OpenStack Foundation relies on the community-nominated Programming > Committee, along with your Community Votes to select the content of the > summit. If you're curious about this process, you can read more about it > here > > where we have also listed the Programming Committee members. > Hi, I particularly want to know the process of selecting the Programming Committee for each track.
You mentioned that the Programming Committee is "community-nominated" but there is no information about how to select among the list of "community-nominated" candidates. If there are multiple candidates self-nominating themselves, how do we ensure the process of selecting Programming Committee members is fair and transparent? I ask because I have the impression that the selection of Programming Committee members has a significant impact on the final selected content of the summit, and the final selected content in turn affects the visibility of each OpenStack official project. It also appears that if a specific project is not well represented on its relevant track of the Programming Committee, none of the project-related talks was selected. I wonder whether the selection process is fair and gives visibility to each official project? > > If you'd like to nominate yourself or someone you know for the OpenStack > Summit Denver Programming Committee, you can do so here: > > https://openstackfoundation.formstack.com/forms/openstackdenver2019_programmingcommitteenom > > Thanks a bunch and we look forward to seeing everyone in Berlin! > > Cheers, > Jimmy > > > > > * > * > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Aug 14 19:36:38 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 14 Aug 2018 14:36:38 -0500 Subject: [openstack-dev] Speaker Selection Process: OpenStack Summit Berlin In-Reply-To: References: <5B718D3F.9030202@openstack.org> Message-ID: <5B732F46.5080702@openstack.org> Hi Hongbin, Please see below for some general notes...
Hongbin Lu wrote: > > > On Mon, Aug 13, 2018 at 9:53 AM Jimmy McArthur > wrote: > > Greetings! > > The speakers for the OpenStack Summit Berlin will be announced > August 14, at 4:00 AM UTC. Ahead of that, we want to take this > opportunity to thank our Programming Committee! They have once > again taken time out of their busy schedules to help create > another round of outstanding content for the OpenStack Summit. > > The OpenStack Foundation relies on the community-nominated > Programming Committee, along with your Community Votes to select > the content of the summit. If you're curious about this process, > you can read more about it here > > where we have also listed the Programming Committee members. > > > Hi, I particularly want to know the process of selecting the Programming > Committee for each track. You mentioned that the Programming Committee > is "community-nominated" but there is no information about how to > select among the list of "community-nominated" candidates. If there > are multiple candidates self-nominating themselves, how to ensure the > process of selecting Programming Committee members is fair and > transparent? We announce public track chair nominations, but it's true that we haven't published the nominees in the past. Moving forward, we'll publish those to the MLs. We do publish the selected track chairs as we did this year for Berlin. These are selected from a pool of volunteers (self-nominated or otherwise) and, for the most part, we do not have a large pool of people to choose from. Programming Committee work can be a bit arduous and that's why we're so thankful for the hard work our volunteers put in each Summit. We literally couldn't do it without them.
If you feel you could help with the process, I encourage you to volunteer for the Denver 2019 Summit Programming Committee: https://openstackfoundation.formstack.com/forms/openstackdenver2019_programmingcommitteenom > > I asked that because I have an impression that the selection of > Programming Committee members has a significant impact on the final > selected content of the summit, and the final selected content has > impacts on the visibility of each OpenStack official project. It also > appears that if a specific project is not well represented on its > relevant track of the Programming Committee, none of the > project-related talks was selected. I would wonder if the selection > process is fair and gives visibility to each official project? Every project at the summit has the opportunity to submit both a Project Update and a Project Onboarding session, outside of the CFP. If a community project wants that space, it's guaranteed. In addition to the CFP, the Forum presents an additional opportunity for projects to speak to the community. On top of that, we encourage diversity in selection of all presentations for the CFP, including but not limited to diversity in corporate affiliation, gender, project, etc... Beyond that, our track chairs have the discretion to select the best content from a field of high quality submissions. Finally, if you feel a project is underrepresented in the community, we strongly encourage you to reach out to local user groups and to present your content there: https://openstackfoundation.formstack.com/forms/user_group_presentation_form This is great both for community outreach and as an opportunity to help gain support for a project, SIG, etc...
Cheers, Jimmy > > > If you'd like to nominate yourself or someone you know for the > OpenStack Summit Denver Programming Committee, you can do so here: * > *https://openstackfoundation.formstack.com/forms/openstackdenver2019_programmingcommitteenom > > Thanks a bunch and we look forward to seeing everyone in Berlin! > > Cheers, > Jimmy > * > > > > * > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Aug 14 20:22:16 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 14 Aug 2018 16:22:16 -0400 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <1534199243-sup-5500@lrrr.local> References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> <20180813211730.cze4vpknwncpqg3b@gentoo.org> <1534199243-sup-5500@lrrr.local> Message-ID: <1534278107-sup-8825@lrrr.local> Excerpts from Doug Hellmann's message of 2018-08-13 18:30:02 -0400: > Excerpts from Matthew Thode's message of 2018-08-13 16:17:30 -0500: > > On 18-08-13 20:28:23, Andreas Jaeger wrote: > > > On 2018-08-13 19:16, Andreas Jaeger wrote: > > > > On 2018-08-13 18:40, Petr Kovar wrote: > > > > > Hi all, > > > > > > > > > > This is a request for an FFE to release openstackdocstheme 1.21.2. 
> > > > > > > > > > > > > This mostly fixes usability issues in rendering docs content, so we would > > > > > like to update the theme across all project team docs on docs.o.o. > > > > > > > > I suggest to release quickly a 1.21.3 with > > > > https://review.openstack.org/#/c/585517/ - and use that one instead. > > > > > > Release request: > > > https://review.openstack.org/591485 > > > > > > > Would this be a upper-constraint only bump? > > > > If so reqs acks it > > We need, eventually, to retrigger documentation builds in all of the > open branches using this new version of the theme so we can inject the > status info into those old pages. We will have an opportunity to trigger > those builds when we move the zuul configuration into each repo as part > of the python3 goal this cycle. > > So, for now we need to update the constraints list on master. But we > also need to work quickly to update the constraints lists in the open > branches so that is done before we approve the goal-related changes in > those branches. > > Doug Now that https://review.openstack.org/#/c/591671/ has landed, we need someone to propose the backports of the constraint updates to all of the existing stable branches. Doug From opensrloo at gmail.com Tue Aug 14 20:38:48 2018 From: opensrloo at gmail.com (Ruby Loo) Date: Tue, 14 Aug 2018 16:38:48 -0400 Subject: [openstack-dev] [ironic] ironic-staging-drivers: what to do? In-Reply-To: References: Message-ID: Hi Julia, Thanks for bringing this up. On Mon, Aug 13, 2018, 2:41 PM Julia Kreger, wrote: > Greetings fellow ironicans! > > As many of you might know an openstack/ironic-staging-drivers[1] > repository exists. What most might not know is that it was > intentionally created outside of ironic's governance[2]. > > At the time it was created ironic was moving towards removing drivers > that did not meet our third-party CI requirement[3] to be in-tree. 
The > repository was an attempt to give a home to what some might find > useful or where third party CI is impractical or cost-prohibitive and > thus could not be officially part of Ironic the service. There was > hope that drivers could land in ironic-staging-drivers and possibly > graduate to being moved in-tree with third-party CI. As our community > has evolved we've not stopped and revisited the questions. > > With our most recent release over, I believe we need to ask ourselves > if we should consider moving ironic-staging-drivers into our > governance. > > Over the last couple of releases several contributors have found > themselves trying to seek out two available reviewers to merge even > trivial fixes[4]. Due to the team being so small this was no easy > task. As a result, I'm wondering why not move the repository into > governance, grant ironic-core review privileges upon the repository, > and maintain the purpose and meaning of the repository. This would > also result in the repository's release becoming managed via the > release management process which is a plus. > If I understand, it seems like the main issue is lack of reviewers. As mentioned by others, I would not be opposed to adding existing ironic cores to this repo. Whether folks review is a different question. We could then propose an actual graduation process and help alleviate > some of the issues where driver code is iterated upon for long periods > of time before landing. At the same time I can see at least one issue > which is if we were to do that, then we would also need to manage > removal through the same path. > I am not sure I see any advantages to this. 
The ansible driver was in the staging repo for awhile before it went into ironic so we know that is do-able :) > I know there are concerns over responsibility in terms of code > ownership and quality, but I feel like we already hit such issues[5], > like those encountered when Dmitry removed classic drivers[6] from the > repository and also encountered issues just prior to the latest > release[7][8]. > I don't mind making changes or reviewing changes to this repo, especially if there are unit tests. However, that is the most responsibility I am comfortable having with this repo. Right now, I don't see any good reasons for putting it under the ironic governance. I am, of course, open to being convinced otherwise! --ruby > This topic has come up in passing at PTGs and most recently on IRC[9], > and I think we ought to discuss it during our next weekly meeting[10]. > I've gone ahead and added an item to the agenda, but we can also > discuss via email. > -Julia > > [1]: > http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/projects.yaml#n4571 > [2]: > http://git.openstack.org/cgit/openstack/ironic-staging-drivers/tree/README.rst#n16 > [3]: > https://specs.openstack.org/openstack/ironic-specs/specs/approved/third-party-ci.html > [4]: https://review.openstack.org/#/c/548943/ > [5]: https://review.openstack.org/#/c/541916/ > [6]: https://review.openstack.org/567902 > [7]: https://review.openstack.org/590352 > [8]: https://review.openstack.org/590401 > [9]: > http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2018-08-09.log.html#t2018-08-09T11:55:27 > [10]: > https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Aug 14 20:45:08 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 14 Aug 2018 16:45:08 -0400 Subject: [openstack-dev] [oslo][mox][python3][goal] need help with mox3 and python 3.6 Message-ID: <1534279388-sup-8936@lrrr.local> The python 3.6 unit test job has exposed an issue with mox3. It looks like it might just be in the test suite, but I can't tell. I'm looking for one of the folks who suggested we should just keep maintaining mox3 to help fix it. Please go ahead and take over the relevant patch and include whatever changes are needed. https://review.openstack.org/#/c/589591/ Doug From zbitter at redhat.com Tue Aug 14 21:26:09 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 14 Aug 2018 16:26:09 -0500 Subject: [openstack-dev] [oslo][mox][python3][goal] need help with mox3 and python 3.6 In-Reply-To: <1534279388-sup-8936@lrrr.local> References: <1534279388-sup-8936@lrrr.local> Message-ID: <5b550e80-0936-e369-2cf8-88ab940fa39e@redhat.com> On 14/08/18 15:45, Doug Hellmann wrote: > The python 3.6 unit test job has exposed an issue with mox3. It looks > like it might just be in the test suite, but I can't tell. > > I'm looking for one of the folks who suggested we should just keep > maintaining mox3 to help fix it. Please go ahead and take over the > relevant patch and include whatever changes are needed. > > https://review.openstack.org/#/c/589591/ I'm not one of those people (and I'm not oblivious to the fact that this was a not-especially-subtly coded message for mriedem), but I fixed it. Please don't make me a maintainer now ;) cheers, Zane. 
From ekcs.openstack at gmail.com Tue Aug 14 21:40:57 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 14 Aug 2018 14:40:57 -0700 Subject: [openstack-dev] [tempest][qa][congress] tempest test conditioning on release version Message-ID: Anyone have an example handy of a tempest test conditioning on service release version (because new features not available in past versions)? Seems like it could get pretty messy and haphazard, so I'm curious to see best practices. Thanks lots! Eric Kao From doug at doughellmann.com Tue Aug 14 21:43:53 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 14 Aug 2018 17:43:53 -0400 Subject: [openstack-dev] [oslo][mox][python3][goal] need help with mox3 and python 3.6 In-Reply-To: <5b550e80-0936-e369-2cf8-88ab940fa39e@redhat.com> References: <1534279388-sup-8936@lrrr.local> <5b550e80-0936-e369-2cf8-88ab940fa39e@redhat.com> Message-ID: <1534282988-sup-688@lrrr.local> Excerpts from Zane Bitter's message of 2018-08-14 16:26:09 -0500: > On 14/08/18 15:45, Doug Hellmann wrote: > > The python 3.6 unit test job has exposed an issue with mox3. It looks > > like it might just be in the test suite, but I can't tell. > > > > I'm looking for one of the folks who suggested we should just keep > > maintaining mox3 to help fix it. Please go ahead and take over the > > relevant patch and include whatever changes are needed. > > > > https://review.openstack.org/#/c/589591/ > > I'm not one of those people (and I'm not oblivious to the fact that this > was a not-especially-subtly coded message for mriedem), but I fixed it. > > Please don't make me a maintainer now ;) > > cheers, > Zane. > I'm not a maintainer myself, but if I *was* a maintainer your behavior here would clearly need to be answered by adding you to that team. So, watch yourself. ;-) And thank you. 
Doug From matt at oliver.net.au Wed Aug 15 00:30:31 2018 From: matt at oliver.net.au (Matthew Oliver) Date: Wed, 15 Aug 2018 10:30:31 +1000 Subject: [openstack-dev] [First Contact] [SIG] [PTL] Project Liaisons - Please update list Message-ID: Greetings new and continuing PTLs! Now that the PTL elections are over may I ask those of you who haven't done so already to head on over to the First Contact SIG Project Liaison list [0] and make sure your details, especially timezone, have been filled out? We of the First Contact SIG are striving to provide a place for new contributors to come and get help, advice, and to better get connected to our awesome online community. But we can only do that if we have coverage of projects. As Kendall mentioned in a previous email [1], unless the liaison for a project is filled by a volunteer, or better yet multiple volunteers, the liaison for a project will default to the PTL. Please update your details in the list and provide your timezone, this will not only let us know you've looked, but also shows us that you are engaged. What do liaisons do? You'll be the point of call for new contributors. This could be a SIG member adding you to a gerrit review of a new contributor to help engage and show some review love, or being introduced to new contributors to your project via email or IRC. Why is your timezone important? Because new contributors are coming from all around the globe and ideally we'd love to have not only project coverage but timezone as well. This way we can get support and encouragement to new contributors even in timezones that aren't covered. For example, I'm in Australia, so I'm available in the APAC timezone. I can reach out and help a new contributor even if they're interested in a project I'm not familiar with, we are all one team, and their problems may be generic enough I can help. If not, I can point them to a liaison that can, hopefully in a closer timezone, but if not at least get them connected via email. 
As a SIG we are trying to help support new contributors, get them engaged, and lower the bar to entry. Especially keeping their excitement up when they get their first few patches in. Please partner with us and let's grow the community. Regards, Matt [0] - https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons [1] - http://lists.openstack.org/pipermail/openstack-dev/2018-June/131264.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Wed Aug 15 00:37:18 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 14 Aug 2018 17:37:18 -0700 Subject: [openstack-dev] [tempest][qa][congress] help with tempest plugin jobs against stable/queens Message-ID: I'm adding jobs [1] to the tempest plugin to run tests against congress stable/queens. The job output seems to show stable/queens getting checked out [2], but I know the test is *not* run against queens because it's using features not available in queens. The expected result is for several tests to fail as seen here [3]. All hints and tips much appreciated! [1] https://review.openstack.org/#/c/591861/1 [2] http://logs.openstack.org/61/591861/1/check/congress-devstack-api-mysql-queens/f7b5752/job-output.txt.gz#_2018-08-14_22_30_36_899501 [3] https://review.openstack.org/#/c/591805/ (the depends-on is irrelevant because that patch has been merged) From gmann at ghanshyammann.com Wed Aug 15 03:59:41 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 15 Aug 2018 12:59:41 +0900 Subject: [openstack-dev] [tempest][qa][congress] tempest test conditioning on release version In-Reply-To: References: Message-ID: <1653bbd3b63.e17ebae814465.6079495611933806399@ghanshyammann.com> ---- On Wed, 15 Aug 2018 06:40:57 +0900 Eric K wrote ---- > Anyone have an example handy of a tempest test conditioning on service > release version (because new features not available in past versions)? 
> Seems like it could get pretty messy and haphazard, so I'm curious to > see best practices. Thanks lots! Thanks Eric for the query. We do this many times in Tempest, and a similar approach can be adopted by tempest plugins. There are 2 ways we can handle this- 1. Using a feature flag. Tempest documentation is here [1]. Step1- This is simply adding a config option (feature flag) for the new/old feature. Example- https://review.openstack.org/#/c/545627/ https://github.com/openstack/tempest/blob/6a8d495192632fd18dce4baf1a4b213f401a0167/tempest/config.py#L242 Step2- Based on that flag you can skip the tests where that feature is not available. Example- https://github.com/openstack/tempest/blob/d5058a8a9c8c1c5383699d04296087b6d5a24efd/tempest/api/identity/base.py#L315 Step3- For the gate, the devstack plugin on the project side (congress in your case [2]), which is branch aware, can set that flag to true or false based on which branch the tests are running on. For tempest we do the same from devstack/lib/tempest Example - https://review.openstack.org/#/c/545680/ https://github.com/openstack-dev/devstack/blob/8c1052001629d62f001d04c182500fa293858f47/lib/tempest#L308 Step4- For cloud testing (non-gate), testers can manually configure those flags based on what service version they are testing. 2. Detecting the service version via the version API - If you can get the service version info from the API then you can use that while skipping the tests. - One example is for compute where, based on microversion, it can be detected which release the test is running against. 
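To make option 1 concrete, here is a rough, self-contained sketch of the skip-on-feature-flag pattern. The option group, flag name, and test class below are illustrative stand-ins, not the real congress plugin code — an actual Tempest plugin registers oslo.config options and reads them through CONF:

```python
import unittest

# Illustrative stand-in for tempest.config CONF: in a real plugin this
# would be an oslo.config option group such as *-feature-enabled.
CONF = {"congress_feature_enabled": {"new_policy_api": False}}

class NewPolicyApiTest(unittest.TestCase):
    """Step 2: skip when the deployed release lacks the feature."""

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        if not CONF["congress_feature_enabled"]["new_policy_api"]:
            raise unittest.SkipTest("new policy API not available")

    def test_new_policy_api(self):
        # Placeholder for the real feature test.
        self.assertTrue(True)

def run():
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(NewPolicyApiTest)
    return unittest.TextTestRunner(verbosity=0).run(suite)

if __name__ == "__main__":
    result = run()
    print("skipped entries:", len(result.skipped))
```

On a stable/queens job the devstack plugin would leave the flag at false (step 3), so the class skips cleanly instead of failing.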
- Example- https://github.com/openstack/tempest/blob/d5058a8a9c8c1c5383699d04296087b6d5a24efd/tempest/api/compute/base.py#L114 [1] https://docs.openstack.org/tempest/latest/HACKING.html#branchless-tempest-considerations [2] https://github.com/openstack/congress/blob/014361c809517661264d0364eaf1e261e449ea80/devstack/plugin.sh#L88 > > Eric Kao > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Wed Aug 15 04:34:35 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 15 Aug 2018 13:34:35 +0900 Subject: [openstack-dev] [tempest][qa][congress] help with tempest plugin jobs against stable/queens In-Reply-To: References: Message-ID: <1653bdd2b84.b757c4d214571.1609333555586743410@ghanshyammann.com> ---- On Wed, 15 Aug 2018 09:37:18 +0900 Eric K wrote ---- > I'm adding jobs [1] to the tempest plugin to run tests against > congress stable/queens. The job output seems to show stable/queens > getting checked out [2], but I know the test is *not* run against > queens because it's using features not available in queens. The > expected result is for several tests to fail as seen here [3]. All > hints and tips much appreciated! You are doing it the right way with 'override-checkout: stable/queens'. And as the log also shows, congress is checked out from stable/queens. I tried to check the results but could not work out which tests should fail and why. If you can give me more detail, I can debug that. 
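For anyone following along, the kind of Zuul job definition being discussed — a variant of an existing job pinned to stable/queens — looks roughly like this. The job and parent names are illustrative, not copied from the congress repository:

```yaml
- job:
    name: congress-devstack-api-mysql-queens
    parent: congress-devstack-api-mysql
    # Check out the required projects (the service and its devstack
    # plugin) from stable/queens instead of matching the change's branch.
    override-checkout: stable/queens
```

The `override-checkout` attribute applies to all of the job's required projects unless overridden per project.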
-gmann > > [1] https://review.openstack.org/#/c/591861/1 > [2] http://logs.openstack.org/61/591861/1/check/congress-devstack-api-mysql-queens/f7b5752/job-output.txt.gz#_2018-08-14_22_30_36_899501 > [3] https://review.openstack.org/#/c/591805/ (the depends-on is > irrelevant because that patch has been merged) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tony at bakeyournoodle.com Wed Aug 15 05:25:29 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 15 Aug 2018 15:25:29 +1000 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <1534278107-sup-8825@lrrr.local> References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> <20180813211730.cze4vpknwncpqg3b@gentoo.org> <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> Message-ID: <20180815052528.GA27536@thor.bakeyournoodle.com> On Tue, Aug 14, 2018 at 04:22:16PM -0400, Doug Hellmann wrote: > Now that https://review.openstack.org/#/c/591671/ has landed, we need > someone to propose the backports of the constraint updates to all of the > existing stable branches. Done: https://review.openstack.org/#/q/owner:tonyb+topic:openstackdocstheme+project:openstack/requirements I'm not entirely convinced such a new release will work on older branches but I guess that's what CI is for :) Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tobias.urdin at binero.se Wed Aug 15 07:07:41 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Wed, 15 Aug 2018 09:07:41 +0200 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> Message-ID: <01cc050e-c74b-a133-4020-6e0f219b7158@binero.se> Hello Kendall, Thanks for your reply, that sounds awesome! We can then dig around and see how everything looks when all project bugs are imported to stories. I see no issues with being able to move to Storyboard anytime soon if the feedback for moving is positive. Best regards Tobias On 08/14/2018 09:06 PM, Kendall Nelson wrote: > Hello! > > The error you hit can be resolved by adding launchpadlib to your > tox.ini if I recall correctly.. > > also, if you'd like, I can run a test migration of puppet's launchpad > projects into our storyboard-dev db (where I've done a ton of other > test migrations) if you want to see how it looks/works with a larger > db. Just let me know and I can kick it off. > > As for a time to migrate, if you all are good with it, we usually > schedule for Friday's so there is even less activity. Its a small > project config change and then we just need an infra core to kick off > the script once the change merges. > > -Kendall (diablo_rojo) > > On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin > wrote: > > Hello all incredible Puppeters, > > I've tested setting up an Storyboard instance and test migrated > puppet-ceph and it went without any issues there using the > documentation > [1] [2] > with just one minor issue during the SB setup [3]. > > My goal is that we will be able to swap to Storyboard during the > Stein > cycle but considering that we have a low activity on > bugs my opinion is that we could do this swap very easily anything > soon > as long as everybody is in favor of it. 
> > Please let me know what you think about moving to Storyboard? > If everybody is in favor of it we can request a migration to infra > according to documentation [2]. > > I will continue to test the import of all our project while people > are > collecting their thoughts and feedback :) > > Best regards > Tobias > > [1] > https://docs.openstack.org/infra/storyboard/install/development.html > [2] https://docs.openstack.org/infra/storyboard/migration.html > > [3] It failed with an error about launchpadlib not being installed, > solved with `tox -e venv pip install launchpadlib` > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschadin at sbcloud.ru Wed Aug 15 07:11:27 2018 From: aschadin at sbcloud.ru (Alexander Chadin) Date: Wed, 15 Aug 2018 07:11:27 +0000 Subject: [openstack-dev] [watcher] weekly meeting Message-ID: Greetings, We’ll have a meeting today at 8:00 UTC on the #openstack-meeting-3 channel. Best Regards, ____ Alex -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aj at suse.com Wed Aug 15 07:28:51 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 15 Aug 2018 09:28:51 +0200 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <20180815052528.GA27536@thor.bakeyournoodle.com> References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> <20180813211730.cze4vpknwncpqg3b@gentoo.org> <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> Message-ID: On 08/15/2018 07:25 AM, Tony Breeds wrote: > On Tue, Aug 14, 2018 at 04:22:16PM -0400, Doug Hellmann wrote: > >> Now that https://review.openstack.org/#/c/591671/ has landed, we need >> someone to propose the backports of the constraint updates to all of the >> existing stable branches. > > Done: > https://review.openstack.org/#/q/owner:tonyb+topic:openstackdocstheme+project:openstack/requirements > > I'm not entirely convinced such a new release will work on older > branches but I guess that's what CI is for :) openstackdocstheme has: sphinx!=1.6.6,!=1.6.7,>=1.6.2 So, we cannot use it on branches that constrain sphinx to an older version, Sorry, can't check this right now from where I am, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From artem.goncharov at gmail.com Wed Aug 15 07:32:45 2018 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 15 Aug 2018 09:32:45 +0200 Subject: [openstack-dev] [api] [cinder] backup restore in api v2 Message-ID: Hi all, I have recently faced an interesting question: is there no backup restore functionality in block-storage api v2? 
There is a possibility to create a backup, but not to restore it according to https://developer.openstack.org/api-ref/block-storage/v2/index.html#backups-backups. The v3 ref contains a restore function. What is also interesting is that cinderclient contains the restore function for v2. Is this just a v2 documentation bug (what I assume) or was it an unsupported function in v2? Thanks, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Wed Aug 15 08:33:12 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 15 Aug 2018 18:33:12 +1000 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> <20180813211730.cze4vpknwncpqg3b@gentoo.org> <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> <20180815052528.GA27536@thor.bakeyournoodle.com> Message-ID: <20180815083312.GB27536@thor.bakeyournoodle.com> On Wed, Aug 15, 2018 at 09:28:51AM +0200, Andreas Jaeger wrote: > openstackdocstheme has: > sphinx!=1.6.6,!=1.6.7,>=1.6.2 > > So, we cannot use it on branches that constrain sphinx to an older version, > > Sorry, can't check this right now from where I am,

Constraints
-----------
origin/master        : Sphinx===1.7.6
origin/stable/newton : Sphinx===1.2.3
origin/stable/ocata  : Sphinx===1.3.6
origin/stable/pike   : Sphinx===1.6.3
origin/stable/queens : Sphinx===1.6.5

Looks ok to me. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From cdent+os at anticdent.org Wed Aug 15 08:36:47 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 15 Aug 2018 09:36:47 +0100 (BST) Subject: [openstack-dev] [placement] extraction etherpad for PTG Message-ID: I've created an etherpad to prepare ideas and plans for a discussion at the PTG about extracting placement to its own thing. https://etherpad.openstack.org/p/placement-extract-stein Right now it is in a fairly long form as it gathers ideas and references. The goal is to compress it to something concise after we've had plenty of input so we have a (small) series of discussion points to resolve at the PTG. If this is a topic you think is important or you have an interest in, please add your thoughts to the etherpad. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From cjeanner at redhat.com Wed Aug 15 09:32:10 2018 From: cjeanner at redhat.com (Cédric Jeanneret) Date: Wed, 15 Aug 2018 11:32:10 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls Message-ID: Dear Community, As you may know, a move toward Podman as a replacement for Docker is starting. One of the issues with podman is the lack of a daemon, precisely the lack of a socket allowing one to send commands and get a "computer formatted output" (like JSON or YAML or...). In order to work that out, Podman has added support for varlink¹, using the "socket activation" feature in Systemd. 
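For anyone unfamiliar with it, varlink is essentially JSON over a unix socket: each message is a JSON object terminated by a NUL byte. A rough sketch of that framing is below — the method name, socket path mentioned in the comments, and reply fields are illustrative, not the exact io.podman schema:

```python
import json

def encode_call(method, parameters=None):
    """Frame a varlink method call: UTF-8 JSON terminated by a NUL byte."""
    msg = {"method": method}
    if parameters is not None:
        msg["parameters"] = parameters
    return json.dumps(msg).encode("utf-8") + b"\0"

def decode_message(data):
    """Parse one NUL-terminated varlink message back into a dict."""
    return json.loads(data.rstrip(b"\0").decode("utf-8"))

# What a client such as Paunch might send over the podman varlink socket
# (method name assumed from the varlink blog post linked below):
call = encode_call("io.podman.ListContainers")

# And a made-up reply, to show why "computer formatted output" beats
# scraping plain-text CLI output for healthchecks or metrics:
reply = decode_message(
    b'{"parameters": {"containers": [{"id": "abc123", "status": "running"}]}}\0'
)
running = [c["id"] for c in reply["parameters"]["containers"]
           if c["status"] == "running"]
```

Structured replies like this are what would make the healthcheck, metrics, and ansible-module use cases straightforward.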
On my side, I would like to push forward the integration of varlink in TripleO deployed containers, especially since it will allow the following: # proper interface with Paunch (via python link) # a way to manage containers from within specific containers (think "healthcheck", "monitoring") by mounting the socket as a shared volume # a way to get container statistics (think "metrics") # a way, if needed, to get an ansible module being able to talk to podman (JSON is always better than plain text) # a way to secure the accesses to Podman management (we have to define how varlink talks to Podman, maybe providing dedicated socket with dedicated rights so that we can have dedicated users for specific tasks) That said, I have some questions: ° Does any of you have some experience with varlink and podman interface? ° What do you think about that integration wish? ° Does any of you have concern with this possible addition? Thank you for your feedback and ideas. Have a great day (or evening, or whatever suits the time you're reading this ;))! C. ¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/ -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From xliviux at gmail.com Wed Aug 15 10:58:54 2018 From: xliviux at gmail.com (Liviu Popescu) Date: Wed, 15 Aug 2018 13:58:54 +0300 Subject: [openstack-dev] oracle rac on openstack: openfiler as shared storage Message-ID: Hello, regarding: openstack for having "shared-storage" needed for Oracle RAC: I have used openfiler in a vmware environment, for creating iscsi targets and accessing iscsi luns on target machines with iscsi-initiator. On db machines, LUNs were used via multipath.conf and udev rules, to be visible by Oracle RAC nodes, for ASM (asm disks). 
Please advise me on how to get an openfiler image suitable for openstack, in order to simulate shared storage. Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Aug 15 12:52:47 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 15 Aug 2018 07:52:47 -0500 Subject: [openstack-dev] [api] [cinder] backup restore in api v2 In-Reply-To: References: Message-ID: <20180815125247.GA7870@sm-workstation> On Wed, Aug 15, 2018 at 09:32:45AM +0200, Artem Goncharov wrote: > Hi all, > > I have recently faced an interesting question: is there no backup restore > functionality in block-storage api v2? There is possibility to create > backup, but not to restore it according to > https://developer.openstack.org/api-ref/block-storage/v2/index.html#backups-backups. > Version v3 ref contain restore function. What is also interesting, that > cinderclient contain the restore function for v2. Is this just a v2 > documentation bug (what I assume) or was it an unsupported function in v2? > > Thanks, > Artem Thanks for pointing that out Artem. That does appear to be a documentation bug. The backup API has not changed, so v2 and v3 should be identical in that regard. We will need to update our docs to reflect that. Sean From sean.mcginnis at gmx.com Wed Aug 15 13:00:11 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 15 Aug 2018 08:00:11 -0500 Subject: [openstack-dev] [api] [cinder] backup restore in api v2 In-Reply-To: <20180815125247.GA7870@sm-workstation> References: <20180815125247.GA7870@sm-workstation> Message-ID: <20180815130011.GB7870@sm-workstation> On Wed, Aug 15, 2018 at 07:52:47AM -0500, Sean McGinnis wrote: > On Wed, Aug 15, 2018 at 09:32:45AM +0200, Artem Goncharov wrote: > > Hi all, > > > > I have recently faced an interesting question: is there no backup restore > > functionality in block-storage api v2? 
There is possibility to create > > backup, but not to restore it according to > > https://developer.openstack.org/api-ref/block-storage/v2/index.html#backups-backups. > > Version v3 ref contain restore function. What is also interesting, that > > cinderclient contain the restore function for v2. Is this just a v2 > > documentation bug (what I assume) or was it an unsupported function in v2? > > > > Thanks, > > Artem > > Thanks for pointing that out Artem. That does appear to be a documentation bug. > The backup API has not changed, so v2 and v3 should be identical in that > regard. We will need to update our docs to reflect that. > > Sean Ah, we just did a really good job of hiding it: https://developer.openstack.org/api-ref/block-storage/v2/index.html#restore-backup I see the formatting for that document is off so that it actually appears as a section under the delete documentation. I will get that fixed. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Wed Aug 15 13:07:53 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 15 Aug 2018 09:07:53 -0400 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <20180815052528.GA27536@thor.bakeyournoodle.com> References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> <20180813211730.cze4vpknwncpqg3b@gentoo.org> <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> <20180815052528.GA27536@thor.bakeyournoodle.com> Message-ID: <1534338467-sup-4973@lrrr.local> Excerpts from Tony Breeds's message of 2018-08-15 15:25:29 +1000: > On Tue, Aug 14, 2018 at 04:22:16PM -0400, Doug Hellmann wrote: > > > Now that 
https://review.openstack.org/#/c/591671/ has landed, we need > > someone to propose the backports of the constraint updates to all of the > > existing stable branches. > > Done: > https://review.openstack.org/#/q/owner:tonyb+topic:openstackdocstheme+project:openstack/requirements > > I'm not entirely convinced such a new release will work on older > branches but I guess that's what CI is for :) > > Yours Tony. Thanks, Tony! From doug at doughellmann.com Wed Aug 15 13:10:18 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 15 Aug 2018 09:10:18 -0400 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> <20180813211730.cze4vpknwncpqg3b@gentoo.org> <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> <20180815052528.GA27536@thor.bakeyournoodle.com> Message-ID: <1534338477-sup-7813@lrrr.local> Excerpts from Andreas Jaeger's message of 2018-08-15 09:28:51 +0200: > On 08/15/2018 07:25 AM, Tony Breeds wrote: > > On Tue, Aug 14, 2018 at 04:22:16PM -0400, Doug Hellmann wrote: > > > >> Now that https://review.openstack.org/#/c/591671/ has landed, we need > >> someone to propose the backports of the constraint updates to all of the > >> existing stable branches. > > > > Done: > > https://review.openstack.org/#/q/owner:tonyb+topic:openstackdocstheme+project:openstack/requirements > > > > I'm not entirely convinced such a new release will work on older > > branches but I guess that's what CI is for :) > > openstackdocsstheme has: > sphinx!=1.6.6,!=1.6.7,>=1.6.2 > > So, we cannot use it on branches that constraint sphinx to an older version, > > Sorry, can't check this right now from where I am, > Andreas That's a good point. We should give it a try, though. I don't think pip's constraints resolver takes version specifiers into account, so we should get the older sphinx and the newer theme. 
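Doug's point can be checked directly: upper-constraints files pin one exact version per branch, while openstackdocstheme carries its own version specifier (`sphinx!=1.6.6,!=1.6.7,>=1.6.2`, as Andreas quoted earlier in the thread). A rough sketch using the third-party `packaging` library — an assumption on my part, the thread itself does not use it — to test each branch's Sphinx pin (the per-branch pins listed later in the thread) against the theme's specifier:

```python
from packaging.specifiers import SpecifierSet

# openstackdocstheme's Sphinx requirement, as quoted in the thread.
theme_needs = SpecifierSet("!=1.6.6,!=1.6.7,>=1.6.2")

# Per-branch Sphinx pins from the upper-constraints files
# (taken from the constraints listing later in this thread).
pins = {
    "newton": "1.2.3",
    "ocata": "1.3.6",
    "pike": "1.6.3",
    "queens": "1.6.5",
    "master": "1.7.6",
}

# A branch whose pinned Sphinx falls outside the specifier cannot
# install the new theme without violating the theme's own requirement.
compatible = {branch: theme_needs.contains(version)
              for branch, version in pins.items()}
```

Under these pins only newton and ocata fall below 1.6.2, which is consistent with Andreas's concern about branches that constrain Sphinx to an older version.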
If those do happen to work together, it should be OK. If not, we need another solution. We may have to do more work to backport the theme change into an older version of the library to make it work in the old branches. Doug From doug at doughellmann.com Wed Aug 15 13:30:46 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 15 Aug 2018 09:30:46 -0400 Subject: [openstack-dev] [oslo][castellan][python3][goal] need help debugging stable/queens failure in castellan functional tests Message-ID: <1534339715-sup-1877@lrrr.local> The patch to add the zuul job settings to the castellan stable/queens branch is failing the castellan-functional-devstack job. It looks like the job fails to set up some of the services correctly (I see lots of messages about services not responding or not starting). When was that job added to castellan? Should it be running on stable/queens? If we need it, can someone familiar with the job look into the failure and try to fix the problem? Thanks, Doug https://review.openstack.org/#/c/588780/ From jaypipes at gmail.com Wed Aug 15 13:58:22 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 15 Aug 2018 09:58:22 -0400 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: Message-ID: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> On 08/15/2018 05:32 AM, Cédric Jeanneret wrote: > Dear Community, > > As you may know, a move toward Podman as replacement of Docker is starting. This was news to me. Is this just a triple-o thing? 
-jay From openstack at nemebean.com Wed Aug 15 14:20:23 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 15 Aug 2018 09:20:23 -0500 Subject: [openstack-dev] [barbican][oslo][release] FFE request for castellan In-Reply-To: <20180814185634.GA26658@sm-workstation> References: <1533914109.23178.37.camel@redhat.com> <20180814185634.GA26658@sm-workstation> Message-ID: On 08/14/2018 01:56 PM, Sean McGinnis wrote: >> On 08/10/2018 10:15 AM, Ade Lee wrote: >>> Hi all, >>> >>> I'd like to request a feature freeze exception to get the following >>> change in for castellan. >>> >>> https://review.openstack.org/#/c/575800/ >>> >>> This extends the functionality of the vault backend to provide >>> previously unimplemented functionality, so it should not break anyone. >>> >>> The castellan vault plugin is used behind barbican in the barbican- >>> vault plugin. We'd like to get this change into Rocky so that we can >>> release Barbican with complete functionality on this backend (along >>> with a complete set of passing functional tests). >> >> This does seem fairly low risk since it's just implementing a function that >> previously raised a NotImplemented exception. However, with it being so >> late in the cycle I think we need the release team's input on whether this >> is possible. Most of the release FFE's I've seen have been for critical >> bugs, not actual new features. I've added that tag to this thread so >> hopefully they can weigh in. >> > > As far as releases go, this should be fine. If this doesn't affect any other > projects and would just be a late merging feature, as long as the castellan > team has considered the risk of adding code so late and is comfortable with > that, this is OK. > > Castellan follows the cycle-with-intermediary release model, so the final Rocky > release just needs to be done by next Thursday.
I do see the stable/rocky > branch has already been created for this repo, so it would need to merge to > master first (technically stein), then get cherry-picked to stable/rocky. Okay, sounds good. It's already merged to master so we're good there. Ade, can you get the backport proposed? From emilien at redhat.com Wed Aug 15 15:22:45 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 15 Aug 2018 17:22:45 +0200 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> Message-ID: On Tue, Aug 14, 2018 at 6:33 PM Tobias Urdin wrote: > Please let me know what you think about moving to Storyboard? > Go for it. AFAIK we don't have specific blockers to make that migration happen. Thanks, -- Emilien Macchi From emilien at redhat.com Wed Aug 15 15:31:46 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 15 Aug 2018 17:31:46 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> Message-ID: Hi Jay, On Wed, Aug 15, 2018 at 3:59 PM Jay Pipes wrote: > This was news to me. Is this just a triple-o thing? > It's in the newspapers! https://www.serverwatch.com/server-news/red-hat-looks-beyond-docker-for-container-technology.html More seriously here: there is an ongoing effort to converge the tools around containerization within Red Hat, and we, TripleO, are interested to continue the containerization of our services (which was initially done with Docker & Docker-Distribution). We're looking at how these containers could be managed by k8s one day but way before that we plan to swap out Docker and join CRI-O efforts, which seem to be using Podman + Buildah (among other things).
The work done at this time (so far) is pure investigation, but feedback is always welcome. We're tracking our efforts here: https://etherpad.openstack.org/p/tripleo-podman Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Aug 15 16:04:47 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 15 Aug 2018 12:04:47 -0400 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> Message-ID: It's a +1 from me. I don't think there is anything linked specifically to it. On Wed, Aug 15, 2018 at 11:22 AM, Emilien Macchi wrote: > On Tue, Aug 14, 2018 at 6:33 PM Tobias Urdin wrote: >> >> Please let me know what you think about moving to Storyboard? > > Go for it. AFIK we don't have specific blockers to make that migration > happening. > > Thanks, > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From chris.friesen at windriver.com Wed Aug 15 16:43:00 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 15 Aug 2018 10:43:00 -0600 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> Message-ID: <5B745814.1040008@windriver.com> On 08/14/2018 10:33 AM, Tobias Urdin wrote: > My goal is that we will be able to swap to Storyboard during the Stein cycle but > considering that we have a low activity on > bugs my opinion is that we could do this swap very easily any time soon as long > as everybody is in favor of it. > > Please let me know what you think about moving to Storyboard? Not a puppet dev, but am currently using Storyboard. One of the things we've run into is that there is no way to attach log files for bug reports to a story. There's an open story on this[1] but it's not assigned to anyone. Chris [1] https://storyboard.openstack.org/#!/story/2003071 From alee at redhat.com Wed Aug 15 16:58:33 2018 From: alee at redhat.com (Ade Lee) Date: Wed, 15 Aug 2018 12:58:33 -0400 Subject: [openstack-dev] [barbican][oslo][release] FFE request for castellan In-Reply-To: References: <1533914109.23178.37.camel@redhat.com> <20180814185634.GA26658@sm-workstation> Message-ID: <1534352313.5705.35.camel@redhat.com> Done. https://review.openstack.org/#/c/592154/ Thanks, Ade On Wed, 2018-08-15 at 09:20 -0500, Ben Nemec wrote: > > On 08/14/2018 01:56 PM, Sean McGinnis wrote: > > > On 08/10/2018 10:15 AM, Ade Lee wrote: > > > > Hi all, > > > > > > > > I'd like to request a feature freeze exception to get the > > > > following > > > > change in for castellan. > > > > > > > > https://review.openstack.org/#/c/575800/ > > > > > > > > This extends the functionality of the vault backend to provide > > > > previously unimplemented functionality, so it should not break > > > > anyone.
> > > > > > > > The castellan vault plugin is used behind barbican in the > > > > barbican- > > > > vault plugin. We'd like to get this change into Rocky so that > > > > we can > > > > release Barbican with complete functionality on this backend > > > > (along > > > > with a complete set of passing functional tests). > > > > > > This does seem fairly low risk since it's just implementing a > > > function that > > > previously raised a NotImplemented exception. However, with it > > > being so > > > late in the cycle I think we need the release team's input on > > > whether this > > > is possible. Most of the release FFE's I've seen have been for > > > critical > > > bugs, not actual new features. I've added that tag to this > > > thread so > > > hopefully they can weigh in. > > > > > > > As far as releases go, this should be fine. If this doesn't affect > > any other > > projects and would just be a late merging feature, as long as the > > castellan > > team has considered the risk of adding code so late and is > > comfortable with > > that, this is OK. > > > > Castellan follows the cycle-with-intermediary release model, so the > > final Rocky > > release just needs to be done by next Thursday. I do see the > > stable/rocky > > branch has already been created for this repo, so it would need to > > merge to > > master first (technically stein), then get cherry-picked to > > stable/rocky. > > Okay, sounds good. It's already merged to master so we're good > there. > > Ade, can you get the backport proposed? 
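The low-risk shape of the change discussed here — a backend method that previously raised NotImplementedError gaining a real implementation — can be sketched generically. The class and method names below are purely illustrative stand-ins, not castellan's actual API; the real change is in the review linked above.

```python
class KeyManagerBackend(object):
    """Minimal stand-in for a key-manager backend interface (hypothetical)."""

    def list(self):
        # Backends that do not support listing inherit this behaviour.
        raise NotImplementedError("backend does not support listing keys")


class VaultBackend(KeyManagerBackend):
    """Before the change this inherited the NotImplementedError behaviour;
    after the change it provides a real implementation, so callers that
    never used list() are unaffected -- hence "should not break anyone"."""

    def __init__(self):
        self._store = {"key-1": b"secret"}

    def list(self):
        # Newly implemented: return the stored key identifiers.
        return sorted(self._store)


backend = VaultBackend()
```

This is why such an FFE is considered low risk: only code paths that previously failed with NotImplementedError gain new behaviour.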
> > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tpb at dyncloud.net Wed Aug 15 17:09:29 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 15 Aug 2018 13:09:29 -0400 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <5B745814.1040008@windriver.com> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <5B745814.1040008@windriver.com> Message-ID: <20180815170929.dkj7cvmcudk62f63@barron.net> On 15/08/18 10:43 -0600, Chris Friesen wrote: >On 08/14/2018 10:33 AM, Tobias Urdin wrote: > >>My goal is that we will be able to swap to Storyboard during the Stein cycle but >>considering that we have a low activity on >>bugs my opinion is that we could do this swap very easily anything soon as long >>as everybody is in favor of it. >> >>Please let me know what you think about moving to Storyboard? > >Not a puppet dev, but am currently using Storyboard. > >One of the things we've run into is that there is no way to attach log >files for bug reports to a story. There's an open story on this[1] >but it's not assigned to anyone. > Yeah, given that gerrit logs are ephemeral and given that users often don't have the savvy to cut and paste exactly the right log fragments for their issues I think this is a pretty big deal. When I triage bugs I often ask for logs to be uploaded. This may be less of a big deal for puppet than for projects like manila or cinder where there are a set of ongoing services in a custom configuration and there's no often no clear way for the bug triager to set up a reproducer. We're waiting on resolution of [1] before moving ahead with Storyboard for manila. 
-- Tom >Chris > > >[1] https://storyboard.openstack.org/#!/story/2003071 > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aj at suse.com Wed Aug 15 18:00:11 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 15 Aug 2018 20:00:11 +0200 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <20180815083312.GB27536@thor.bakeyournoodle.com> References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> <20180813211730.cze4vpknwncpqg3b@gentoo.org> <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> <20180815052528.GA27536@thor.bakeyournoodle.com> <20180815083312.GB27536@thor.bakeyournoodle.com> Message-ID: <301d92c8-73a0-9978-3584-f7aee3f4684d@suse.com> On 2018-08-15 10:33, Tony Breeds wrote: > On Wed, Aug 15, 2018 at 09:28:51AM +0200, Andreas Jaeger wrote: > >> openstackdocsstheme has: >> sphinx!=1.6.6,!=1.6.7,>=1.6.2 >> >> So, we cannot use it on branches that constraint sphinx to an older version, >> >> Sorry, can't check this right now from where I am, > > Constraints > ----------- > origin/master : Sphinx===1.7.6 > origin/stable/newton : Sphinx===1.2.3 > origin/stable/ocata : Sphinx===1.3.6 > origin/stable/pike : Sphinx===1.6.3 > origin/stable/queens : Sphinx===1.6.5 > > Looks ok to me. Great, thanks! Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From ekcs.openstack at gmail.com Wed Aug 15 18:07:02 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 15 Aug 2018 11:07:02 -0700 Subject: [openstack-dev] [tempest][qa][congress] tempest test conditioning on release version In-Reply-To: <1653bbd3b63.e17ebae814465.6079495611933806399@ghanshyammann.com> References: <1653bbd3b63.e17ebae814465.6079495611933806399@ghanshyammann.com> Message-ID: On Tue, Aug 14, 2018 at 8:59 PM, Ghanshyam Mann wrote: > ---- On Wed, 15 Aug 2018 06:40:57 +0900 Eric K wrote ---- > > Anyone have an example handy of a tempest test conditioning on service > > release version (because new features not available in past versions)? > > Seems like it could get pretty messy and haphazard, so I'm curious to > > see best practices. Thanks lots! > > Thanks Eric for the query. We do it many times in Tempest, and a similar approach can be adopted by tempest plugins. There are 2 ways we can handle this- > > 1. Using feature flag. Tempest documentation is here [1]. > Step1- This is simply adding a config option (feature flag) for the new/old feature. > Example- https://review.openstack.org/#/c/545627/ https://github.com/openstack/tempest/blob/6a8d495192632fd18dce4baf1a4b213f401a0167/tempest/config.py#L242 > Step2- Based on that flag you can skip the tests where that feature is not available. > Example- https://github.com/openstack/tempest/blob/d5058a8a9c8c1c5383699d04296087b6d5a24efd/tempest/api/identity/base.py#L315 > Step3- For gate, devstack plugin on project side (congress is your case [2]) which is branch aware can set that flag to true and false based on which branch that test is running.
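Steps 1 and 2 above can be sketched in a self-contained way. In real Tempest the flag is an oslo.config option declared in tempest/config.py (see the links in the steps); here a plain object stands in for CONF, and all names are illustrative, not actual Tempest code.

```python
import io
import unittest

# Step 1 (sketch): a feature flag. In Tempest this would be an
# oslo.config option in a *-feature-enabled group in tempest/config.py.
class FeatureFlags(object):
    new_feature_enabled = False  # a devstack plugin would set this per branch (step 3)

CONF = FeatureFlags()

# Step 2 (sketch): skip the tests where the feature is not available.
class NewFeatureTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        if not CONF.new_feature_enabled:
            raise unittest.SkipTest("new feature not available in this release")

    def test_new_feature(self):
        pass  # the real test would exercise the new API here

# With the flag off, the whole class is reported as skipped, not failed.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NewFeatureTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

The point of the pattern is that the same test code runs everywhere: only the configuration (set per branch by the devstack plugin in step 3, or by hand in step 4) decides whether the test executes or skips.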
For tempest we do the same from devstack/lib/tempest > Example - https://review.openstack.org/#/c/545680/ https://github.com/openstack-dev/devstack/blob/8c1052001629d62f001d04c182500fa293858f47/lib/tempest#L308 > Step4- For cloud testing (non-gate), the tester can manually configure those flags based on which service version they are testing. > > 2. Detecting service version via version API > - If you can get the service version info from the API then you can use that while skipping the tests. > - One example is for compute where, based on microversion, it can be detected which release the test is running against. > - Example- https://github.com/openstack/tempest/blob/d5058a8a9c8c1c5383699d04296087b6d5a24efd/tempest/api/compute/base.py#L114 > > > [1] https://docs.openstack.org/tempest/latest/HACKING.html#branchless-tempest-considerations > [2] https://github.com/openstack/congress/blob/014361c809517661264d0364eaf1e261e449ea80/devstack/plugin.sh#L88 > > > > > Eric Kao Thank you so much, Ghanshyam! From aschultz at redhat.com Wed Aug 15 19:08:27 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 15 Aug 2018 13:08:27 -0600 Subject: [openstack-dev] [tripleo] ansible roles in tripleo In-Reply-To: <1534269113.6400.11.camel@redhat.com> References: <1534269113.6400.11.camel@redhat.com> Message-ID: On Tue, Aug 14, 2018 at 11:51 AM, Jill Rouleau wrote: > Hey folks, > > Like Alex mentioned[0] earlier, we've created a bunch of ansible roles > for tripleo specific bits. The idea is to start putting some basic > cookiecutter type things in them to get things started, then move some > low-hanging fruit out of tripleo-heat-templates and into the appropriate > roles.
For example, docker/services/keystone.yaml could have > upgrade_tasks and fast_forward_upgrade_tasks moved into ansible-role- > tripleo-keystone/tasks/(upgrade.yml|fast_forward_upgrade.yml), and the > t-h-t updated to > include_role: ansible-role-tripleo-keystone > tasks_from: upgrade.yml > without having to modify any puppet or heat directives. > Do we have any examples of what the upgrade.yml would be or what type of variables (naming conventions or otherwise) which would need to be handled as part of this transtion? I assume we may want to continue passing in some variable to indicate the current deployment step. Is there something along these lines that we will be proposing or need to handle? We're already doing something similar with the host_prep_tasks for the docker registry[0] but we have a set_fact block to pass parameters in. I'm assuming we'll need to define something similar. Thanks, -Alex [0] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/puppet/services/docker-registry.yaml#n54 > This would let us define some patterns for implementing these tripleo > roles during Stein while looking at how we can make use of ansible for > things like core config. > > t-h-t and config-download will still drive the vast majority of playbook > creation for now, but for new playbooks (such as for operations tasks) > tripleo-ansible[1] would be our project directory. > > So in addition to the larger conversation about how deployers can start > to standardize how we're all using ansible, I'd like to also have a > tripleo-specific conversation at PTG on how we can break out some of our > ansible that's currently embedded in t-h-t into more modular and > flexible roles. 
> > Cheers, > Jill > > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-August/13311 > 9.html > [1] https://git.openstack.org/cgit/openstack/tripleo-ansible/tree/ > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tony at bakeyournoodle.com Wed Aug 15 19:28:02 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 16 Aug 2018 05:28:02 +1000 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <1534338477-sup-7813@lrrr.local> References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> <20180813211730.cze4vpknwncpqg3b@gentoo.org> <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> <20180815052528.GA27536@thor.bakeyournoodle.com> <1534338477-sup-7813@lrrr.local> Message-ID: <20180815192801.GC27536@thor.bakeyournoodle.com> On Wed, Aug 15, 2018 at 09:10:18AM -0400, Doug Hellmann wrote: > Excerpts from Andreas Jaeger's message of 2018-08-15 09:28:51 +0200: > > On 08/15/2018 07:25 AM, Tony Breeds wrote: > > > On Tue, Aug 14, 2018 at 04:22:16PM -0400, Doug Hellmann wrote: > > > > > >> Now that https://review.openstack.org/#/c/591671/ has landed, we need > > >> someone to propose the backports of the constraint updates to all of the > > >> existing stable branches. 
> > > > > > Done: > > > https://review.openstack.org/#/q/owner:tonyb+topic:openstackdocstheme+project:openstack/requirements > > > > > > I'm not entirely convinced such a new release will work on older > > > branches but I guess that's what CI is for :) > > > > openstackdocsstheme has: > > sphinx!=1.6.6,!=1.6.7,>=1.6.2 > > > > So, we cannot use it on branches that constraint sphinx to an older version, > > > > Sorry, can't check this right now from where I am, > > Andreas > > That's a good point. We should give it a try, though. I don't think > pip's constraints resolver takes version specifiers into account, so we > should get the older sphinx and the newer theme. If those do happen to > work together, it should be OK. > > If not, we need another solution. We may have to do more work to > backport the theme change into an older version of the library to > make it work in the old branches. The queens and pike backports have merged but ocata filed with[1] ContextualVersionConflict: (pbr 1.10.0 (/home/zuul/src/git.openstack.org/openstack/requirements/.tox/py27-check-uc/lib/python2.7/site-packages), Requirement.parse('pbr!=2.1.0,>=2.0.0'), set(['openstackdocstheme'])) So we can't use the rocky release on ocata. I assume we need to do something to ensure the docs are generated correctly. Tony. [1] http://logs.openstack.org/96/591896/1/check/requirements-tox-py27-check-uc/ff17c54/job-output.txt.gz#_2018-08-15_05_28_25_148515 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From amy at demarco.com Wed Aug 15 20:00:34 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 15 Aug 2018 15:00:34 -0500 Subject: [openstack-dev] OpenStack Diversity and Inclusion Survey Message-ID: The Diversity and Inclusion WG is asking for your assistance. 
We have revised the Diversity Survey that was originally distributed to the Community in the Fall of 2015 and are looking to update our view of the OpenStack community and it's diversity. We are pleased to be working with members of the CHAOSS project who have signed confidentiality agreements in order to assist us in the following ways: 1) Assistance in analyzing the results 2) And feeding the results into the CHAOSS software and metrics development work so that we can help other Open Source projects Please take the time to fill out the survey and share it with others in the community. The survey can be found at: https://www.surveymonkey.com/r/OpenStackDiversity Thank you for assisting us in this important task! Amy Marrich (spotz) Diversity and Inclusion Working Group Chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Wed Aug 15 20:01:09 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 15 Aug 2018 22:01:09 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> Message-ID: On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi wrote: > More seriously here: there is an ongoing effort to converge the tools > around containerization within Red Hat, and we, TripleO are interested to > continue the containerization of our services (which was initially done > with Docker & Docker-Distribution). > We're looking at how these containers could be managed by k8s one day but > way before that we plan to swap out Docker and join CRI-O efforts, which > seem to be using Podman + Buildah (among other things). 
> I guess my wording wasn't the best but Alex explained way better here: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 If I may have a chance to rephrase, I guess our current intention is to continue our containerization and investigate how we can improve our tooling to better orchestrate the containers. We have a nice interface (openstack/paunch) that allows us to run multiple container backends, and we're currently looking outside of Docker to see how we could solve our current challenges with the new tools. We're looking at CRI-O because it happens to be a project with a great community, focusing on some problems that we, TripleO have been facing since we containerized our services. We're doing all of this in the open, so feel free to ask any question. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Wed Aug 15 20:31:18 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 15 Aug 2018 14:31:18 -0600 Subject: [openstack-dev] [nova] How to debug no valid host failures with placement In-Reply-To: <3a56c071-dc17-fe88-a63f-832f907bd6bd@gmail.com> References: <5b6499c5-989b-1357-4144-3a42f71954a8@nemebean.com> <5B61DE6A.8030705@windriver.com> <38cd7981-cfad-e975-5eda-b91402634e0f@nemebean.com> <5B61F59A.1080502@windriver.com> <156b4688-aba3-2a01-3917-b42a209f0ecd@gmail.com> <6c636d43-d0cc-f805-cf20-96acbc61b139@fried.cc> <625fd356-c5a1-5818-80f1-a8f8c570d830@gmail.com> <171010d9-0cc8-da77-b51f-292ad8e2cb26@gmail.com> <5B6363BA.9000900@windriver.com> <97bfe7dc-eb25-bf30-7a84-6ef29105324e@gmail.com> <5B646336.8070001@windriver.com> <3a56c071-dc17-fe88-a63f-832f907bd6bd@gmail.com> Message-ID: <5B748D96.4010308@windriver.com> On 08/04/2018 05:18 PM, Matt Riedemann wrote: > On 8/3/2018 9:14 AM, Chris Friesen wrote: >> I'm of two minds here. 
>> >> On the one hand, you have the case where the end user has accidentally >> requested some combination of things that isn't normally available, and they >> need to be able to ask the provider what they did wrong. I agree that this >> case is not really an exception, those resources were never available in the >> first place. >> >> On the other hand, suppose the customer issues a valid request and it works, >> and then issues the same request again and it fails, leading to a violation of >> that customers SLA. In this case I would suggest that it could be considered >> an exception since the system is not delivering the service that it was >> intended to deliver. > > As I'm sure you're aware Chris, it looks like StarlingX has a kind of > post-mortem query utility to try and figure out where requested resources didn't > end up yielding a resource provider (for a compute node): > > https://github.com/starlingx-staging/stx-nova/commit/71acfeae0d1c59fdc77704527d763bd85a276f9a#diff-94f87e728df6465becce5241f3da53c8R330 > > > But as you noted way earlier in this thread, it might not be the actual reasons > at the time of the failure and in a busy cloud could quickly change. Just noticed this email, sorry for the delay. The bit you point out isn't a post-mortem query but rather a way of printing out the rejection reasons that were stored (via calls to filter_reject()) at the time the request was processed by each filter. Chris From melwittt at gmail.com Wed Aug 15 20:47:45 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 15 Aug 2018 13:47:45 -0700 Subject: [openstack-dev] [nova] Rocky blueprint burndown chart Message-ID: <045ab2da-8784-03e6-ad82-8d013a95d2d7@gmail.com> Howdy everyone, Keeping with the tradition of Matt's burndown charts from previous cycles [1][2], I have a burndown chart for the Rocky cycle [3] to share with you. Apologies for the gap in the data -- I had an issue keeping up with the count for that time period. 
I also focused on only Approved vs Completed counts this time. And finally, there are overlapping labels for "Spec Review Sprint" on June 5 and "r-2, spec freeze" on June 7 that are hard to read, and I didn't find a way to adjust their position in google sheets. Comparing final numbers to Queens --------------------------------- Max approved for Queens: 53 Max approved for Rocky: 72 Final completed for Queens: 42 Final completed for Rocky: 59 Our completion percentage of approved blueprints in Queens was 79.2% and in Rocky it was 81.9%. We approved far more blueprints in Rocky than we did in Queens, but the completion rate was similar. With runways, we were looking to increase our completion percentage by focusing on reviewing the same approved things at the same time but we simultaneously were more ambitious with what we approved. So we ended up with a similar completion percentage. This doesn't seem like a bad thing in that, we completed more blueprints than we did last cycle (and presumably got more done overall), but we're still having trouble with our approval rate of blueprints that we can realistically finish in one cycle. I think part of the miss on the number of approvals might be because we extended the spec freeze date to milestone r-2 because of runways, thinking that if we completed enough things, we could approve more things. We didn't predict that accurately but with the experience, my hope is we can do better in Stein. We could consider moving spec freeze back to milestone s-1 or have rough criteria on whether to approve more blueprints close to s-2 (for example, if 30%? of approved blueprints have been completed, OK to approve more). If you have feedback or thoughts on any of this, feel free to reply to this thread or add your comments to the Rocky retrospective etherpad [4] and we can discuss at the PTG. 
Cheers, -melanie [1] http://lists.openstack.org/pipermail/openstack-dev/2017-September/121875.html [2] http://lists.openstack.org/pipermail/openstack-dev/2018-February/127402.html [3] https://docs.google.com/spreadsheets/d/e/2PACX-1vQicKStmnQFcOdnZU56ynJmn8e0__jYsr4FWXs3GrDsDzg1hwHofvJnuSieCH3ExbPngoebmEeY0waH/pubhtml?gid=128173249&single=true [4] https://etherpad.openstack.org/p/nova-rocky-retrospective From jungleboyj at gmail.com Wed Aug 15 20:49:43 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 15 Aug 2018 15:49:43 -0500 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <5B745814.1040008@windriver.com> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <5B745814.1040008@windriver.com> Message-ID: On 8/15/2018 11:43 AM, Chris Friesen wrote: > On 08/14/2018 10:33 AM, Tobias Urdin wrote: > >> My goal is that we will be able to swap to Storyboard during the >> Stein cycle but >> considering that we have a low activity on >> bugs my opinion is that we could do this swap very easily anything >> soon as long >> as everybody is in favor of it. >> >> Please let me know what you think about moving to Storyboard? > > Not a puppet dev, but am currently using Storyboard. > > One of the things we've run into is that there is no way to attach log > files for bug reports to a story.  There's an open story on this[1] > but it's not assigned to anyone. > > Chris > > > [1] https://storyboard.openstack.org/#!/story/2003071 > Cinder is planning on holding on any migration, like Manila, until the file attachment issue is resolved. 
Jay > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jungleboyj at gmail.com Wed Aug 15 20:54:55 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 15 Aug 2018 15:54:55 -0500 Subject: [openstack-dev] [api] [cinder] backup restore in api v2 In-Reply-To: <20180815130011.GB7870@sm-workstation> References: <20180815125247.GA7870@sm-workstation> <20180815130011.GB7870@sm-workstation> Message-ID: On 8/15/2018 8:00 AM, Sean McGinnis wrote: > On Wed, Aug 15, 2018 at 07:52:47AM -0500, Sean McGinnis wrote: >> On Wed, Aug 15, 2018 at 09:32:45AM +0200, Artem Goncharov wrote: >>> Hi all, >>> >>> I have recently faced an interesting question: is there no backup restore >>> functionality in the block-storage API v2? There is a possibility to create a >>> backup, but not to restore it, according to >>> https://developer.openstack.org/api-ref/block-storage/v2/index.html#backups-backups. >>> The v3 ref contains a restore function. What is also interesting is that >>> cinderclient contains the restore function for v2. Is this just a v2 >>> documentation bug (what I assume) or was it an unsupported function in v2? >>> >>> Thanks, >>> Artem >> Thanks for pointing that out Artem. That does appear to be a documentation bug. >> The backup API has not changed, so v2 and v3 should be identical in that >> regard. We will need to update our docs to reflect that. >> >> Sean > Ah, we just did a really good job of hiding it: > > https://developer.openstack.org/api-ref/block-storage/v2/index.html#restore-backup > > I see the formatting for that document is off so that it actually appears as a > section under the delete documentation. I will get that fixed. Artem and Sean, Thanks for finding that and fixing it.
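For anyone following along, the restore call that was hidden in the v2 docs is a POST to the backup's ``restore`` action. A minimal sketch of how a client could build that request (the endpoint and ids below are made-up placeholders; the payload shape follows the published v3 API reference, which the thread says is identical for v2):

```python
# Build the block-storage "restore backup" request (sketch; ids are invented).
# POST /v3/{project_id}/backups/{backup_id}/restore with a {"restore": ...}
# body is what the v3 API reference documents; v2 exposes the same action.
import json

def build_restore_request(endpoint, project_id, backup_id, volume_id=None):
    url = f"{endpoint}/v3/{project_id}/backups/{backup_id}/restore"
    body = {"restore": {}}
    if volume_id is not None:
        # Restore into an existing volume instead of creating a new one.
        body["restore"]["volume_id"] = volume_id
    return url, json.dumps(body)

url, payload = build_restore_request(
    "http://cinder.example.com:8776", "proj-123", "backup-456", "vol-789")
print(url)      # .../v3/proj-123/backups/backup-456/restore
print(payload)  # {"restore": {"volume_id": "vol-789"}}
```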
Jay >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From prometheanfire at gentoo.org Wed Aug 15 21:03:07 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 15 Aug 2018 16:03:07 -0500 Subject: [openstack-dev] [releases][requirements][cycle-with-intermediary][cycle-trailing] requirements is going to branch stable/rocky at ~08-15-2018 2100Z In-Reply-To: <20180814161313.tvtdg6ife7q3anyf@gentoo.org> References: <20180814161313.tvtdg6ife7q3anyf@gentoo.org> Message-ID: <20180815210307.fd5suunz223fwopu@gentoo.org> On 18-08-14 11:13:13, Matthew Thode wrote: > This is to warn and call out all those projects that do not have a > stable/rocky branch yet. > > If you are in the following list your project will need to realize that > your master is testing against the requirements/constraints from stein, > not rocky. Any branching / tests you do will need to keep that in mind.
> > ansible-role-container-registry > ansible-role-redhat-subscription > ansible-role-tripleo-modify-image > barbican-tempest-plugin > blazar-tempest-plugin > cinder-tempest-plugin > cloudkitty-dashboard > cloudkitty-tempest-plugin > cloudkitty > congress-tempest-plugin > designate-tempest-plugin > ec2api-tempest-plugin > heat-agents > heat-dashboard > heat-tempest-plugin > instack-undercloud > ironic-tempest-plugin > karbor-dashboard > karbor > keystone-tempest-plugin > kuryr-kubernetes > kuryr-libnetwork > kuryr-tempest-plugin > magnum-tempest-plugin > magnum-ui > manila-tempest-plugin > mistral-tempest-plugin > monasca-kibana-plugin > monasca-tempest-plugin > murano-tempest-plugin > networking-generic-switch-tempest-plugin > networking-hyperv > neutron-tempest-plugin > octavia-tempest-plugin > os-apply-config > os-collect-config > os-net-config > os-refresh-config > oswin-tempest-plugin > paunch > python-tricircleclient > sahara-tests > senlin-tempest-plugin > solum-tempest-plugin > swift > tacker-horizon > tacker > telemetry-tempest-plugin > tempest-tripleo-ui > tempest > tripleo-common-tempest-plugin > tripleo-ipsec > tripleo-ui > tripleo-validations > trove-tempest-plugin > vitrage-tempest-plugin > watcher-tempest-plugin > zaqar-tempest-plugin > zun-tempest-plugin > zun-ui > zun > > kolla-ansible > kolla > puppet-aodh > puppet-barbican > puppet-ceilometer > puppet-cinder > puppet-cloudkitty > puppet-congress > puppet-designate > puppet-ec2api > puppet-freezer > puppet-glance > puppet-glare > puppet-gnocchi > puppet-heat > puppet-horizon > puppet-ironic > puppet-keystone > puppet-magnum > puppet-manila > puppet-mistral > puppet-monasca > puppet-murano > puppet-neutron > puppet-nova > puppet-octavia > puppet-openstack_extras > puppet-openstacklib > puppet-oslo > puppet-ovn > puppet-panko > puppet-qdr > puppet-rally > puppet-sahara > puppet-swift > puppet-tacker > puppet-tempest > puppet-tripleo > puppet-trove > puppet-vitrage > puppet-vswitch > puppet-watcher > 
puppet-zaqar > python-tripleoclient > tripleo-common > tripleo-heat-templates > tripleo-image-elements > tripleo-puppet-elements > > So please branch :D > we branched, also we are working on migrating to storyboard -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From sean.mcginnis at gmx.com Wed Aug 15 21:05:44 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 15 Aug 2018 16:05:44 -0500 Subject: [openstack-dev] [releases][requirements][cycle-with-intermediary][cycle-trailing] requirements is going to branch stable/rocky at ~08-15-2018 2100Z In-Reply-To: <20180814161313.tvtdg6ife7q3anyf@gentoo.org> References: <20180814161313.tvtdg6ife7q3anyf@gentoo.org> Message-ID: <20180815210543.GA27479@sm-workstation> On Tue, Aug 14, 2018 at 11:13:13AM -0500, Matthew Thode wrote: > This is to warn and call out all those projects that do not have a > stable/rocky branch yet. > > If you are in the following list your project will need to realize that > your master is testing against the requirements/constraints from stein, > not rocky. Any branching / tests you do will need to keep that in mind. > I have just processed the branching request for the openstack/requirements repo. Projects that have branched and have had the bot-proposed patch to update the stable/rocky tox settings to use the stable/rocky upper constraints can now approve those patches. At the moment, requirements are matching between rocky and stein, but there's usually only a small window where that is the case. If you have not branched yet, be aware that at some point your testing could be running against the wrong constraints.
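The practical consequence of branching is repointing your upper-constraints reference from master (stein) to stable/rocky. As a rough illustration of the cgit "plain" URL convention involved (the pattern mirrors the governance link earlier in this digest, but double-check your own project's tox.ini rather than trusting this sketch):

```python
# Sketch: where a project's constraints pin points before and after branching.
# The cgit "plain" URL pattern is an assumption based on the convention used
# elsewhere in this thread; verify against your project's actual tox settings.
def upper_constraints_url(branch=None):
    base = ("https://git.openstack.org/cgit/openstack/requirements"
            "/plain/upper-constraints.txt")
    # master (i.e. stein at this point) is the default; stable branches
    # select their constraints file with the ?h= query parameter.
    return base if branch is None else f"{base}?h={branch}"

print(upper_constraints_url())                # constraints from master/stein
print(upper_constraints_url("stable/rocky"))  # what a rocky branch should pin
```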
From openstack at nemebean.com Wed Aug 15 21:34:43 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 15 Aug 2018 16:34:43 -0500 Subject: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core In-Reply-To: References: Message-ID: <7145463d-929b-1cd3-3580-30400bd68c37@nemebean.com> Since there were no objections, I've added Zane to the oslo.service core team. Thanks and welcome, Zane! On 08/03/2018 11:58 AM, Ben Nemec wrote: > Hi, > > Zane has been doing some good work in oslo.service recently and I would > like to add him to the core team.  I know he's got a lot on his plate > already, but he has taken the time to propose and review patches in > oslo.service and has demonstrated an understanding of the code. > > Please respond with +1 or any concerns you may have.  Thanks. > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From melwittt at gmail.com Wed Aug 15 21:41:23 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 15 Aug 2018 14:41:23 -0700 Subject: [openstack-dev] [nova] Stein subteam tracking etherpad now available Message-ID: <4aebc639-fc34-cdfa-5a04-1d01509ce367@gmail.com> Hi all, For those of us who use the global team etherpad for helping organize reviews for subteams, I just wanted to give a heads up that I've copied over the content from the Rocky etherpad [1] and created a new etherpad for us to use for Stein [2]. I've removed some things that appeared completely unused. Please feel free to start using the Stein etherpad to help organize review for your subteam (note: this is separate from runways and is just a way for subteams to coordinate review of non-runway work, like bug fixes, etc). 
If you have a subteam or topic that is missing from the etherpad, feel free to add it and use the space for organizing your subteam reviews. Let me know if you have any questions. Best, -melanie [1] https://etherpad.openstack.org/p/rocky-nova-priorities-tracking [2] https://etherpad.openstack.org/p/stein-nova-subteam-tracking From jaypipes at gmail.com Wed Aug 15 21:48:58 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 15 Aug 2018 17:48:58 -0400 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> Message-ID: <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> On 08/15/2018 04:01 PM, Emilien Macchi wrote: > On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi > wrote: > > More seriously here: there is an ongoing effort to converge the > tools around containerization within Red Hat, and we, TripleO are > interested to continue the containerization of our services (which > was initially done with Docker & Docker-Distribution). > We're looking at how these containers could be managed by k8s one > day but way before that we plan to swap out Docker and join CRI-O > efforts, which seem to be using Podman + Buildah (among other things). > > I guess my wording wasn't the best but Alex explained way better here: > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 > > If I may have a chance to rephrase, I guess our current intention is to > continue our containerization and investigate how we can improve our > tooling to better orchestrate the containers. > We have a nice interface (openstack/paunch) that allows us to run > multiple container backends, and we're currently looking outside of > Docker to see how we could solve our current challenges with the new tools. 
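The "multiple container backends" interface is the key enabler here. Purely as an illustration — these names are invented and are not paunch's actual API — a pluggable launcher along these lines is what makes swapping Docker for podman a configuration change rather than a rewrite:

```python
# Hypothetical sketch of a pluggable container-backend interface, in the
# spirit of what openstack/paunch provides. All names here are invented
# for illustration; paunch's real API differs.
from abc import ABC, abstractmethod

class ContainerBackend(ABC):
    @abstractmethod
    def run(self, name, image, command):
        """Start a container and return the command/identifier used."""

class DockerBackend(ContainerBackend):
    def run(self, name, image, command):
        return f"docker run --name {name} {image} {command}"

class PodmanBackend(ContainerBackend):
    def run(self, name, image, command):
        return f"podman run --name {name} {image} {command}"

def launch(backend: ContainerBackend, name, image, command):
    # Callers only see the abstract interface, so swapping the runtime
    # is a one-line configuration change.
    return backend.run(name, image, command)

print(launch(PodmanBackend(), "keystone", "centos-binary-keystone", "kolla_start"))
```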
> We're looking at CRI-O because it happens to be a project with a great > community, focusing on some problems that we, TripleO have been facing > since we containerized our services. > > We're doing all of this in the open, so feel free to ask any question. I appreciate your response, Emilien, thank you. Alex' responses to Jeremy on the #openstack-tc channel were informative, thank you Alex. For now, it *seems* to me that all of the chosen tooling is very Red Hat centric. Which makes sense to me, considering Triple-O is a Red Hat product. I don't know how much of the current reinvention of container runtimes and various tooling around containers is the result of politics. I don't know how much is the result of certain companies wanting to "own" the container stack from top to bottom. Or how much is a result of technical disagreements that simply cannot (or will not) be resolved among contributors in the container development ecosystem. Or is it some combination of the above? I don't know. What I *do* know is that the current "NIH du jour" mentality currently playing itself out in the container ecosystem -- reminding me very much of the Javascript ecosystem -- makes it difficult for any potential *consumers* of container libraries, runtimes or applications to be confident that any choice they make towards one of the other will be the *right* choice or even a *possible* choice next year -- or next week. Perhaps this is why things like openstack/paunch exist -- to give you options if something doesn't pan out. You have a tough job. I wish you all the luck in the world in making these decisions and hope politics and internal corporate management decisions play as little a role in them as possible. Best, -jay From jrist at redhat.com Wed Aug 15 22:10:26 2018 From: jrist at redhat.com (Jason E. 
Rist) Date: Wed, 15 Aug 2018 16:10:26 -0600 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: Message-ID: On 08/15/2018 03:32 AM, Cédric Jeanneret wrote: > Dear Community, > > As you may know, a move toward Podman as replacement of Docker is starting. > > One of the issues with podman is the lack of daemon, precisely the lack > of a socket allowing to send commands and get a "computer formatted > output" (like JSON or YAML or...). > > In order to work that out, Podman has added support for varlink¹, using > the "socket activation" feature in Systemd. > > On my side, I would like to push forward the integration of varlink in > TripleO deployed containers, especially since it will allow the following: > # proper interface with Paunch (via python link) > > # a way to manage containers from within specific containers (think > "healthcheck", "monitoring") by mounting the socket as a shared volume > > # a way to get container statistics (think "metrics") > > # a way, if needed, to get an ansible module being able to talk to > podman (JSON is always better than plain text) > > # a way to secure the accesses to Podman management (we have to define > how varlink talks to Podman, maybe providing dedicated socket with > dedicated rights so that we can have dedicated users for specific tasks) > > That said, I have some questions: > ° Does any of you have some experience with varlink and podman interface? > ° What do you think about that integration wish? > ° Does any of you have concern with this possible addition? > > Thank you for your feedback and ideas. > > Have a great day (or evening, or whatever suits the time you're reading > this ;))! > > C. 
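To make the varlink proposal above concrete: on the wire, a varlink message is just a UTF-8 encoded JSON object terminated by a single NUL byte, sent over the (socket-activated) unix socket. A rough sketch of that framing — the ``io.podman`` method name follows the Project Atomic blog post cited below, but treat the details as assumptions to verify against your podman build:

```python
# Sketch of varlink's wire framing: each message is a UTF-8 encoded JSON
# object followed by one NUL byte. The io.podman method name is taken from
# the Project Atomic blog post; verify it against your podman version.
import json

def encode_call(method, parameters=None):
    msg = {"method": method}
    if parameters:
        msg["parameters"] = parameters
    return json.dumps(msg).encode("utf-8") + b"\0"

def decode_message(raw):
    # Strip the trailing NUL terminator before parsing.
    return json.loads(raw.rstrip(b"\0").decode("utf-8"))

call = encode_call("io.podman.ListContainers")
print(call)  # b'{"method": "io.podman.ListContainers"}\x00'
reply = decode_message(b'{"parameters": {"containers": []}}\0')
print(reply["parameters"])
```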
> > > ¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/ > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > How might this affect upgrades? -J -- Jason E. Rist Senior Software Engineer OpenStack User Interfaces Red Hat, Inc. Freenode: jrist github/twitter: knowncitizen From aschultz at redhat.com Wed Aug 15 23:11:58 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 15 Aug 2018 17:11:58 -0600 Subject: [openstack-dev] [tripleo] CI is blocked Message-ID: Please do not approve or recheck anything until further notice. We've got a few issues that have basically broken all the jobs. https://bugs.launchpad.net/tripleo/+bug/1786764 https://bugs.launchpad.net/tripleo/+bug/1787226 https://bugs.launchpad.net/tripleo/+bug/1787244 https://bugs.launchpad.net/tripleo/+bug/1787268 Thanks, -Alex From prometheanfire at gentoo.org Wed Aug 15 23:13:29 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 15 Aug 2018 18:13:29 -0500 Subject: [openstack-dev] [requirements] moved to [storyboard] Message-ID: <20180815231329.fjr5t4m6z5iip3nr@gentoo.org> We've moved, please forward future correspondence to the following location. https://storyboard.openstack.org/#!/project/openstack/requirements -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jillr at redhat.com Wed Aug 15 23:15:19 2018 From: jillr at redhat.com (Jill Rouleau) Date: Wed, 15 Aug 2018 16:15:19 -0700 Subject: [openstack-dev] [tripleo] ansible roles in tripleo In-Reply-To: References: <1534269113.6400.11.camel@redhat.com> Message-ID: <1534374919.5848.9.camel@redhat.com> On Wed, 2018-08-15 at 13:08 -0600, Alex Schultz wrote: > On Tue, Aug 14, 2018 at 11:51 AM, Jill Rouleau > wrote: > > > > Hey folks, > > > > Like Alex mentioned[0] earlier, we've created a bunch of ansible > > roles > > for tripleo specific bits.  The idea is to start putting some basic > > cookiecutter type things in them to get things started, then move > > some > > low-hanging fruit out of tripleo-heat-templates and into the > > appropriate > > roles.  For example, docker/services/keystone.yaml could have > > upgrade_tasks and fast_forward_upgrade_tasks moved into ansible- > > role- > > tripleo-keystone/tasks/(upgrade.yml|fast_forward_upgrade.yml), and > > the > > t-h-t updated to > > include_role: ansible-role-tripleo-keystone > >   tasks_from: upgrade.yml > > without having to modify any puppet or heat directives. > > > Do we have any examples of what the upgrade.yml would be, or what type > of variables (naming conventions or otherwise) would need to be > handled as part of this transition?  I assume we may want to continue > passing in some variable to indicate the current deployment step.  Is > there something along these lines that we will be proposing or need to > handle?  We're already doing something similar with the > host_prep_tasks for the docker registry[0] but we have a set_fact > block to pass parameters in.   I'm assuming we'll need to define > something similar. The task file would look very much like the task as it exists in t-h-t today.
For example, if we took the FFU task out of docker/services/keystone.yaml, wrote it to ansible-role-tripleo-keystone/tasks/fast_forward_upgrade.yml, and updated any necessary vars (like steps) to be role vars, like tripleo_keystone_step, then docker/services/keystone.yaml could be updated like:

fast_forward_upgrade_tasks:
  include_role:
    name: ansible-role-tripleo-keystone
    tasks_from: fast_forward_upgrade
  vars:
    tripleo_keystone_step: step|int

- Jill > > Thanks, > -Alex > > [0] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/puppet/services/docker-registry.yaml#n54 > > > > > This would let us define some patterns for implementing these > > tripleo > > roles during Stein while looking at how we can make use of ansible > > for > > things like core config. > > > > t-h-t and config-download will still drive the vast majority of > > playbook > > creation for now, but for new playbooks (such as for operations > > tasks) > > tripleo-ansible[1] would be our project directory. > > > > So in addition to the larger conversation about how deployers can > > start > > to standardize how we're all using ansible, I'd like to also have a > > tripleo-specific conversation at PTG on how we can break out some of > > our > > ansible that's currently embedded in t-h-t into more modular and > > flexible roles.
> > > > Cheers, > > Jill > > > > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-August/1 > > 3311 > > 9.html > > [1] https://git.openstack.org/cgit/openstack/tripleo-ansible/tree/ > > ____________________________________________________________________ > > ______ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsub > > scribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > ______________________________________________________________________ > ____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubsc > ribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From davanum at gmail.com Thu Aug 16 00:53:15 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Thu, 16 Aug 2018 08:53:15 +0800 Subject: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core In-Reply-To: <7145463d-929b-1cd3-3580-30400bd68c37@nemebean.com> References: <7145463d-929b-1cd3-3580-30400bd68c37@nemebean.com> Message-ID: Belated +1. Welcome Zane! On Thu, Aug 16, 2018 at 5:35 AM Ben Nemec wrote: > Since there were no objections, I've added Zane to the oslo.service core > team. Thanks and welcome, Zane! > > On 08/03/2018 11:58 AM, Ben Nemec wrote: > > Hi, > > > > Zane has been doing some good work in oslo.service recently and I would > > like to add him to the core team. I know he's got a lot on his plate > > already, but he has taken the time to propose and review patches in > > oslo.service and has demonstrated an understanding of the code. > > > > Please respond with +1 or any concerns you may have. Thanks. 
> > > > -Ben > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Aug 16 02:13:12 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 15 Aug 2018 20:13:12 -0600 Subject: [openstack-dev] [tripleo] CI is blocked In-Reply-To: References: Message-ID: On Wed, Aug 15, 2018 at 7:13 PM Alex Schultz wrote: > Please do not approve or recheck anything until further notice. We've > got a few issues that have basically broken all the jobs. > > https://bugs.launchpad.net/tripleo/+bug/1786764 > https://bugs.launchpad.net/tripleo/+bug/1787226 > https://bugs.launchpad.net/tripleo/+bug/1787244 > https://bugs.launchpad.net/tripleo/+bug/1787268 https://bugs.launchpad.net/tripleo/+bug/1736950 weeee > > > Thanks, > -Alex > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Thu Aug 16 02:38:20 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 16 Aug 2018 11:38:20 +0900 Subject: [openstack-dev] [release][qa][devstack][all] Pre-Notifiaction for DevStack branch cut for Rocky Message-ID: <16540991b37.c6e846f421543.25852349471037865@ghanshyammann.com> Hi All, We are in process of cutting the Rocky branch for Devstack[1]. As per process[2], we need to wait for minimum set of project (which needed branch) used by Devstack to be branched first. As dhellmann mentioned on patch, All of the cycle-with-milestone projects are branched and we are waiting to hear go ahead from swift team. Other than Swift, if any other project needs more work or to be branched before we branched devstack, feel free to reply here or on gerrit patch. [1] https://review.openstack.org/#/c/591563/ [2] https://releases.openstack.org/reference/process.html#rc1 -gmann From aj at suse.com Thu Aug 16 04:27:39 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 16 Aug 2018 06:27:39 +0200 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <20180815192801.GC27536@thor.bakeyournoodle.com> References: <20180813184055.a846b4a4d5a513722dbcc4ae@redhat.com> <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> <20180813211730.cze4vpknwncpqg3b@gentoo.org> <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> <20180815052528.GA27536@thor.bakeyournoodle.com> <1534338477-sup-7813@lrrr.local> <20180815192801.GC27536@thor.bakeyournoodle.com> Message-ID: <38413aa0-4624-228c-1034-79fcbff49896@suse.com> On 2018-08-15 21:28, Tony Breeds wrote: > On Wed, Aug 15, 2018 at 09:10:18AM -0400, Doug Hellmann wrote: >> Excerpts from Andreas Jaeger's message of 2018-08-15 09:28:51 +0200: >>> On 08/15/2018 07:25 AM, Tony Breeds wrote: >>>> On Tue, Aug 14, 2018 at 04:22:16PM -0400, Doug Hellmann wrote: >>>> >>>>> Now that https://review.openstack.org/#/c/591671/ has landed, we need >>>>> someone 
to propose the backports of the constraint updates to all of the >>>>> existing stable branches. >>>> >>>> Done: >>>> https://review.openstack.org/#/q/owner:tonyb+topic:openstackdocstheme+project:openstack/requirements >>>> >>>> I'm not entirely convinced such a new release will work on older >>>> branches but I guess that's what CI is for :) >>> >>> openstackdocsstheme has: >>> sphinx!=1.6.6,!=1.6.7,>=1.6.2 >>> >>> So, we cannot use it on branches that constraint sphinx to an older version, >>> >>> Sorry, can't check this right now from where I am, >>> Andreas >> >> That's a good point. We should give it a try, though. I don't think >> pip's constraints resolver takes version specifiers into account, so we >> should get the older sphinx and the newer theme. If those do happen to >> work together, it should be OK. >> >> If not, we need another solution. We may have to do more work to >> backport the theme change into an older version of the library to >> make it work in the old branches. > > The queens and pike backports have merged but ocata filed with[1] > > ContextualVersionConflict: (pbr 1.10.0 (/home/zuul/src/git.openstack.org/openstack/requirements/.tox/py27-check-uc/lib/python2.7/site-packages), Requirement.parse('pbr!=2.1.0,>=2.0.0'), set(['openstackdocstheme'])) > > So we can't use the rocky release on ocata. I assume we need to do > something to ensure the docs are generated correctly. Ocata should be retired by now ;) Let's drop it... thanks, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From tony at bakeyournoodle.com Thu Aug 16 05:38:04 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 16 Aug 2018 15:38:04 +1000 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <38413aa0-4624-228c-1034-79fcbff49896@suse.com> References: <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> <20180813211730.cze4vpknwncpqg3b@gentoo.org> <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> <20180815052528.GA27536@thor.bakeyournoodle.com> <1534338477-sup-7813@lrrr.local> <20180815192801.GC27536@thor.bakeyournoodle.com> <38413aa0-4624-228c-1034-79fcbff49896@suse.com> Message-ID: <20180816053804.GF27536@thor.bakeyournoodle.com> On Thu, Aug 16, 2018 at 06:27:39AM +0200, Andreas Jaeger wrote: > Ocata should be retired by now ;) Let's drop it... *cough* extended maintenance *cough* ;P So we don't need the Ocata docs to be rebuilt with this version? Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From cjeanner at redhat.com Thu Aug 16 05:39:51 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 16 Aug 2018 07:39:51 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: Message-ID: <9f71a2e1-a014-9faa-b8d8-792db584e53d@redhat.com> On 08/16/2018 12:10 AM, Jason E. Rist wrote: > On 08/15/2018 03:32 AM, Cédric Jeanneret wrote: >> Dear Community, >> >> As you may know, a move toward Podman as replacement of Docker is starting. >> >> One of the issues with podman is the lack of daemon, precisely the lack >> of a socket allowing to send commands and get a "computer formatted >> output" (like JSON or YAML or...). 
>> >> In order to work that out, Podman has added support for varlink¹, using >> the "socket activation" feature in Systemd. >> >> On my side, I would like to push forward the integration of varlink in >> TripleO deployed containers, especially since it will allow the following: >> # proper interface with Paunch (via python link) >> >> # a way to manage containers from within specific containers (think >> "healthcheck", "monitoring") by mounting the socket as a shared volume >> >> # a way to get container statistics (think "metrics") >> >> # a way, if needed, to get an ansible module being able to talk to >> podman (JSON is always better than plain text) >> >> # a way to secure the accesses to Podman management (we have to define >> how varlink talks to Podman, maybe providing dedicated socket with >> dedicated rights so that we can have dedicated users for specific tasks) >> >> That said, I have some questions: >> ° Does any of you have some experience with varlink and podman interface? >> ° What do you think about that integration wish? >> ° Does any of you have concern with this possible addition? >> >> Thank you for your feedback and ideas. >> >> Have a great day (or evening, or whatever suits the time you're reading >> this ;))! >> >> C. >> >> >> ¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/ >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > How might this effect upgrades? What exactly? addition of varlink, or the whole podman thingy? The question was more about "varlink" than "podman" in fact - I should maybe have worded things otherwise... ? > > -J > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From aj at suse.com Thu Aug 16 06:42:22 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 16 Aug 2018 08:42:22 +0200 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <20180816053804.GF27536@thor.bakeyournoodle.com> References: <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> <20180813211730.cze4vpknwncpqg3b@gentoo.org> <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> <20180815052528.GA27536@thor.bakeyournoodle.com> <1534338477-sup-7813@lrrr.local> <20180815192801.GC27536@thor.bakeyournoodle.com> <38413aa0-4624-228c-1034-79fcbff49896@suse.com> <20180816053804.GF27536@thor.bakeyournoodle.com> Message-ID: <377c955d-b1a5-2d29-1ab6-333e0909e990@suse.com> On 2018-08-16 07:38, Tony Breeds wrote: > On Thu, Aug 16, 2018 at 06:27:39AM +0200, Andreas Jaeger wrote: > >> Ocata should be retired by now ;) Let's drop it... > > *cough* extended maintenance *cough* ;P Ah, forget about that. > So we don't need the Ocata docs to be rebuilt with this version? Ocata uses older sphinx etc. It would be nice - but not sure about the effort, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From aj at suse.com Thu Aug 16 07:07:01 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 16 Aug 2018 09:07:01 +0200 Subject: [openstack-dev] [docs] Retire rst2bash Message-ID: <4ec71dce-8dbb-42dd-f95b-238dc063b1af@suse.com> The rst2bash repo is dead and unused. It was created to help us with Install Guide testing - and this implementation was not finished and the Install Guide looks completely different now. 
I propose to retire it, see
https://review.openstack.org/#/q/topic:retire-rst2bash for changes,

Andreas
-- 
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From lijie at unitedstack.com Thu Aug 16 07:18:32 2018
From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=)
Date: Thu, 16 Aug 2018 15:18:32 +0800
Subject: [openstack-dev] [docs][nova] about update flavor
Message-ID: 

Hi, all

I find that it is supported to update the flavor name, VCPUs, RAM, root disk, ephemeral disk and so on in doc.openstack.org[1], but in fact only the flavor properties can be changed. Is the document wrong? Can you tell me more about this? Thank you very much.

[1] https://docs.openstack.org/horizon/latest/admin/manage-flavors.html

Best Regards
Rambo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhengzhenyulixi at gmail.com Thu Aug 16 07:53:41 2018
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Thu, 16 Aug 2018 15:53:41 +0800
Subject: [openstack-dev] [Nova] A multi-cell instance-list performance test
Message-ID: 

Hi, Nova

As the Cells v2 architecture is getting mature, and CERN uses it and it seems to work well, *Huawei* is also willing to consider using it in our Public Cloud deployments. As we still had concerns about the performance of multi-cell listing, *Yikun Jiang* and I recently ran a performance test for ``instance list`` across a multi-cell deployment, and we would like to share our test results and findings.

First, I want to point out our testing environment: as we (Yikun and I) are doing this as a concept test (to show the ratio between the time spent querying data from the DB, sorting, etc.), we are doing it on our own machine. The machine has 16 CPUs and 80 GB RAM; as it is old, the disk might be slow.
So we will not be judging the time consumption data itself, but the overall logic and the ratios between the different steps. We are doing it with a devstack deployment on this single machine.

Then our test plan: we set up 10 cells (cell1~cell10) and generated 10000 instance records in those cells (considering 20 instances per host, that would be like 500 hosts, which seems a good size for a cell). cell0 is kept empty, as the number of errored instances should be very small and it doesn't really matter.

We tested the time consumption for listing instances across 1, 2, 5, and 10 cells (cell0 is always queried, so it is actually 2, 3, 6 and 11 cells) with limits of 100, 200, 500 and 1000, as the default maximum limit is 1000. In order to get more general results, we tested the list with the default sort key and direction, sorted by instance_uuid, and sorted by uuid & name.

This is what we got (the time unit is seconds; for each sort mode the columns are Total Cost / Data Gather Cost / Merge Sort Cost / Construct View):

Cells Limit | Default sort                  | uuid sort                     | uuid+name sort
10    100   |  2.3313  2.1306 0.1145 0.0672 |  2.3693  2.1343 0.1148 0.1016 |  2.3284  2.1264 0.1145 0.0679
10    200   |  3.5979  3.2137 0.2287 0.1265 |  3.5316  3.1509 0.2265 0.1255 |  3.481   3.054  0.2697 0.1284
10    500   |  7.1952  6.2597 0.5704 0.3029 |  7.5057  6.4761 0.6263 0.341  |  7.4885  6.4623 0.6239 0.3404
10    1000  | 13.5745 11.7012 1.1511 0.5966 | 13.8408 11.9007 1.2268 0.5939 | 13.8813 11.913  1.2301 0.6187
5     100   |  1.3142  1.1003 0.1163 0.0706 |  1.2458  1.0498 0.1163 0.0665 |  1.2528  1.0579 0.1161 0.066
5     200   |  2.0151  1.6063 0.2645 0.1255 |  1.9866  1.5386 0.2668 0.1615 |  2.0352  1.6246 0.2646 0.1262
5     500   |  4.2109  3.1358 0.7033 0.3343 |  4.1605  3.0893 0.6951 0.3384 |  4.1972  3.2461 0.6104 0.3028
5     1000  |  7.841   5.8881 1.2027 0.6802 |  7.7135  5.9121 1.1363 0.5969 |  7.8377  5.9385 1.1936 0.6376
2     100   |  0.6736  0.4727 0.1113 0.0822 |  0.605   0.4192 0.1105 0.0656 |  0.688   0.4613 0.1126 0.0682
2     200   |  1.1226  0.7229 0.2577 0.1255 |  1.0268  0.6671 0.2255 0.1254 |  1.2805  0.8171 0.2222 0.1258
2     500   |  2.2358  1.3506 0.5595 0.3026 |  2.3307  1.2748 0.6581 0.3362 |  2.741   1.6023 0.633  0.3365
2     1000  |  4.2079  2.3367 1.2053 0.5986 |  4.2384  2.4071 1.2017 0.633  |  4.3437  2.4136 1.217  0.6394
1     100   |  0.4857  0.2869 0.1097 0.069  |  0.4205  0.233  0.1131 0.0672 |  0.6372  0.3305 0.196  0.0681
1     200   |  0.6835  0.3236 0.2212 0.1256 |  0.7777  0.3754 0.261  0.13   |  0.9245  0.4527 0.227  0.129
1     500   |  1.5848  0.6415 0.6251 0.3043 |  1.6472  0.6554 0.6292 0.3053 |  1.9455  0.8201 0.5918 0.3447
1     1000  |  3.1692  1.2124 1.2246 0.6762 |  3.0836  1.2286 1.2055 0.643  |  3.0991  1.2248 1.2615 0.6028

Our conclusions from the data are:

1. The time consumed by the *MERGE SORT* step correlates strongly with the *LIMIT*, and seems *not* affected by the *number of cells*;
2. The major time consumer of the whole process is actually the data gathering step, so we took a closer look at that.

We added some audit logging to the code, and in the log we saw:

02:24:53.376705 db begin, nova_cell0
02:24:53.425836 db end, nova_cell0: 0.0487968921661
02:24:53.426622 db begin, nova_cell1
02:24:54.451235 db end, nova_cell1: 1.02400803566
02:24:54.451991 db begin, nova_cell2
02:24:55.715769 db end, nova_cell2: 1.26333093643
02:24:55.716575 db begin, nova_cell3
02:24:56.963428 db end, nova_cell3: 1.24626398087
02:24:56.964202 db begin, nova_cell4
02:24:57.980187 db end, nova_cell4: 1.01546406746
02:24:57.980970 db begin, nova_cell5
02:24:59.279139 db end, nova_cell5: 1.29762792587
02:24:59.279904 db begin, nova_cell6
02:25:00.311717 db end, nova_cell6: 1.03130197525
02:25:00.312427 db begin, nova_cell7
02:25:01.654819 db end, nova_cell7: 1.34187483788
02:25:01.655643 db begin, nova_cell8
02:25:02.689731 db end, nova_cell8: 1.03352093697
02:25:02.690502 db begin, nova_cell9
02:25:04.076885 db end, nova_cell9: 1.38588285446

Yes, the DB queries were executed in serial; after some investigation, it seems that we
are unable to perform eventlet.monkey_patch in uWSGI mode, so Yikun made this fix:
https://review.openstack.org/#/c/592285/

After making this change, we tested again, and we got this kind of data:

                     total     collect   sort      view
before monkey_patch  13.5745   11.7012   1.1511    0.5966
after monkey_patch   12.8367   10.5471   1.5642    0.6041

The performance improved a little, and in the log we saw:

Aug 16 02:14:46.383081 begin detail api
Aug 16 02:14:46.406766 begin cell gather begin
Aug 16 02:14:46.419346 db begin, nova_cell0
Aug 16 02:14:46.425065 db begin, nova_cell1
Aug 16 02:14:46.430151 db begin, nova_cell2
Aug 16 02:14:46.435012 db begin, nova_cell3
Aug 16 02:14:46.440634 db begin, nova_cell4
Aug 16 02:14:46.446191 db begin, nova_cell5
Aug 16 02:14:46.450749 db begin, nova_cell6
Aug 16 02:14:46.455461 db begin, nova_cell7
Aug 16 02:14:46.459959 db begin, nova_cell8
Aug 16 02:14:46.466066 db begin, nova_cell9
Aug 16 02:14:46.470550 db begin, nova_cell10
Aug 16 02:14:46.731882 db end, nova_cell0: 0.311906099319
Aug 16 02:14:52.667791 db end, nova_cell5: 6.22100400925
Aug 16 02:14:54.065655 db end, nova_cell1: 7.63998198509
Aug 16 02:14:54.939856 db end, nova_cell3: 8.50425100327
Aug 16 02:14:55.309017 db end, nova_cell6: 8.85762405396
Aug 16 02:14:55.309623 db end, nova_cell8: 8.84928393364
Aug 16 02:14:55.310240 db end, nova_cell2: 8.87976694107
Aug 16 02:14:56.057487 db end, nova_cell10: 9.58636116982
Aug 16 02:14:56.058001 db end, nova_cell4: 9.61698698997
Aug 16 02:14:56.058547 db end, nova_cell9: 9.59216403961
Aug 16 02:14:56.954209 db end, nova_cell7: 10.4981210232
Aug 16 02:14:56.954665 end cell gather end: 10.5480799675
Aug 16 02:14:56.955010 begin heapq.merge
Aug 16 02:14:58.527040 end heapq.merge: 1.57150006294

So now the queries run in parallel, but the whole process still looks partly serial. We tried adjusting database configs such as max_thread_pool and use_tpool, and we also tried using a separate DB for some of the cells, but the results showed no big difference.
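The merge step named in the log above is essentially a k-way merge of per-cell sorted result lists, capped at the request limit, which is also why its cost tracks the limit rather than the cell count. A minimal stdlib-only sketch of the idea (not nova's actual code; the record layout here is invented for illustration):

```python
import heapq
from itertools import islice

def merge_cell_results(per_cell_results, limit, sort_key):
    """Merge already-sorted per-cell result lists, stopping at `limit`.

    heapq.merge is lazy: it pops the globally smallest record one at a
    time, so only about `limit` comparisons happen regardless of how
    many records each cell returned.
    """
    merged = heapq.merge(*per_cell_results, key=sort_key)
    return list(islice(merged, limit))

# Fake per-cell data: 3 "cells", each pre-sorted by a created_at field.
cells = [
    [{"cell": c, "created_at": t} for t in range(c, 30, 3)]
    for c in range(3)
]
page = merge_cell_results(cells, limit=5, sort_key=lambda r: r["created_at"])
print([r["created_at"] for r in page])  # → [0, 1, 2, 3, 4]
```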
So, the above is what we have for now; feel free to ping us if you have any questions or suggestions.

BR,

Zhenyu Zheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhengzhenyulixi at gmail.com Thu Aug 16 07:56:00 2018
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Thu, 16 Aug 2018 15:56:00 +0800
Subject: [openstack-dev] [docs][nova] about update flavor
In-Reply-To: References: Message-ID: 

We only allow updating the flavor description (added in microversion 2.55) in Nova; what horizon did was just delete the old flavor and create a new one, and I think that behaviour was removed last year.

On Thu, Aug 16, 2018 at 3:19 PM Rambo wrote:
> Hi, all
>
> I find that it is supported to update the flavor name, VCPUs,
> RAM, root disk, ephemeral disk and so on in doc.openstack.org[1], but
> in fact only the flavor properties can be changed. Is the document wrong? Can you
> tell me more about this? Thank you very much.
>
> [1] https://docs.openstack.org/horizon/latest/admin/manage-flavors.html
>
> Best Regards
> Rambo
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ameetcloud at gmail.com Thu Aug 16 08:18:12 2018
From: ameetcloud at gmail.com (Ameet Gandhare)
Date: Thu, 16 Aug 2018 13:48:12 +0530
Subject: [openstack-dev] QA work
Message-ID: 

Hi Everybody,

Does anybody want some QA/testing work to be done on their modules?

-Regards,
Ameet
-------------- next part --------------
An HTML attachment was scrubbed... 
URL: 

From lijie at unitedstack.com Thu Aug 16 08:29:25 2018
From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=)
Date: Thu, 16 Aug 2018 16:29:25 +0800
Subject: [openstack-dev] [docs][nova] about update flavor
In-Reply-To: References: Message-ID: 

Sorry, I don't understand what has been removed - the docs about updating the flavor's CPU? Otherwise, why don't we consider adding a function so that we can update the flavor's CPU?

------------------ Original ------------------
From: "Zhenyu Zheng";
Date: Thursday, 16 Aug 2018, 3:56 PM
To: "OpenStack Developmen";
Subject: Re: [openstack-dev] [docs][nova] about update flavor

We only allow updating the flavor description (added in microversion 2.55) in Nova; what horizon did was just delete the old flavor and create a new one, and I think that behaviour was removed last year.

On Thu, Aug 16, 2018 at 3:19 PM Rambo wrote:

Hi, all

I find that it is supported to update the flavor name, VCPUs, RAM, root disk, ephemeral disk and so on in doc.openstack.org[1], but in fact only the flavor properties can be changed. Is the document wrong? Can you tell me more about this? Thank you very much.

[1] https://docs.openstack.org/horizon/latest/admin/manage-flavors.html

Best Regards
Rambo
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed... 
URL: From shardy at redhat.com Thu Aug 16 08:38:02 2018 From: shardy at redhat.com (Steven Hardy) Date: Thu, 16 Aug 2018 09:38:02 +0100 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> Message-ID: On Wed, Aug 15, 2018 at 10:48 PM, Jay Pipes wrote: > On 08/15/2018 04:01 PM, Emilien Macchi wrote: >> >> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi > > wrote: >> >> More seriously here: there is an ongoing effort to converge the >> tools around containerization within Red Hat, and we, TripleO are >> interested to continue the containerization of our services (which >> was initially done with Docker & Docker-Distribution). >> We're looking at how these containers could be managed by k8s one >> day but way before that we plan to swap out Docker and join CRI-O >> efforts, which seem to be using Podman + Buildah (among other things). >> >> I guess my wording wasn't the best but Alex explained way better here: >> >> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >> >> If I may have a chance to rephrase, I guess our current intention is to >> continue our containerization and investigate how we can improve our tooling >> to better orchestrate the containers. >> We have a nice interface (openstack/paunch) that allows us to run multiple >> container backends, and we're currently looking outside of Docker to see how >> we could solve our current challenges with the new tools. >> We're looking at CRI-O because it happens to be a project with a great >> community, focusing on some problems that we, TripleO have been facing since >> we containerized our services. >> >> We're doing all of this in the open, so feel free to ask any question. > > > I appreciate your response, Emilien, thank you. 
Alex' responses to Jeremy on > the #openstack-tc channel were informative, thank you Alex. > > For now, it *seems* to me that all of the chosen tooling is very Red Hat > centric. Which makes sense to me, considering Triple-O is a Red Hat product. Just as a point of clarification - TripleO is an OpenStack project, and yes there is a downstream product derived from it, but we could e.g support multiple container backends in TripleO if there was community interest in supporting that. Also I think Alex already explained that fairly clearly in the IRC link that this is initially about proving our existing abstractions work to enable alternate container backends. Thanks, Steve From zhengzhenyulixi at gmail.com Thu Aug 16 08:41:34 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Thu, 16 Aug 2018 16:41:34 +0800 Subject: [openstack-dev] [docs][nova] about update flavor In-Reply-To: References: Message-ID: I mean this https://review.openstack.org/#/c/491442/ and the related ML http://lists.openstack.org/pipermail/openstack-dev/2017-August/120540.html On Thu, Aug 16, 2018 at 4:30 PM Rambo wrote: > Sorry,I don't understand what has been removed,the docs about the update > the flavor's cpu?Otherwise,why we don't consider to add the function that > we can update the flavor's cpu? > > > ------------------ Original ------------------ > *From:* "Zhenyu Zheng"; > *Date:* 2018年8月16日(星期四) 下午3:56 > *To:* "OpenStack Developmen"; > *Subject:* Re: [openstack-dev] [docs][nova] about update flavor > > We only allow update flavor descriptions(added in microversion 2.55) in > Nova and what the horizon did was just delete the old one and create a new > one, and I think it has been removed in last year. 
> > On Thu, Aug 16, 2018 at 3:19 PM Rambo wrote:
> >> Hi, all
> >>
> >> I find that it is supported to update the flavor name, VCPUs,
> >> RAM, root disk, ephemeral disk and so on in doc.openstack.org[1], but
> >> in fact only the flavor properties can be changed. Is the document wrong? Can you
> >> tell me more about this? Thank you very much.
> >>
> >> [1] https://docs.openstack.org/horizon/latest/admin/manage-flavors.html
> >>
> >> Best Regards
> >> Rambo
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yikunkero at gmail.com Thu Aug 16 08:53:09 2018
From: yikunkero at gmail.com (Yikun Jiang)
Date: Thu, 16 Aug 2018 16:53:09 +0800
Subject: [openstack-dev] [Nova] A multi-cell instance-list performance test
In-Reply-To: References: Message-ID: 

Some more information:

*1. How did we record the time when listing?*

You can see all our changes in: http://paste.openstack.org/show/728162/
Total cost: L26
Construct view: L43
Data gather per cell cost: L152
Data gather all cells cost: L174
Merge Sort cost: L198

*2. Why is it not parallel in the first result?*

The root reason the data gathering in the first table is not parallel is that we don't enable eventlet.monkey_patch (in particular, the time flag is not True) under uWSGI. The oslo_db thread yield [2] therefore doesn't work, and all DB data-gathering threads are blocked until they get all their data from the DB [1].
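For illustration, the scatter-gather shape being discussed can be sketched with plain OS threads. Nova actually uses eventlet greenthreads (which is why the missing monkey patch blocked everything), so this stdlib sketch only shows the shape of the operation; the `query_cell` stand-in and its delay are invented:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def query_cell(cell_name, delay=0.05):
    """Stand-in for one per-cell DB query; sleep simulates DB latency."""
    time.sleep(delay)
    return (cell_name, ["instance-a", "instance-b"])

def scatter_gather(cell_names):
    """Query every cell concurrently and collect {cell: results}."""
    with ThreadPoolExecutor(max_workers=len(cell_names)) as pool:
        return dict(pool.map(query_cell, cell_names))

cells = [f"nova_cell{i}" for i in range(5)]
start = time.monotonic()
results = scatter_gather(cells)
elapsed = time.monotonic() - start
# If the queries really overlap, wall time is close to one delay (~0.05s),
# not 5 * 0.05s as in the serial trace shown earlier in the thread.
print(len(results), round(elapsed, 2))
```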
In the end, the gathering process was effectively executed in serial, so we fixed it in [2]. But even after the fix [2] there was less improvement than we expected; the threads still seem to influence each other, so we need your ideas. : )

[1] https://github.com/openstack/oslo.db/blob/256ebc3/oslo_db/sqlalchemy/engines.py#L51
[2] https://review.openstack.org/#/c/592285/

Regards,
Yikun
----------------------------------------
Jiang Yikun(Kero)
Mail: yikunkero at gmail.com

Zhenyu Zheng wrote on Thu, Aug 16, 2018 at 3:54 PM:
> Hi, Nova
>
> As the Cells v2 architecture is getting mature, and CERN used it and it seems
> to work well, *Huawei* is also willing to consider using this in our
> Public Cloud deployments.
> As we still have concerns about the performance when doing multi-cell
> listing, recently *Yikun Jiang* and I have done a performance test for
> ``instance list`` across a multi-cell deployment; we would like to share
> our test results and findings.
>
> First, I want to point out our testing environment: as we (Yikun and I) are
> doing this as a concept test (to show the ratio between time consumption
> for querying data from the DB, sorting, etc.) we are doing it on our own
> machine. The machine has 16 CPUs and 80 GB RAM; as it is old, the disk
> might be slow. So we will not be judging the time consumption data itself,
> but the overall logic and the ratios between different steps. We are doing
> it with a devstack deployment on this single machine.
>
> Then I would like to share our test plan: we set up 10 cells
> (cell1~cell10) and generated 10000 instance records in those cells
> (considering 20 instances per host, it would be like 500 hosts, which
> seems a good size for a cell). cell0 is kept empty, as the number of
> errored instances should be very small and it doesn't really matter.
> We will test the time consumption for listing instances across 1, 2, 5, and
> 10 cells (cell0 will always be queried, so it is actually 2, 3, 6 and 11
> cells) with limits of 100, 200, 500 and 1000, as the default maximum limit
> is 1000. In order to get more general results, we tested the list with the
> default sort key and dir, sort by instance_uuid and sort by uuid & name;
> this should provide a more general result.
>
> This is what we got (the time unit is seconds; per sort mode the columns
> are Total Cost / Data Gather Cost / Merge Sort Cost / Construct View):
>
> Cells Limit | Default sort                  | uuid sort                     | uuid+name sort
> 10    100   |  2.3313  2.1306 0.1145 0.0672 |  2.3693  2.1343 0.1148 0.1016 |  2.3284  2.1264 0.1145 0.0679
> 10    200   |  3.5979  3.2137 0.2287 0.1265 |  3.5316  3.1509 0.2265 0.1255 |  3.481   3.054  0.2697 0.1284
> 10    500   |  7.1952  6.2597 0.5704 0.3029 |  7.5057  6.4761 0.6263 0.341  |  7.4885  6.4623 0.6239 0.3404
> 10    1000  | 13.5745 11.7012 1.1511 0.5966 | 13.8408 11.9007 1.2268 0.5939 | 13.8813 11.913  1.2301 0.6187
> 5     100   |  1.3142  1.1003 0.1163 0.0706 |  1.2458  1.0498 0.1163 0.0665 |  1.2528  1.0579 0.1161 0.066
> 5     200   |  2.0151  1.6063 0.2645 0.1255 |  1.9866  1.5386 0.2668 0.1615 |  2.0352  1.6246 0.2646 0.1262
> 5     500   |  4.2109  3.1358 0.7033 0.3343 |  4.1605  3.0893 0.6951 0.3384 |  4.1972  3.2461 0.6104 0.3028
> 5     1000  |  7.841   5.8881 1.2027 0.6802 |  7.7135  5.9121 1.1363 0.5969 |  7.8377  5.9385 1.1936 0.6376
> 2     100   |  0.6736  0.4727 0.1113 0.0822 |  0.605   0.4192 0.1105 0.0656 |  0.688   0.4613 0.1126 0.0682
> 2     200   |  1.1226  0.7229 0.2577 0.1255 |  1.0268  0.6671 0.2255 0.1254 |  1.2805  0.8171 0.2222 0.1258
> 2     500   |  2.2358  1.3506 0.5595 0.3026 |  2.3307  1.2748 0.6581 0.3362 |  2.741   1.6023 0.633  0.3365
> 2     1000  |  4.2079  2.3367 1.2053 0.5986 |  4.2384  2.4071 1.2017 0.633  |  4.3437  2.4136 1.217  0.6394
> 1     100   |  0.4857  0.2869 0.1097 0.069  |  0.4205  0.233  0.1131 0.0672 |  0.6372  0.3305 0.196  0.0681
> 1     200   |  0.6835  0.3236 0.2212 0.1256 |  0.7777  0.3754 0.261  0.13   |  0.9245  0.4527 0.227  0.129
> 1     500   |  1.5848  0.6415 0.6251 0.3043 |  1.6472  0.6554 0.6292 0.3053 |  1.9455  0.8201
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com Thu Aug 16 08:55:12 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 16 Aug 2018 17:55:12 +0900
Subject: [openstack-dev] [release][qa] QA Rocky release status
Message-ID: <16541f221b2.ae38c79726900.95614780576267481@ghanshyammann.com>

Hi All,

QA has a lot of sub-projects and this mail tracks their release status for the Rocky cycle. I will be on vacation from the coming Monday for the next 2 weeks (visiting India) but will be online to complete the below IN-PROGRESS items and update the status here.

IN-PROGRESS:
1. devstack: Branch. Patch is pushed to branch for Rocky which is in hold state - IN-PROGRESS [1]
2. grenade: Branch. Patch is pushed to branch for Rocky which is in hold state - IN-PROGRESS [1]
3. patrole: Release done, patch is under review[2] - IN-PROGRESS
4. tempest: Release done, patch is under review[3] - IN-PROGRESS

COMPLETED (Done or no release required):
5. bashate: independent release | Branch-less.
version 0.6.0 is released last month and no further release required in Rocky cycle. - COMPLETED 6. coverage2sql: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 7. devstack-plugin-ceph: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 8. devstack-plugin-cookiecutter: Branch-less. Not any release yet and no specific release required for Rocky. - COMPLETED 9. devstack-tools: Branch-less. version 0.4.0 is the latest version released and no further release required in Rocky cycle. - COMPLETED 10. devstack-vagrant: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 11. eslint-config-openstack: Branch-less. version 4.0.1 is the latest version released. no further release required in Rocky cycle. - COMPLETED 12. hacking: Branch-less. version 11.1.0 is the latest version released. no further release required in Rocky cycle. - COMPLETED 13. karma-subunit-reporter: Branch-less. version v0.0.4 is the latest version released. no further release required in Rocky cycle. - COMPLETED 14. openstack-health: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 15. os-performance-tools: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 16. os-testr: Branch-less. version 1.0.0 is the latest version released. no further release required in Rocky cycle. - COMPLETED 17. qa-specs: Spec repo, no release needed. - COMPLETED 18. stackviz: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 19. tempest-plugin-cookiecutter: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 20. tempest-lib: Deprecated repo, No released needed for Rocky - COMPLETED 21. tempest-stress: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 22. devstack-plugin-container: Branch. 
Release and Branched done[4] - COMPLETED

[1] https://review.openstack.org/#/q/topic:rocky-branch-devstack-grenade+(status:open+OR+status:merged)
[2] https://review.openstack.org/#/c/592277/
[3] https://review.openstack.org/#/c/592276/
[4] https://review.openstack.org/#/c/591804/

-gmann

From lijie at unitedstack.com Thu Aug 16 09:58:36 2018
From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=)
Date: Thu, 16 Aug 2018 17:58:36 +0800
Subject: [openstack-dev] [Openstack-operators][nova] ask deployment question
Message-ID: 

Hi, all

I have some questions about deploying a large-scale OpenStack cloud, such as:
1. In a single-region situation, how many physical machines is the biggest deployment scale in our community?
Can you tell me more about this, combined with your own practice? Could you give me some resources to learn from, such as websites, blogs and so on?
Thank you very much! Looking forward to hearing from you.

Best Regards
Rambo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tobias.rydberg at citynetwork.eu Thu Aug 16 10:30:13 2018
From: tobias.rydberg at citynetwork.eu (Tobias Rydberg)
Date: Thu, 16 Aug 2018 12:30:13 +0200
Subject: [openstack-dev] [publiccloud-wg] Meeting this afternoon for Public Cloud WG
Message-ID: <72285db1-6f3b-9370-1539-6030e84cfb4f@citynetwork.eu>

Hi folks,

Time for a new meeting for the Public Cloud WG. Agenda draft can be found at https://etherpad.openstack.org/p/publiccloud-wg, feel free to add items to that list.
See you all later this afternoon at IRC 1400 UTC in #openstack-publiccloud

Cheers,
Tobias

-- 
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg
www.citynetwork.eu | www.citycloud.com
INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED

From balazs.gibizer at ericsson.com Thu Aug 16 11:31:49 2018
From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer)
Date: Thu, 16 Aug 2018 13:31:49 +0200
Subject: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict
Message-ID: <1534419109.24276.3@smtp.office365.com>

Hi,

tl;dr: To properly use consumer generations (placement 1.28) in Nova we need to decide how to handle a consumer generation conflict from the Nova perspective:

a) Nova reads the current consumer_generation before the allocation update operation and uses that generation in the allocation update operation. If the allocation is changed between the read and the update, then nova fails the server lifecycle operation and lets the end user retry it.

b) Like a), but in case of conflict nova blindly retries the read-and-update operation pair a couple of times and only fails the lifecycle operation if it runs out of retries.

c) Nova stores its own view of the allocation. When a consumer's allocation needs to be modified, nova reads the current state of the consumer from placement. Then nova combines the two allocations to generate the new expected consumer state. In case of a generation conflict, nova retries the read-combine-update operation triplet.

Which way should we go now? What should be our long-term goal?

Details:

There are plenty of affected lifecycle operations. See the patch series starting at [1].

For example: The current patch[1] that handles the delete server case implements option b). It simply reads the current consumer generation from placement and uses that to send a PUT /allocations/{instance_uuid} with "allocations": {} in its body.
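Option b) amounts to a small read/update/retry loop around the consumer generation. A rough sketch of the idea; the client below is an invented in-memory stand-in (not nova's real placement client), and ConflictError plays the role of placement's 409 response:

```python
class ConflictError(Exception):
    """Stands in for placement's HTTP 409 consumer generation conflict."""

class FakePlacement:
    """In-memory stand-in for the placement allocations API."""
    def __init__(self):
        self.generation = 0
        self.allocations = {"MEMORY_MB": 512}

    def get_generation(self, consumer_uuid):
        return self.generation

    def put_allocations(self, consumer_uuid, allocations, generation):
        if generation != self.generation:
            raise ConflictError("consumer generation conflict")
        self.allocations = allocations
        self.generation += 1

def delete_allocations(placement, consumer_uuid, max_retries=3):
    """Empty a consumer's allocations, retrying on generation conflict."""
    for _ in range(max_retries + 1):
        gen = placement.get_generation(consumer_uuid)
        try:
            placement.put_allocations(consumer_uuid, {}, gen)
            return True
        except ConflictError:
            continue  # allocation changed between the read and the update
    return False  # out of retries: fail the lifecycle operation

placement = FakePlacement()
assert delete_allocations(placement, "fake-instance-uuid")
assert placement.allocations == {}
```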
Here implementing option c) would mean that during server delete nova needs:
1) to compile its own view of the resource needs of the server (currently based on the flavor, but in the future based on the attached ports' resource requests as well)
2) then read the current allocation of the server from placement
3) then subtract the server resource needs from the current allocation and send the resulting allocation back in the update to placement

In the simple case this subtraction would result in an empty allocation sent to placement. Also, in this simple case c) has the same effect as b), currently implemented in [1].

However, if somebody outside of nova modifies the allocation of this consumer in a way that nova does not know about such a changed resource need, then b) and c) will result in different placement states after server delete. I only know of one example: the change of a neutron port's resource request while the port is attached. (Note, this is out of scope in the first step of the bandwidth implementation.) In this specific example option c) can work if nova re-reads the port's resource request during delete when it recalculates its own view of the server resource needs. But I don't know if every other resource (e.g. accelerators) used by a server can be / will be handled this way.

Other examples of affected lifecycle operations:

During a server migration, moving the source host allocation from the instance_uuid to the migration_uuid fails with a consumer generation conflict because of the instance_uuid consumer generation. [2]

Confirming a migration fails as the deletion of the source host allocation fails due to the consumer generation conflict of the migration_uuid consumer that is being emptied. [3]

During scheduling of a new server, putting the allocation to instance_uuid fails as the scheduler assumes that it is a new consumer and therefore uses consumer_generation: None for the allocation, but placement reports a generation conflict.
[4]

During a non-forced evacuation the scheduler tries to claim the resources on the destination host with the instance_uuid, but that consumer already holds the source allocation, therefore the scheduler cannot assume that the instance_uuid is a new consumer. [4]

Cheers,
gibi

[1] https://review.openstack.org/#/c/591597
[2] https://review.openstack.org/#/c/591810
[3] https://review.openstack.org/#/c/591811
[4] https://review.openstack.org/#/c/583667

From balazs.gibizer at ericsson.com Thu Aug 16 11:43:23 2018
From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer)
Date: Thu, 16 Aug 2018 13:43:23 +0200
Subject: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict
In-Reply-To: <1534419109.24276.3@smtp.office365.com>
References: <1534419109.24276.3@smtp.office365.com>
Message-ID: <1534419803.3149.0@smtp.office365.com>

reformatted for readability, sorry:

Hi,

tl;dr: To properly use consumer generations (placement 1.28) in Nova we need to decide how to handle a consumer generation conflict from the Nova perspective:

a) Nova reads the current consumer_generation before the allocation update operation and uses that generation in the allocation update operation. If the allocation is changed between the read and the update, then nova fails the server lifecycle operation and lets the end user retry it.

b) Like a), but in case of conflict nova blindly retries the read-and-update operation pair a couple of times and only fails the lifecycle operation if it runs out of retries.

c) Nova stores its own view of the allocation. When a consumer's allocation needs to be modified, nova reads the current state of the consumer from placement. Then nova combines the two allocations to generate the new expected consumer state. In case of a generation conflict, nova retries the read-combine-update operation triplet.

Which way should we go now? What should be our long-term goal?

Details:

There are plenty of affected lifecycle operations.
See the patch series starting at [1].

For example: The current patch[1] that handles the delete server case implements option b). It simply reads the current consumer generation from placement and uses that to send a PUT /allocations/{instance_uuid} with "allocations": {} in its body.

Here implementing option c) would mean that during server delete nova needs: 1) to compile its own view of the resource needs of the server (currently based on the flavor, but in the future based on the attached ports' resource requests as well) 2) then read the current allocation of the server from placement 3) then subtract the server's resource needs from the current allocation and send the resulting allocation back in the update to placement

In the simple case this subtraction would result in an empty allocation sent to placement. Also in this simple case c) has the same effect as b), currently implemented in [1]. However if somebody outside of nova modifies the allocation of this consumer in a way that nova does not know about the changed resource need, then b) and c) will result in different placement states after server delete.

I only know of one example, the change of a neutron port's resource request while the port is attached. (Note, it is out of scope in the first step of the bandwidth implementation.) In this specific example option c) can work if nova re-reads the port's resource request during delete when it recalculates its own view of the server's resource needs. But I don't know if every other resource (e.g. accelerators) used by a server can be / will be handled this way.

Other examples of affected lifecycle operations:

During a server migration, moving the source host allocation from the instance_uuid to the migration_uuid fails with a consumer generation conflict because of the instance_uuid consumer generation.
[2]

Confirming a migration fails as the deletion of the source host allocation fails due to the consumer generation conflict of the migration_uuid consumer that is being emptied. [3]

During scheduling of a new server, putting the allocation to instance_uuid fails as the scheduler assumes that it is a new consumer and therefore uses consumer_generation: None for the allocation, but placement reports a generation conflict. [4]

During a non-forced evacuation the scheduler tries to claim the resources on the destination host with the instance_uuid, but that consumer already holds the source allocation, therefore the scheduler cannot assume that the instance_uuid is a new consumer. [4]

[1] https://review.openstack.org/#/c/591597
[2] https://review.openstack.org/#/c/591810
[3] https://review.openstack.org/#/c/591811
[4] https://review.openstack.org/#/c/583667

From jim at jimrollenhagen.com Thu Aug 16 11:48:30 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 16 Aug 2018 07:48:30 -0400 Subject: [openstack-dev] [Nova] A multi-cell instance-list performance test In-Reply-To: References: Message-ID:

The data as a table, for anyone reading this in plain text (and the archives): https://imgur.com/2NC45n9.png

// jim

On Thu, Aug 16, 2018 at 3:53 AM, Zhenyu Zheng wrote:
> Hi, Nova
>
> As the Cells v2 architecture is getting mature, and CERN has used it and
> it seems to work well, *Huawei* is also willing to consider using this
> in our Public Cloud deployments.
> As we still have concerns about the performance when doing multi-cell
> listing, recently *Yikun Jiang* and I have done a performance test for
> ``instance list`` across a multi-cell deployment; we would like to share
> our test results and findings.
>
> First, I want to point out our testing environment: as we (Yikun and I)
> are doing this as a concept test (to show the ratio between time spent
> querying data from the DB, sorting, etc.)
so we are doing it on our own machine. The machine has 16 CPUs and 80 GB
> of RAM; as it is old, the disk might be slow. So we will not judge the
> absolute time consumption itself, but the overall logic and the ratios
> between the different steps. We are doing it with a devstack deployment
> on this single machine.
>
> Then I would like to share our test plan: we will set up 10 cells
> (cell1~cell10) and generate 10000 instance records in those cells
> (considering 20 instances per host, it would be like 500 hosts, which
> seems a good size for a cell). cell0 is kept empty as the number of
> errored instances should be very small and it doesn't really matter.
> We will test the time consumption for listing instances across 1, 2, 5,
> and 10 cells (cell0 will always be queried, so it is actually 2, 3, 6
> and 11 cells) with limits of 100, 200, 500 and 1000, as the default
> maximum limit is 1000. In order to get more general results, we tested
> the list with the default sort key and direction, sorted by instance
> uuid, and sorted by uuid & name.
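The "merge sort" step measured in this test is essentially a k-way merge of per-cell result lists that each arrive already sorted (the audit logs further down show nova's actual `heapq.merge` call). A minimal sketch, with made-up sample data standing in for the per-cell DB results:

```python
import heapq
from itertools import islice
from operator import itemgetter

# Made-up sample data: each cell database returns its page of results
# already sorted by the requested sort key (here: instance uuid).
cell_results = [
    [{'uuid': 'a1'}, {'uuid': 'c3'}],   # cell1
    [{'uuid': 'b2'}, {'uuid': 'd4'}],   # cell2
    [{'uuid': 'a0'}, {'uuid': 'e5'}],   # cell3
]


def merge_cells(results, sort_key='uuid', limit=4):
    # k-way merge across the cells' sorted lists, truncated at `limit`;
    # this is why the measured merge cost tracks the limit rather than
    # the number of cells
    merged = heapq.merge(*results, key=itemgetter(sort_key))
    return list(islice(merged, limit))
```

With the sample data above, `merge_cells(cell_results)` yields the four globally-smallest uuids (a0, a1, b2, c3) without ever sorting the combined list.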
> This is what we got (the time unit is seconds):
>
> Default sort:
>
>   Cells  Limit   Total     Data Gather  Merge Sort  Construct View
>   10     100     2.3313    2.1306       0.1145      0.0672
>   10     200     3.5979    3.2137       0.2287      0.1265
>   10     500     7.1952    6.2597       0.5704      0.3029
>   10     1000    13.5745   11.7012      1.1511      0.5966
>   5      100     1.3142    1.1003       0.1163      0.0706
>   5      200     2.0151    1.6063       0.2645      0.1255
>   5      500     4.2109    3.1358       0.7033      0.3343
>   5      1000    7.841     5.8881       1.2027      0.6802
>   2      100     0.6736    0.4727       0.1113      0.0822
>   2      200     1.1226    0.7229       0.2577      0.1255
>   2      500     2.2358    1.3506       0.5595      0.3026
>   2      1000    4.2079    2.3367       1.2053      0.5986
>   1      100     0.4857    0.2869       0.1097      0.069
>   1      200     0.6835    0.3236       0.2212      0.1256
>   1      500     1.5848    0.6415       0.6251      0.3043
>   1      1000    3.1692    1.2124       1.2246      0.6762
>
> uuid sort:
>
>   Cells  Limit   Total     Data Gather  Merge Sort  Construct View
>   10     100     2.3693    2.1343       0.1148      0.1016
>   10     200     3.5316    3.1509       0.2265      0.1255
>   10     500     7.5057    6.4761       0.6263      0.341
>   10     1000    13.8408   11.9007      1.2268      0.5939
>   5      100     1.2458    1.0498       0.1163      0.0665
>   5      200     1.9866    1.5386       0.2668      0.1615
>   5      500     4.1605    3.0893       0.6951      0.3384
>   5      1000    7.7135    5.9121       1.1363      0.5969
>   2      100     0.605     0.4192       0.1105      0.0656
>   2      200     1.0268    0.6671       0.2255      0.1254
>   2      500     2.3307    1.2748       0.6581      0.3362
>   2      1000    4.2384    2.4071       1.2017      0.633
>   1      100     0.4205    0.233        0.1131      0.0672
>   1      200     0.7777    0.3754       0.261       0.13
>   1      500     1.6472    0.6554       0.6292      0.3053
>   1      1000    3.0836    1.2286       1.2055      0.643
>
> uuid+name sort:
>
>   Cells  Limit   Total     Data Gather  Merge Sort  Construct View
>   10     100     2.3284    2.1264       0.1145      0.0679
>   10     200     3.481     3.054        0.2697      0.1284
>   10     500     7.4885    6.4623       0.6239      0.3404
>   10     1000    13.8813   11.913       1.2301      0.6187
>   5      100     1.2528    1.0579       0.1161      0.066
>   5      200     2.0352    1.6246       0.2646      0.1262
>   5      500     4.1972    3.2461       0.6104      0.3028
>   5      1000    7.8377    5.9385       1.1936      0.6376
>   2      100     0.688     0.4613       0.1126      0.0682
>   2      200     1.2805    0.8171       0.2222      0.1258
>   2      500     2.741     1.6023       0.633       0.3365
>   2      1000    4.3437    2.4136       1.217       0.6394
>   1      100     0.6372    0.3305       0.196       0.0681
>   1      200     0.9245    0.4527       0.227       0.129
>   1      500     1.9455    0.8201       0.5918      0.3447
>   1      1000    3.0991    1.2248       1.2615      0.6028
>
> Our conclusions from the data are:
> 1. The time consumption of the *MERGE SORT* step correlates strongly
>    with the *LIMIT*, and seems *not* to be affected by the *number of
>    cells*;
> 2. The major time consumer of the whole process is actually the data
>    gathering step, so we took a closer look at that.
>
> We added some audit logging in the code, and from the logs we can see:
>
> 02:24:53.376705 db begin, nova_cell0
> 02:24:53.425836 db end, nova_cell0: 0.0487968921661
> 02:24:53.426622 db begin, nova_cell1
> 02:24:54.451235 db end, nova_cell1: 1.02400803566
> 02:24:54.451991 db begin, nova_cell2
> 02:24:55.715769 db end, nova_cell2: 1.26333093643
> 02:24:55.716575 db begin, nova_cell3
> 02:24:56.963428 db end, nova_cell3: 1.24626398087
> 02:24:56.964202 db begin, nova_cell4
> 02:24:57.980187 db end, nova_cell4: 1.01546406746
> 02:24:57.980970 db begin, nova_cell5
> 02:24:59.279139 db end, nova_cell5: 1.29762792587
> 02:24:59.279904 db begin, nova_cell6
> 02:25:00.311717 db end, nova_cell6: 1.03130197525
> 02:25:00.312427 db begin, nova_cell7
> 02:25:01.654819 db end, nova_cell7: 1.34187483788
> 02:25:01.655643 db begin, nova_cell8
> 02:25:02.689731 db end, nova_cell8: 1.03352093697
> 02:25:02.690502 db begin, nova_cell9
> 02:25:04.076885 db end, nova_cell9:
1.38588285446
>
> Yes, the DB queries were executed serially. After some investigation, it
> seems that we are unable to perform eventlet.monkey_patch in uWSGI mode,
> so Yikun made this fix:
>
> https://review.openstack.org/#/c/592285/
>
> After making this change, we tested again, and we got this kind of data:
>
>                          total     collect   sort     view
>   before monkey_patch    13.5745   11.7012   1.1511   0.5966
>   after monkey_patch     12.8367   10.5471   1.5642   0.6041
>
> The performance improved a little, and from the logs we can see:
>
> Aug 16 02:14:46.383081 begin detail api
> Aug 16 02:14:46.406766 begin cell gather begin
> Aug 16 02:14:46.419346 db begin, nova_cell0
> Aug 16 02:14:46.425065 db begin, nova_cell1
> Aug 16 02:14:46.430151 db begin, nova_cell2
> Aug 16 02:14:46.435012 db begin, nova_cell3
> Aug 16 02:14:46.440634 db begin, nova_cell4
> Aug 16 02:14:46.446191 db begin, nova_cell5
> Aug 16 02:14:46.450749 db begin, nova_cell6
> Aug 16 02:14:46.455461 db begin, nova_cell7
> Aug 16 02:14:46.459959 db begin, nova_cell8
> Aug 16 02:14:46.466066 db begin, nova_cell9
> Aug 16 02:14:46.470550 db begin, ova_cell10
> Aug 16 02:14:46.731882 db end, nova_cell0: 0.311906099319
> Aug 16 02:14:52.667791 db end, nova_cell5: 6.22100400925
> Aug 16 02:14:54.065655 db end, nova_cell1: 7.63998198509
> Aug 16 02:14:54.939856 db end, nova_cell3: 8.50425100327
> Aug 16 02:14:55.309017 db end, nova_cell6: 8.85762405396
> Aug 16 02:14:55.309623 db end, nova_cell8: 8.84928393364
> Aug 16 02:14:55.310240 db end, nova_cell2: 8.87976694107
> Aug 16 02:14:56.057487 db end, ova_cell10: 9.58636116982
> Aug 16 02:14:56.058001 db end, nova_cell4: 9.61698698997
> Aug 16 02:14:56.058547 db end, nova_cell9: 9.59216403961
> Aug 16 02:14:56.954209 db end, nova_cell7: 10.4981210232
> Aug 16 02:14:56.954665 end cell gather end: 10.5480799675
> Aug 16 02:14:56.955010 begin heaq.merge
> Aug 16 02:14:58.527040 end heaq.merge:
1.57150006294
>
> So now the queries are issued in parallel, but the whole thing still
> behaves as if it were serial.
>
> We tried to adjust database configs like max_thread_pool, use_tpool,
> etc., and we also tried to use a separate DB for some of the cells, but
> the results showed no big difference.
>
> So, the above is what we have now; feel free to ping us if you have any
> questions or suggestions.
>
> BR,
>
> Zhenyu Zheng
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From e0ne at e0ne.info Thu Aug 16 12:34:34 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Thu, 16 Aug 2018 15:34:34 +0300 Subject: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation In-Reply-To: <20180802203826.3lo2j6u7jlcdoyrk@yuggoth.org> References: <20180802150947.GA1359@sm-workstation> <20180802175622.p775m644j4ehm7gd@yuggoth.org> <20180802191610.GA11956@sm-workstation> <20180802203826.3lo2j6u7jlcdoyrk@yuggoth.org> Message-ID:

Hi all,

Auto-created bugs could help in this effort. Honestly, nobody can say whether it will work or not before we try it.

From Horizon's perspective, we need some solution which helps us to know about new features that would be good to add to the UI in the future. We started with an etherpad [1] as a first step for now.

[1] https://etherpad.openstack.org/p/horizon-feature-gap

Regards, Ivan Kolodyazhny, http://blog.e0ne.info/

On Thu, Aug 2, 2018 at 11:38 PM, Jeremy Stanley wrote: > On 2018-08-02 14:16:10 -0500 (-0500), Sean McGinnis wrote: > [...] > > Interesting... I hadn't looked into Gerrit functionality enough to know about > > these. Looks like this is probably what you are referring to?
> > https://gerrit.googlesource.com/plugins/its-storyboard/ > > Yes, that. Khai Do (zaro) did the bulk of the work implementing it > for us but isn't around as much these days (we miss you!). > > > It's been a while since I did anything significant with Java, but that might be > > an option. Maybe a fun weekend project at least to see what it would take to > > create an its-launchpad plugin. > [...] > > Careful; if you let anyone know you've touched a Gerrit plug-in the > requests for more help will never end. > -- > Jeremy Stanley > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From e0ne at e0ne.info Thu Aug 16 12:38:07 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Thu, 16 Aug 2018 15:38:07 +0300 Subject: [openstack-dev] [freezer][tc] removing freezer from governance In-Reply-To: References: <1533323716-sup-9361@lrrr.local> Message-ID:

Hi Rong,

If your company uses Freezer in production clouds, maybe you can help the community with supporting it? I know that it could be a long and not easy decision, but it's the only way you can be sure that the project is alive.

Regards, Ivan Kolodyazhny, http://blog.e0ne.info/

On Sat, Aug 4, 2018 at 5:33 AM, Rong Zhu wrote:
> Hi, all
>
> I think backup/restore and disaster recovery is one of the important
> things in OpenStack, and our company (ZTE) has already integrated
> freezer in our production. We also did some features based on freezer,
> and we could push those features to the community. Could you give us a
> chance to take over freezer in the Stein cycle? If there is still no
> progress, we could do this action after the Stein cycle.
>
> Thank you for your consideration.
> > -- > Thanks, > Rong Zhu > > On Sat, Aug 4, 2018 at 3:16 AM Doug Hellmann > wrote: > >> Based on the fact that the Freezer team missed the Rocky release and >> Stein PTL elections, I have proposed a patch to remove the project from >> governance. If the project is still being actively maintained and >> someone wants to take over leadership, please let us know here in this >> thread or on the patch. >> >> Doug >> >> https://review.openstack.org/#/c/588645/ >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jistr at redhat.com Thu Aug 16 13:32:26 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Thu, 16 Aug 2018 15:32:26 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> Message-ID: <3bba05bb-0b4b-863b-5def-244489f0659b@redhat.com> On 16.8.2018 10:38, Steven Hardy wrote: > On Wed, Aug 15, 2018 at 10:48 PM, Jay Pipes wrote: >> On 08/15/2018 04:01 PM, Emilien Macchi wrote: >>> >>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi >> > wrote: >>> >>> More seriously here: there is an ongoing effort to converge the >>> tools around containerization within Red Hat, and we, TripleO are >>> interested to continue the containerization of our services (which >>> was initially done with Docker & Docker-Distribution). >>> We're looking at how these containers could be managed by k8s one >>> day but way before that we plan to swap out Docker and join CRI-O >>> efforts, which seem to be using Podman + Buildah (among other things). >>> >>> I guess my wording wasn't the best but Alex explained way better here: >>> >>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >>> >>> If I may have a chance to rephrase, I guess our current intention is to >>> continue our containerization and investigate how we can improve our tooling >>> to better orchestrate the containers. >>> We have a nice interface (openstack/paunch) that allows us to run multiple >>> container backends, and we're currently looking outside of Docker to see how >>> we could solve our current challenges with the new tools. >>> We're looking at CRI-O because it happens to be a project with a great >>> community, focusing on some problems that we, TripleO have been facing since >>> we containerized our services. 
>>> >>> We're doing all of this in the open, so feel free to ask any question. >> >> >> I appreciate your response, Emilien, thank you. Alex' responses to Jeremy on >> the #openstack-tc channel were informative, thank you Alex. >> >> For now, it *seems* to me that all of the chosen tooling is very Red Hat >> centric. Which makes sense to me, considering Triple-O is a Red Hat product. > > Just as a point of clarification - TripleO is an OpenStack project, > and yes there is a downstream product derived from it, but we could > e.g support multiple container backends in TripleO if there was > community interest in supporting that. > > Also I think Alex already explained that fairly clearly in the IRC > link that this is initially about proving our existing abstractions > work to enable alternate container backends. +1, and with my upgrade-centric hat on, we've had a fair share of trouble with Docker -- update of the daemon causing otherwise needless downtime of services and sometimes data plane too. Most recent example i can think of is here [1][2] -- satisfactory solution still doesn't exist. So my 2 cents: i am very interested in exploring alternative container runtimes, and daemon-less sounds to me like a promising direction. Jirka [1] https://bugs.launchpad.net/tripleo/+bug/1777146 [2] https://review.openstack.org/#/c/575758/1/puppet/services/docker.yaml > > Thanks, > > Steve > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Thu Aug 16 13:34:21 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 16 Aug 2018 08:34:21 -0500 Subject: [openstack-dev] [release] Release countdown for week R-1, August 20-24 Message-ID: <20180816133420.GA29040@sm-workstation> The end is near! 
Development Focus ----------------- Teams should be working on release critical bugs in preparation of the final release candidate deadline this Thursday the 23rd. Teams attending the PTG should also be preparing for those discussions and capturing information in the etherpads: https://wiki.openstack.org/wiki/PTG/Stein/Etherpads General Information ------------------- Thursday, August 23 is the deadline for final Rocky release candidates. We will then enter a quiet period until we tag the final release on August 29. Actions --------- Watch for any translation patches coming through and merge them quickly. If your project has a stable/rocky branch created, please make sure those patches are also getting merged there. (Do not backport the ones from master) Liaisons for projects with independent deliverables should import the release history by preparing patches to openstack/releases. Projects following the cycle-trailing model should be getting ready for the cycle-trailing RC deadline coming up on August 30. Please drop by #openstack-release with any questions or concerns about the upcoming release. Upcoming Deadlines & Dates -------------------------- Final RC deadline: August 23 Rocky Release: August 29 Cycle trailing RC deadline: August 30 Cycle trailing Rocky release: November 28 -- Sean McGinnis (smcginnis) From tenobreg at redhat.com Thu Aug 16 13:43:22 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Thu, 16 Aug 2018 10:43:22 -0300 Subject: [openstack-dev] [sahara] No meeting today Message-ID: Hi folks, since a couple of our core reviewers are on PTO today we have decided not to host a meeting today. If you have any questions just ping us at #openstack-sahara Thanks, -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jistr at redhat.com Thu Aug 16 14:17:10 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Thu, 16 Aug 2018 16:17:10 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: Message-ID: <5f4f72a3-5f89-0712-0397-5513b833b70a@redhat.com> On 15.8.2018 11:32, Cédric Jeanneret wrote: > Dear Community, > > As you may know, a move toward Podman as replacement of Docker is starting. > > One of the issues with podman is the lack of daemon, precisely the lack > of a socket allowing to send commands and get a "computer formatted > output" (like JSON or YAML or...). > > In order to work that out, Podman has added support for varlink¹, using > the "socket activation" feature in Systemd. > > On my side, I would like to push forward the integration of varlink in > TripleO deployed containers, especially since it will allow the following: > # proper interface with Paunch (via python link) "integration of varlink in TripleO deployed containers" sounds like we'd need to make some changes to the containers themselves, but is that the case? As i read the docs, it seems like a management API wrapper for Podman, so just an alternative interface to Podman CLI. I'd expect we'd use varlink from Paunch, but probably not from the containers themselves? (Perhaps that's what you meant, just making sure we're on the same page.) > > # a way to manage containers from within specific containers (think > "healthcheck", "monitoring") by mounting the socket as a shared volume I think healthchecks are currently quite Docker-specific, so we could have a Podman-specific alternative here. We should be careful about how much container runtime specificity we introduce and keep though, and we'll probably have to amend our tools (e.g. 
pre-upgrade validations [2]) to work with both, at least until we decide whether to really make a full transition to Podman or not. > > # a way to get container statistics (think "metrics") > > # a way, if needed, to get an ansible module being able to talk to > podman (JSON is always better than plain text) > > # a way to secure the accesses to Podman management (we have to define > how varlink talks to Podman, maybe providing dedicated socket with > dedicated rights so that we can have dedicated users for specific tasks) > > That said, I have some questions: > ° Does any of you have some experience with varlink and podman interface? > ° What do you think about that integration wish? > ° Does any of you have concern with this possible addition? I like it, but we should probably sync up with Podman community if they consider varlink a "supported" interface for controlling Podman, and it's not just an experiment which will vanish. To me it certainly looks like a much better programmable interface than composing CLI calls and parsing their output, but we should make sure Podman folks think so too :) Thanks for looking into this Jirka [2] https://review.openstack.org/#/c/582502/ > > Thank you for your feedback and ideas. > > Have a great day (or evening, or whatever suits the time you're reading > this ;))! > > C. 
> > > ¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/ > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jistr at redhat.com Thu Aug 16 14:29:47 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Thu, 16 Aug 2018 16:29:47 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <9f71a2e1-a014-9faa-b8d8-792db584e53d@redhat.com> References: <9f71a2e1-a014-9faa-b8d8-792db584e53d@redhat.com> Message-ID: <24d04363-343e-dd5d-1e9a-a1e12ed40f54@redhat.com> On 16.8.2018 07:39, Cédric Jeanneret wrote: > > > On 08/16/2018 12:10 AM, Jason E. Rist wrote: >> On 08/15/2018 03:32 AM, Cédric Jeanneret wrote: >>> Dear Community, >>> >>> As you may know, a move toward Podman as replacement of Docker is starting. >>> >>> One of the issues with podman is the lack of daemon, precisely the lack >>> of a socket allowing to send commands and get a "computer formatted >>> output" (like JSON or YAML or...). >>> >>> In order to work that out, Podman has added support for varlink¹, using >>> the "socket activation" feature in Systemd. 
>>> >>> On my side, I would like to push forward the integration of varlink in >>> TripleO deployed containers, especially since it will allow the following: >>> # proper interface with Paunch (via python link) >>> >>> # a way to manage containers from within specific containers (think >>> "healthcheck", "monitoring") by mounting the socket as a shared volume >>> >>> # a way to get container statistics (think "metrics") >>> >>> # a way, if needed, to get an ansible module being able to talk to >>> podman (JSON is always better than plain text) >>> >>> # a way to secure the accesses to Podman management (we have to define >>> how varlink talks to Podman, maybe providing dedicated socket with >>> dedicated rights so that we can have dedicated users for specific tasks) >>> >>> That said, I have some questions: >>> ° Does any of you have some experience with varlink and podman interface? >>> ° What do you think about that integration wish? >>> ° Does any of you have concern with this possible addition? >>> >>> Thank you for your feedback and ideas. >>> >>> Have a great day (or evening, or whatever suits the time you're reading >>> this ;))! >>> >>> C. >>> >>> >>> ¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/ >>> >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> How might this effect upgrades? > > What exactly? addition of varlink, or the whole podman thingy? The > question was more about "varlink" than "podman" in fact - I should maybe > have worded things otherwise... ? Varlink shouldn't be a problem as it's just an additive interface. 
Switching container runtime might be a bit difficult though :) When running any upgrade, we stop any containers that need updating, and replace them with new ones. In theory we could just as well start the new ones using a different runtime, all we need is to keep the same bind mounts etc. What would need to be investigated is whether support for this (stopping on one runtime, starting on another) needs to be implemented directly into tools like Paunch and Pacemaker, or if we can handle this one-time scenario just with additional code in upgrade_tasks. It might be a combination of both. Problem might come with sidecar containers for Neutron, which generally don't like being restarted (it can induce data plane downtime). Advanced hackery might be needed on this front... :) Either way i think we'd have to do some PoC of such migration before fully committing to it. Jirka > >> >> -J >> > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From smonderer at vasonanetworks.com Thu Aug 16 14:43:54 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Thu, 16 Aug 2018 17:43:54 +0300 Subject: [openstack-dev] [tripleo] network isolation!!! do we still need to configure VLAN , CIDR, ... 
in network-environment.yaml Message-ID:

Hi,

In Ocata we used the network environment file to configure network parameters as follows:

InternalApiNetCidr: '172.16.2.0/24'
TenantNetCidr: '172.16.0.0/24'
ExternalNetCidr: '192.168.204.0/24'
# Customize the VLAN IDs to match the local environment
InternalApiNetworkVlanID: 711
TenantNetworkVlanID: 714
ExternalNetworkVlanID: 204
InternalApiAllocationPools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
TenantAllocationPools: [{'start': '172.16.0.4', 'end': '172.16.0.250'}]
# Leave room if the external network is also used for floating IPs
ExternalAllocationPools: [{'start': '192.168.204.6', 'end': '192.168.204.99'}]

In Queens, now that we use network_data.yaml, do we still need to set the parameters above?

Samuel

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From doug at doughellmann.com Thu Aug 16 14:45:32 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 16 Aug 2018 10:45:32 -0400 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <377c955d-b1a5-2d29-1ab6-333e0909e990@suse.com> References: <197ac3a5-df13-8003-16eb-d2cedee9ec7f@suse.com> <20180813211730.cze4vpknwncpqg3b@gentoo.org> <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> <20180815052528.GA27536@thor.bakeyournoodle.com> <1534338477-sup-7813@lrrr.local> <20180815192801.GC27536@thor.bakeyournoodle.com> <38413aa0-4624-228c-1034-79fcbff49896@suse.com> <20180816053804.GF27536@thor.bakeyournoodle.com> <377c955d-b1a5-2d29-1ab6-333e0909e990@suse.com> Message-ID: <1534430677-sup-1211@lrrr.local>

Excerpts from Andreas Jaeger's message of 2018-08-16 08:42:22 +0200: > On 2018-08-16 07:38, Tony Breeds wrote: > > On Thu, Aug 16, 2018 at 06:27:39AM +0200, Andreas Jaeger wrote: > > > >> Ocata should be retired by now ;) Let's drop it...
> > > So we don't need the Ocata docs to be rebuilt with this version? > > Ocata uses older sphinx etc. It would be nice - but not sure about the > effort, We want *all* of the docs rebuilt with this version. Is there any reason we can't uncap pbr, at least within the CI jobs? Doug From openstack at fried.cc Thu Aug 16 15:34:46 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 16 Aug 2018 10:34:46 -0500 Subject: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict In-Reply-To: <1534419803.3149.0@smtp.office365.com> References: <1534419109.24276.3@smtp.office365.com> <1534419803.3149.0@smtp.office365.com> Message-ID: Thanks for this, gibi. TL;DR: a). I didn't look, but I'm pretty sure we're not caching allocations in the report client. Today, nobody outside of nova (specifically the resource tracker via the report client) is supposed to be mucking with instance allocations, right? And given the global lock in the resource tracker, it should be pretty difficult to race e.g. a resize and a delete in any meaningful way. So short term, IMO it is reasonable to treat any generation conflict as an error. No retries. Possible wrinkle on delete, where it should be a failure unless forced. Long term, I also can't come up with any scenario where it would be appropriate to do a narrowly-focused GET+merge/replace+retry. But implementing the above short-term plan shouldn't prevent us from adding retries for individual scenarios later if we do uncover places where it makes sense. Here's some stream-of-consciousness that led me to the above opinions: - On spawn, we send the allocation with a consumer gen of None because we expect the consumer not to exist. If it exists, that should be a hard fail. (Hopefully the only way this happens is a true UUID conflict.) 
- On migration, when we create the migration UUID, ditto above ^ - On migration, when we transfer the allocations in either direction, a conflict means someone managed to resize (or otherwise change allocations?) since the last time we pulled data. Given the global lock in the report client, this should have been tough to do. If it does happen, I would think any retry would need to be done all the way back at the claim, which I imagine is higher up than we should go. So again, I think we should fail the migration and make the user retry. - On destroy, a conflict again means someone managed a resize despite the global lock. If I'm deleting an instance and something about it changes, I would think I want the opportunity to reevaluate my decision to delete it. That said, I would definitely want a way to force it (in which case we can just use the DELETE call explicitly). But neither case should be a retry, and certainly there is no destroy scenario where I would want a "merging" of allocations to happen. Thanks, efried On 08/16/2018 06:43 AM, Balázs Gibizer wrote: > reformatted for readability, sorry: > > Hi, > > tl;dr: To properly use consumer generations (placement 1.28) in Nova we > need to decide how to handle a consumer generation conflict from Nova's > perspective: > a) Nova reads the current consumer_generation before the allocation >   update operation and uses that generation in the allocation update >   operation.  If the allocation is changed between the read and the >   update then nova fails the server lifecycle operation and lets the >   end user retry it. > b) Like a) but in case of conflict nova blindly retries the >   read-and-update operation pair a couple of times and only fails >   the lifecycle operation if it runs out of retries. > c) Nova stores its own view of the allocation. When a consumer's >   allocation needs to be modified then nova reads the current state >   of the consumer from placement.
Then nova combines the two >   allocations to generate the new expected consumer state. In case >   of generation conflict nova retries the read-combine-update >   operation triplet. > > Which way should we go now? > > What should be our long-term goal? > > > Details: > > There are plenty of affected lifecycle operations. See the patch series > starting at [1]. > > For example: > > The current patch[1] that handles the delete server case implements > option b).  It simply reads the current consumer generation from > placement and uses that to send a PUT /allocations/{instance_uuid} with > "allocations": {} in its body. > > Here implementing option c) would mean that during server delete nova > needs: > 1) to compile its own view of the resource need of the server >   (currently based on the flavor but in the future based on the >   attached port's resource requests as well) > 2) then read the current allocation of the server from placement > 3) then subtract the server resource needs from the current allocation >   and send the resulting allocation back in the update to placement > > In the simple case this subtraction would result in an empty allocation > sent to placement. Also in this simple case c) has the same effect as > b) currently implemented in [1]. > > However if somebody outside of nova modifies the allocation of this > consumer in a way that nova does not know about such a changed resource > need then b) and c) will result in different placement states after > server delete. > > I only know of one example, the change of a neutron port's resource > request while the port is attached. (Note, it is out of scope in the > first step of the bandwidth implementation.) In this specific example > option c) can work if nova re-reads the port's resource request during > delete when it recalculates its own view of the server resource needs. But > I don't know if every other resource (e.g. accelerators) used by a > server can be / will be handled this way.
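[Editor's note: the read-and-update retry described in option b) can be sketched with a small self-contained model. Everything below is illustrative only; FakePlacement is a toy in-memory stand-in for placement's GET/PUT allocation endpoints and the names are invented for the sketch, this is not nova's report-client code.]

```python
class ConflictError(Exception):
    """Stands in for an HTTP 409 consumer generation conflict."""


class FakePlacement:
    """Toy model of placement: each consumer has allocations plus a generation."""

    def __init__(self):
        self._state = {}  # consumer_uuid -> (allocations, generation)

    def get(self, consumer):
        # An unknown consumer reports generation None, as placement does.
        return self._state.get(consumer, ({}, None))

    def put(self, consumer, allocations, generation):
        _, current = self.get(consumer)
        if generation != current:
            # The caller's view of the consumer is stale.
            raise ConflictError("expected %s, got %s" % (current, generation))
        self._state[consumer] = (allocations, 1 if current is None else current + 1)


def update_allocations(placement, consumer, allocations, max_retries=3):
    """Option b): read the generation, update with it, blindly retry on conflict."""
    for _ in range(max_retries):
        _, generation = placement.get(consumer)
        try:
            placement.put(consumer, allocations, generation)
            return True  # the update landed against the generation we read
        except ConflictError:
            continue  # someone raced us between the read and the update
    return False  # out of retries: fail the lifecycle operation
```

A False return here corresponds to failing the server lifecycle operation once the retries are exhausted; option a) is the same loop with max_retries=1, and option c) would recompute `allocations` from nova's own view inside the loop instead of passing it in unchanged.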
> > > Other examples of affected lifecycle operations: > > During a server migration moving the source host allocation from the > instance_uuid to a the migration_uuid fails with consumer generation > conflict because of the instance_uuid consumer generation. [2] > > Confirming a migration fails as the deletion of the source host > allocation fails due to the consumer generation conflict of the > migration_uuid consumer that is being emptied.[3] > > During scheduling of a new server putting allocation to instance_uuid > fails as the scheduler assumes that it is a new consumer and therefore > uses consumer_generation: None for the allocation, but placement > reports generation conflict. [4] > > During a non-forced evacuation the scheduler tries to claim the > resource on the destination host with the instance_uuid, but that > consumer already holds the source allocation therefore the scheduler > cannot assume that the instance_uuid is a new consumer. [4] > > > [1] https://review.openstack.org/#/c/591597 > [2] https://review.openstack.org/#/c/591810 > [3] https://review.openstack.org/#/c/591811 > [4] https://review.openstack.org/#/c/583667 > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Thu Aug 16 15:59:22 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 16 Aug 2018 10:59:22 -0500 Subject: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core In-Reply-To: <7145463d-929b-1cd3-3580-30400bd68c37@nemebean.com> References: <7145463d-929b-1cd3-3580-30400bd68c37@nemebean.com> Message-ID: <99baab05-9897-320b-a64d-af72c541c05c@redhat.com> On 15/08/18 16:34, Ben Nemec wrote: > Since there were no objections, I've added Zane to the oslo.service core > team.  Thanks and welcome, Zane! 
Thanks team! I'll try not to mess it up :) > On 08/03/2018 11:58 AM, Ben Nemec wrote: >> Hi, >> >> Zane has been doing some good work in oslo.service recently and I >> would like to add him to the core team.  I know he's got a lot on his >> plate already, but he has taken the time to propose and review patches >> in oslo.service and has demonstrated an understanding of the code. >> >> Please respond with +1 or any concerns you may have.  Thanks. >> >> -Ben >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From anne at openstack.org Thu Aug 16 16:46:44 2018 From: anne at openstack.org (Anne Bertucio) Date: Thu, 16 Aug 2018 09:46:44 -0700 Subject: [openstack-dev] [community][Rocky] Save the Date: Community Meeting: Rocky + project updates Message-ID: <87363388-E7B9-499B-AC96-D2751504DAEB@openstack.org> Hi all, Save the date for an OpenStack community meeting on August 30 at 3pm UTC. This is the evolution of the “Marketing Community Release Preview” meeting that we’ve had each cycle. While that meeting has always been open to all, we wanted to expand the topics and encourage anyone who was interested in getting updates on the Rocky release or the newer projects at OSF to attend. 
We’ll cover: —What’s new in Rocky (This info will still be at a fairly high level, so might not be new information if you’re someone who stays up to date in the dev ML or is actively involved in upstream work) —Updates from Airship, Kata Containers, StarlingX, and Zuul —What you can expect at the Berlin Summit in November This meeting will be run over Zoom (look for info closer to the 30th) and will be recorded, so if you can’t make the time, don’t panic! Cheers, Anne Bertucio OpenStack Foundation anne at openstack.org | irc: annabelleB -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Aug 16 17:03:12 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 16 Aug 2018 11:03:12 -0600 Subject: [openstack-dev] [tripleo] CI is blocked In-Reply-To: References: Message-ID: On Wed, Aug 15, 2018 at 10:13 PM Wesley Hayutin wrote: > On Wed, Aug 15, 2018 at 7:13 PM Alex Schultz wrote: > >> Please do not approve or recheck anything until further notice. We've >> got a few issues that have basically broken all the jobs. >> >> https://bugs.launchpad.net/tripleo/+bug/1786764 > > fix posted: https://review.openstack.org/#/c/592577/ > >> https://bugs.launchpad.net/tripleo/+bug/1787226 > > Dupe of 1786764 > >> https://bugs.launchpad.net/tripleo/+bug/1787244 > > Fixed Released: https://review.openstack.org/592146 > >> https://bugs.launchpad.net/tripleo/+bug/1787268 > > Proposed: https://review.openstack.org/#/c/592233/ https://review.openstack.org/#/c/592275/ > https://bugs.launchpad.net/tripleo/+bug/1736950 > > weeee > Will post a patch to skip the above tempest test. Also the patch to re-enable build-test-packages, the code that injects your change into a rpm is about to merge. 
https://review.openstack.org/#/c/592218/ Thanks Steve, Alex, Jistr and others :) > >> >> Thanks, >> -Alex >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- > > Wes Hayutin > > Associate MANAGER > > Red Hat > > > > whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay > > > View my calendar and check my availability for meetings HERE > > -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Thu Aug 16 17:09:06 2018 From: abishop at redhat.com (Alan Bishop) Date: Thu, 16 Aug 2018 13:09:06 -0400 Subject: [openstack-dev] [tripleo][Edge][FEMDC] Edge clouds and controlplane updates In-Reply-To: <0a519cf3-41b6-d040-3759-0c036a44f869@redhat.com> References: <41de3af6-5f7e-94e5-cfe3-a9090fb8218f@redhat.com> <0a519cf3-41b6-d040-3759-0c036a44f869@redhat.com> Message-ID: On Tue, Aug 14, 2018 at 9:20 AM Bogdan Dobrelya wrote: > On 8/13/18 9:47 PM, Giulio Fidente wrote: > > Hello, > > > > I'd like to get some feedback regarding the remaining > > work for the split controlplane spec implementation [1] > > > > Specifically, while for some services like nova-compute it is not > > necessary to update the controlplane nodes after an edge cloud is > > deployed, for other services, like cinder (or glance, probably > > others), it is necessary to do an update of the config files on the > > controlplane when a new edge cloud is deployed. 
> > > In fact for services like cinder or glance, which are hosted in the > > controlplane, we need to pull data from the edge clouds (for example > > the newly deployed ceph cluster keyrings and fsid) to configure cinder > > (or glance) with a new backend. > > > > It looks like this demands some architectural changes to solve the > > following two: > > > > - how do we trigger/drive updates of the controlplane nodes after the > > edge cloud is deployed? > > Note, there is also a strict(?) requirement of local management > capabilities for edge clouds temporarily disconnected from the central > controlplane. That complicates triggering the updates even more. We'll > need at least a notification-and-triggering system to perform required > state synchronizations, including conflict resolution. If that's the > case, the architecture changes for the TripleO deployment framework are > inevitable AFAICT. > This is another interesting point. I don't mean to disregard it, but want to highlight the issue that Giulio and I (and others, I'm sure) are focused on. As a cinder guy, I'll use cinder as an example. Cinder services running in the control plane need to be aware of the storage "backends" deployed at the Edge. So if a split-stack deployment includes edge nodes running a ceph cluster, the cinder services need to be updated to add the ceph cluster as a new cinder backend. So, not only is control plane data needed in order to deploy an additional stack at the edge, data from the edge deployment needs to be fed back into a subsequent stack update in the controlplane. Otherwise, cinder (and other storage services) will have no way of utilizing ceph clusters at the edge. > > - how do we scale the controlplane parameters to accommodate N > > backends of the same type? > Yes, this is also a big problem for me. Currently, TripleO can deploy cinder with multiple heterogeneous backends (e.g. one each of ceph, NFS, Vendor X, Vendor Y, etc.).
However, the current THT do not let you deploy multiple instances of the same backend (e.g. more than one ceph). If the goal is to deploy multiple edge nodes consisting of Compute+Ceph, then TripleO will need the ability to deploy multiple homogeneous cinder backends. This requirement will likely apply to glance and manila as well. > > A very rough approach to the latter could be to use jinja to scale up > > the CephClient service so that we can have multiple copies of it in the > > controlplane. > > > > Each instance of CephClient should provide the ceph config file and > > keyring necessary for each cinder (or glance) backend. > > > > Also note that Ceph is only a particular example but we'd need a similar > > workflow for any backend type. > > > > The etherpad for the PTG session [2] touches this, but it'd be good to > > start this conversation before then. > > > > 1. > > > https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html > > > > 2. > https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane > > > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Thu Aug 16 17:17:13 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 16 Aug 2018 12:17:13 -0500 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: <57FC25C2-C6ED-41E8-A086-250F23602DB9@leafe.com> Greetings OpenStack community, Another cozy meeting today, as conferences and summer holidays reduced our attendees. We mainly focused on the agenda [7] for the upcoming Denver PTG [8]. 
One item we added was consideration of a spec for common healthcheck middleware across projects [9]. This had been proposed back in January, and seemed to have a lot of initial interest, but there hasn't been any activity on it since March. There does seem to be some interest in it still, but no one with enough free cycles to keep it updated. So we invite anyone who has an interest in this to come to the API-SIG session at the PTG on Monday, or, if you can't make it, add your comments to the review. Two of the patches [10][11] introduced by cdent last week were deemed to not be changes to the guidelines, but rather minor additions, so given that they were approved by the cores, they were merged. The remaining patch [12] involves a lot more thought and discussion; it could be the subject of a book all by itself! But we'd like to keep it short and to the point. We also don't have time to write a book! As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * None # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community. 
* None # Guidelines Currently Under Review [3] * Add an api-design doc with design advice https://review.openstack.org/592003 * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://etherpad.openstack.org/p/api-sig-stein-ptg [8] https://www.openstack.org/ptg/ [9] https://review.openstack.org/#/c/531456/ [10] https://review.openstack.org/589131 [11] https://review.openstack.org/589132 [12] https://review.openstack.org/592003 Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Ed Leafe From ed at leafe.com Thu Aug 16 17:21:11 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 16 Aug 2018 12:21:11 -0500 Subject: [openstack-dev] User Committee Nominations Closing Soon! Message-ID: <699C3850-848C-438B-9AFB-FD6A1197EF1D@leafe.com> As I write this, there are just over 12 hours left to get in your nominations for the OpenStack User Committee. Nominations close at August 17, 05:59 UTC. If you are an AUC and thinking about running what's stopping you? If you know of someone who would make a great committee member nominate them (with their permission, of course)! Help make a difference for Operators, Users and the Community! -- Ed Leafe From melwittt at gmail.com Thu Aug 16 17:27:02 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 16 Aug 2018 10:27:02 -0700 Subject: [openstack-dev] [nova] nova-specs is open for Stein Message-ID: <0817f378-0d5a-4679-4c6d-0eda9b419b2e@gmail.com> Hey all, Just wanted to give a quick heads up that the nova-specs repo [1] is now open for Stein spec proposals. 
Here's a link to the docs on the spec process: https://specs.openstack.org/openstack/nova-specs/readme.html Cheers, -melanie [1] https://github.com/openstack/nova-specs From fungi at yuggoth.org Thu Aug 16 18:24:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 16 Aug 2018 18:24:22 +0000 Subject: [openstack-dev] [all] Personal tool patterns in .gitignore cookiecutter Message-ID: <20180816182422.qlkcs4fr7vpiltms@yuggoth.org> In response to some recent but misguided proposals from well-meaning contributors in various projects, I've submitted a change[*] for the openstack-dev/cookiecutter .gitignore template inserting a comment which recommends against including patterns related to personal choices of tooling (arbitrary editors, IDEs, operating systems...). It includes one suggestion for a popular alternative (creating a personal excludesfile specific to the tools you use), but there are of course multiple ways it can be solved. This is not an attempt to set policy, but merely provides a recommended default for new repositories in hopes that projects can over time reduce some of the noise related to unwanted .gitignore additions. If it merges, projects who disagree with this default can of course modify or remove the comment at the top of the file as they see fit when bootstrapping content for a new repository. Projects with existing repositories on which they'd like to apply this can also easily copy the comment text or port the patch. If there seems to be some consensus that this change is appreciated, I'll remove the WIP flag and propose similar changes to our other cookiecutters for consistency. [*] https://review.openstack.org/592520 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Thu Aug 16 18:31:28 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 16 Aug 2018 13:31:28 -0500 Subject: [openstack-dev] [all] Personal tool patterns in .gitignore cookiecutter In-Reply-To: <20180816182422.qlkcs4fr7vpiltms@yuggoth.org> References: <20180816182422.qlkcs4fr7vpiltms@yuggoth.org> Message-ID: <20180816183127.GA8737@sm-workstation> On Thu, Aug 16, 2018 at 06:24:22PM +0000, Jeremy Stanley wrote: > In response to some recent but misguided proposals from well-meaning > contributors in various projects, I've submitted a change[*] for the > openstack-dev/cookiecutter .gitignore template inserting a comment > which recommends against including patterns related to personal > choices of tooling (arbitrary editors, IDEs, operating systems...). > It includes one suggestion for a popular alternative (creating a > personal excludesfile specific to the tools you use), but there are > of course multiple ways it can be solved. > > This is not an attempt to set policy, but merely provides a > recommended default for new repositories in hopes that projects can > over time reduce some of the noise related to unwanted .gitignore > additions. If it merges, projects who disagree with this default can > of course modify or remove the comment at the top of the file as > they see fit when bootstrapping content for a new repository. > Projects with existing repositories on which they'd like to apply > this can also easily copy the comment text or port the patch. > > If there seems to be some consensus that this change is appreciated, > I'll remove the WIP flag and propose similar changes to our other > cookiecutters for consistency. > > [*] https://review.openstack.org/592520 > -- > Jeremy Stanley The comments match my personal preference, and I do see it is just advisory, so it is not mandating any policy that must be followed by all projects. 
I think it is a good comment to include if for no other reason than to potentially inform folks that there are other ways to address this than copying and pasting the same change to every repo. From dms at danplanet.com Thu Aug 16 18:44:44 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 16 Aug 2018 11:44:44 -0700 Subject: [openstack-dev] [Nova] A multi-cell instance-list performance test References: Message-ID: > yes, the DB query was in serial, after some investigation, it seems that we are unable to perform eventlet.monkey_patch in uWSGI mode, so > Yikun made this fix: > > https://review.openstack.org/#/c/592285/ Cool, good catch :) > > After making this change, we tested again, and we got this kind of data: > >   total collect sort view > before monkey_patch 13.5745 11.7012 1.1511 0.5966 > after monkey_patch 12.8367 10.5471 1.5642 0.6041 > > The performance improved a little, and from the log we can see: Since these all took ~1s when done in series, but now take ~10s in parallel, I think you must be hitting some performance bottleneck in either case, which is why the overall time barely changes. Some ideas: 1. In the real world, I think you really need to have 10x database servers or at least a DB server with plenty of cores loading from a very fast (or separate) disk in order to really ensure you're getting full parallelism of the DB work. However, because these queries all took ~1s in your serialized case, I expect this is not your problem. 2. What does the network look like between the api machine and the DB? 3. What do the memory and CPU usage of the api process look like while this is happening? Related to #3, even though we issue the requests to the DB in parallel, we still process the result of those calls in series in a single python thread on the API. That means all the work of reading the data from the socket, constructing the SQLA objects, turning those into nova objects, etc, all happens serially.
It could be that the DB query is really a small part of the overall time and our serialized python handling of the result is the slow part. If you see the api process pegging a single core at 100% for ten seconds, I think that's likely what is happening. > so, now the queries are in parallel, but the whole thing still seems > serial. In your table, you show the time for "1 cell, 1000 instances" as ~3s and "10 cells, 1000 instances" as 10s. The problem with comparing those directly is that in the latter, you're actually pulling 10,000 records over the network, into memory, processing them, and then just returning the first 1000 from the sort. A closer comparison would be the "10 cells, 100 instances" with "1 cell, 1000 instances". In both of those cases, you pull 1000 instances total from the db, into memory, and return 1000 from the sort. In that case, the multi-cell situation is faster (~2.3s vs. ~3.1s). You could also compare the "10 cells, 1000 instances" case to "1 cell, 10,000 instances" just to confirm at the larger scale that it's better or at least the same. We _have_ to pull $limit instances from each cell, in case (according to the sort key) the first $limit instances are all in one cell. We _could_ try to batch the results from each cell to avoid loading so many that we don't need, but we punted this as an optimization to be done later. I'm not sure it's really worth the complexity at this point, but it's something we could investigate. --Dan From kennelson11 at gmail.com Thu Aug 16 19:18:04 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 16 Aug 2018 12:18:04 -0700 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <01cc050e-c74b-a133-4020-6e0f219b7158@binero.se> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <01cc050e-c74b-a133-4020-6e0f219b7158@binero.se> Message-ID: Hey :) I created all the puppet openstack repos in the storyboard-dev environment and made a project group[1].
I am struggling a bit with finding all of your launchpad projects to perform the migrations through; can you share a list of all of them? -Kendall (diablo_rojo) [1] https://storyboard-dev.openstack.org/#!/project_group/60 On Wed, Aug 15, 2018 at 12:08 AM Tobias Urdin wrote: > Hello Kendall, > > Thanks for your reply, that sounds awesome! > We can then dig around and see how everything looks when all project bugs > are imported to stories. > > I see no issues with being able to move to Storyboard anytime soon if the > feedback for > moving is positive. > > Best regards > > Tobias > > > On 08/14/2018 09:06 PM, Kendall Nelson wrote: > > Hello! > > The error you hit can be resolved by adding launchpadlib to your tox.ini > if I recall correctly. > > also, if you'd like, I can run a test migration of puppet's launchpad > projects into our storyboard-dev db (where I've done a ton of other test > migrations) if you want to see how it looks/works with a larger db. Just > let me know and I can kick it off. > > As for a time to migrate, if you all are good with it, we usually schedule > for Fridays so there is even less activity. It's a small project config > change and then we just need an infra core to kick off the script once the > change merges. > > -Kendall (diablo_rojo) > > On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin > wrote: > >> Hello all incredible Puppeters, >> >> I've tested setting up a Storyboard instance and test migrated >> puppet-ceph and it went without any issues there using the documentation >> [1] [2] >> with just one minor issue during the SB setup [3]. >> >> My goal is that we will be able to swap to Storyboard during the Stein >> cycle but considering that we have a low activity on >> bugs my opinion is that we could do this swap very easily anytime soon >> as long as everybody is in favor of it. >> >> Please let me know what you think about moving to Storyboard?
>> If everybody is in favor of it we can request a migration to infra >> according to documentation [2]. >> >> I will continue to test the import of all our projects while people are >> collecting their thoughts and feedback :) >> >> Best regards >> Tobias >> >> [1] https://docs.openstack.org/infra/storyboard/install/development.html >> [2] https://docs.openstack.org/infra/storyboard/migration.html >> [3] It failed with an error about launchpadlib not being installed, >> solved with `tox -e venv pip install launchpadlib` >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Aug 16 19:22:39 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 16 Aug 2018 12:22:39 -0700 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <5B745814.1040008@windriver.com> Message-ID: Hey :) Yes, I know attachments are important to a few projects. They are on our todo list and we plan to talk about how to implement them at the upcoming PTG[1]. Unfortunately, we have had other things that are taking priority over attachments.
We would really love to migrate you all, but if attachments is what is really blocking you and there is no other workable solution, I'm more than willing to review patches if you want to help out to move things along a little faster :) -Kendall Nelson (diablo_rojo) [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant wrote: > > > On 8/15/2018 11:43 AM, Chris Friesen wrote: > > On 08/14/2018 10:33 AM, Tobias Urdin wrote: > > > >> My goal is that we will be able to swap to Storyboard during the > >> Stein cycle but > >> considering that we have a low activity on > >> bugs my opinion is that we could do this swap very easily anything > >> soon as long > >> as everybody is in favor of it. > >> > >> Please let me know what you think about moving to Storyboard? > > > > Not a puppet dev, but am currently using Storyboard. > > > > One of the things we've run into is that there is no way to attach log > > files for bug reports to a story. There's an open story on this[1] > > but it's not assigned to anyone. > > > > Chris > > > > > > [1] https://storyboard.openstack.org/#!/story/2003071 > > > Cinder is planning on holding on any migration, like Manila, until the > file attachment issue is resolved. > > Jay > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jungleboyj at gmail.com Thu Aug 16 19:47:51 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 16 Aug 2018 14:47:51 -0500 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <5B745814.1040008@windriver.com> Message-ID: Hey, Well, the attachments are one of the things holding us up along with reduced participation in the project and a number of other challenges.  Getting the time to prepare for the move has been difficult. I am planning to take some time before the PTG to look at how Ironic has been using Storyboard and take this forward to the team at the PTG to try and spur the process along. Jay Bryant - (jungleboyj) On 8/16/2018 2:22 PM, Kendall Nelson wrote: > Hey :) > > Yes, I know attachments are important to a few projects. They are on > our todo list and we plan to talk about how to implement them at the > upcoming PTG[1]. > > Unfortunately, we have had other things that are taking priority over > attachments. We would really love to migrate you all, but if > attachments is what is really blocking you and there is no other > workable solution, I'm more than willing to review patches if you want > to help out to move things along a little faster :) > > -Kendall Nelson (diablo_rojo) > > [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning > > On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant > wrote: > > > > On 8/15/2018 11:43 AM, Chris Friesen wrote: > > On 08/14/2018 10:33 AM, Tobias Urdin wrote: > > > >> My goal is that we will be able to swap to Storyboard during the > >> Stein cycle but > >> considering that we have a low activity on > >> bugs my opinion is that we could do this swap very easily anything > >> soon as long > >> as everybody is in favor of it. > >> > >> Please let me know what you think about moving to Storyboard? > > > > Not a puppet dev, but am currently using Storyboard. 
> > > > One of the things we've run into is that there is no way to > attach log > > files for bug reports to a story. There's an open story on this[1] > > but it's not assigned to anyone. > > > > Chris > > > > > > [1] https://storyboard.openstack.org/#!/story/2003071 > > > > Cinder is planning on holding on any migration, like Manila, until > the > file attachment issue is resolved. > > Jay > > > __________________________________________________________________________ > > > > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smonderer at vasonanetworks.com Thu Aug 16 20:29:34 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Thu, 16 Aug 2018 23:29:34 +0300 Subject: [openstack-dev] [tripleo] deployments fail when using custom nic config Message-ID: Hi, I'm using the attached file for controller nic configuration and I'm referencing it as follows: resource_registry: # Network Interface templates to use (these files must exist). You can # override these by including one of the net-*.yaml environment files, # such as net-bond-with-vlans.yaml, or modifying the list here.
# Port assignments for the Controller OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml and I get the following error 2018-08-16 15:51:59Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2 2018-08-16 15:51:59Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2 2018-08-16 15:52:00Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_FAILED Error: resources.ControllerDeployment_Step1.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 2 2018-08-16 15:52:00Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerDeployment_Step1.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 2 2018-08-16 15:52:01Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED Error: resources.AllNodesDeploySteps.resources.ControllerDeployment_Step1.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 2 2018-08-16 15:52:01Z [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.AllNodesDeploySteps.resources.ControllerDeployment_Step1.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 2 Stack overcloud CREATE_FAILED overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.0: resource_type: OS::Heat::StructuredDeployment physical_resource_id: 8edfbb96-9b4d-4839-8b17-f8abf0644475 status: CREATE_FAILED status_reason: | Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2 deploy_stdout: | ... 
"2018-08-16 18:51:54,967 ERROR: 23177 -- ERROR configuring neutron", "2018-08-16 18:51:54,967 ERROR: 23177 -- ERROR configuring horizon", "2018-08-16 18:51:54,968 ERROR: 23177 -- ERROR configuring heat_api_cfn" ] } to retry, use: --limit @/var/lib/heat-config/heat-config-ansible/48a5902a-5987-46e4-a06b-e3f5487bf3d2_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=26 changed=13 unreachable=0 failed=1 (truncated, view all with --long) deploy_stderr: | Heat Stack create failed. Heat Stack create failed. (undercloud) [stack at staging-director ~]$ When I checked the controller node I found that it had no default gateway configured Regards, Samuel -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: controller.yaml Type: application/x-yaml Size: 4441 bytes Desc: not available URL: From openstack at nemebean.com Thu Aug 16 20:35:23 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 16 Aug 2018 15:35:23 -0500 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <5B745814.1040008@windriver.com> Message-ID: <7fe7efa0-bb68-ed55-227c-cff2b45536a6@nemebean.com> Is there any plan to have a session where the current and future users of storyboard can get together and discuss how it's going? On 08/16/2018 02:47 PM, Jay S Bryant wrote: > Hey, > > Well, the attachments are one of the things holding us up along with > reduced participation in the project and a number of other challenges. > Getting the time to prepare for the move has been difficult. > > I am planning to take some time before the PTG to look at how Ironic has > been using Storyboard and take this forward to the team at the PTG to > try and spur the process along. 
> > Jay Bryant - (jungleboyj) > > > On 8/16/2018 2:22 PM, Kendall Nelson wrote: >> Hey :) >> >> Yes, I know attachments are important to a few projects. They are on >> our todo list and we plan to talk about how to implement them at the >> upcoming PTG[1]. >> >> Unfortunately, we have had other things that are taking priority over >> attachments. We would really love to migrate you all, but if >> attachments is what is really blocking you and there is no other >> workable solution, I'm more than willing to review patches if you want >> to help out to move things along a little faster :) >> >> -Kendall Nelson (diablo_rojo) >> >> [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning >> >> On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant > > wrote: >> >> >> >> On 8/15/2018 11:43 AM, Chris Friesen wrote: >> > On 08/14/2018 10:33 AM, Tobias Urdin wrote: >> > >> >> My goal is that we will be able to swap to Storyboard during the >> >> Stein cycle but >> >> considering that we have a low activity on >> >> bugs my opinion is that we could do this swap very easily anything >> >> soon as long >> >> as everybody is in favor of it. >> >> >> >> Please let me know what you think about moving to Storyboard? >> > >> > Not a puppet dev, but am currently using Storyboard. >> > >> > One of the things we've run into is that there is no way to >> attach log >> > files for bug reports to a story.  There's an open story on this[1] >> > but it's not assigned to anyone. >> > >> > Chris >> > >> > >> > [1] https://storyboard.openstack.org/#!/story/2003071 >> >> > >> Cinder is planning on holding on any migration, like Manila, until >> the >> file attachment issue is resolved. 
>> >> Jay >> > >> __________________________________________________________________________ >> >> > >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From kennelson11 at gmail.com Thu Aug 16 20:49:57 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 16 Aug 2018 13:49:57 -0700 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <7fe7efa0-bb68-ed55-227c-cff2b45536a6@nemebean.com> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <5B745814.1040008@windriver.com> <7fe7efa0-bb68-ed55-227c-cff2b45536a6@nemebean.com> Message-ID: We can definitely add that to our PTG discussion agenda if you want to come by with feedback. Octavia wrote up an Etherpad and passed that along to us. Either way works. Once the PTG bot is ready to go I plan to reserve a day or part of a day dedicated to StoryBoard. -Kendall On Thu, Aug 16, 2018 at 1:35 PM Ben Nemec wrote: > Is there any plan to have a session where the current and future users > of storyboard can get together and discuss how it's going?
> > On 08/16/2018 02:47 PM, Jay S Bryant wrote: > > Hey, > > > > Well, the attachments are one of the things holding us up along with > > reduced participation in the project and a number of other challenges. > > Getting the time to prepare for the move has been difficult. > > > > I am planning to take some time before the PTG to look at how Ironic has > > been using Storyboard and take this forward to the team at the PTG to > > try and spur the process along. > > > > Jay Bryant - (jungleboyj) > > > > > > On 8/16/2018 2:22 PM, Kendall Nelson wrote: > >> Hey :) > >> > >> Yes, I know attachments are important to a few projects. They are on > >> our todo list and we plan to talk about how to implement them at the > >> upcoming PTG[1]. > >> > >> Unfortunately, we have had other things that are taking priority over > >> attachments. We would really love to migrate you all, but if > >> attachments is what is really blocking you and there is no other > >> workable solution, I'm more than willing to review patches if you want > >> to help out to move things along a little faster :) > >> > >> -Kendall Nelson (diablo_rojo) > >> > >> [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning > >> > >> On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant >> > wrote: > >> > >> > >> > >> On 8/15/2018 11:43 AM, Chris Friesen wrote: > >> > On 08/14/2018 10:33 AM, Tobias Urdin wrote: > >> > > >> >> My goal is that we will be able to swap to Storyboard during the > >> >> Stein cycle but > >> >> considering that we have a low activity on > >> >> bugs my opinion is that we could do this swap very easily > anything > >> >> soon as long > >> >> as everybody is in favor of it. > >> >> > >> >> Please let me know what you think about moving to Storyboard? > >> > > >> > Not a puppet dev, but am currently using Storyboard. > >> > > >> > One of the things we've run into is that there is no way to > >> attach log > >> > files for bug reports to a story. 
There's an open story on > this[1] > >> > but it's not assigned to anyone. > >> > > >> > Chris > >> > > >> > > >> > [1] https://storyboard.openstack.org/#!/story/2003071 > >> > >> > > >> Cinder is planning on holding on any migration, like Manila, until > >> the > >> file attachment issue is resolved. > >> > >> Jay > >> > > >> > __________________________________________________________________________ > >> > >> > > >> > OpenStack Development Mailing List (not for usage questions) > >> > Unsubscribe: > >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tony at bakeyournoodle.com Thu Aug 16 20:56:11 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 17 Aug 2018 06:56:11 +1000 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <1534430677-sup-1211@lrrr.local> References: <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> <20180815052528.GA27536@thor.bakeyournoodle.com> <1534338477-sup-7813@lrrr.local> <20180815192801.GC27536@thor.bakeyournoodle.com> <38413aa0-4624-228c-1034-79fcbff49896@suse.com> <20180816053804.GF27536@thor.bakeyournoodle.com> <377c955d-b1a5-2d29-1ab6-333e0909e990@suse.com> <1534430677-sup-1211@lrrr.local> Message-ID: <20180816205602.GL27536@thor.bakeyournoodle.com> On Thu, Aug 16, 2018 at 10:45:32AM -0400, Doug Hellmann wrote: > Is there any reason we can't uncap pbr, at least within the CI jobs? It might work for the docs builds but jumping a major version of pbr, which if I recall caused problems at the time (hence the lower-bound) for all Ocata projects wouldn't happen. How terrible would it be to branch openstackdocstheme and backport the fix without the pbr changes? It might also be possible, though I'm not sure how we'd land it, to branch (stable/ocata) openstackdocstheme today and just revert the pbr changes to set the lower bound. If you let me know what the important changes are to functionality in openstackdocstheme I can play with it next week. Having said that I'm aware there is time pressure here so I'm happy for others to do it. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From kennelson11 at gmail.com Thu Aug 16 21:03:07 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 16 Aug 2018 14:03:07 -0700 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <5B745814.1040008@windriver.com> Message-ID: Hello :) On Thu, Aug 16, 2018 at 12:47 PM Jay S Bryant wrote: > Hey, > > Well, the attachments are one of the things holding us up along with > reduced participation in the project and a number of other challenges. > Getting the time to prepare for the move has been difficult. > > I wouldn't really say we have reduced participation- we've always been a small team. In the last year, we've actually seen more involvement from new contributors (new and future users of sb) which has been awesome :) We even had/have an outreachy intern that has been working on making searching and filtering even better. Prioritizing when to invest time to migrate has been hard for several projects so Cinder isn't alone, no worries :) > I am planning to take some time before the PTG to look at how Ironic has > been using Storyboard and take this forward to the team at the PTG to try > and spur the process along. > > Glad to hear it! Once I get the SB room on the schedule, you are welcome to join the conversations there. We would love any feedback you have on what the 'other challenges' are that you mentioned above. > Jay Bryant - (jungleboyj) > > On 8/16/2018 2:22 PM, Kendall Nelson wrote: > > Hey :) > > Yes, I know attachments are important to a few projects. They are on our > todo list and we plan to talk about how to implement them at the upcoming > PTG[1]. > > Unfortunately, we have had other things that are taking priority over > attachments. 
We would really love to migrate you all, but if attachments is > what is really blocking you and there is no other workable solution, I'm > more than willing to review patches if you want to help out to move things > along a little faster :) > > -Kendall Nelson (diablo_rojo) > > [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning > > On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant wrote: > >> >> >> On 8/15/2018 11:43 AM, Chris Friesen wrote: >> > On 08/14/2018 10:33 AM, Tobias Urdin wrote: >> > >> >> My goal is that we will be able to swap to Storyboard during the >> >> Stein cycle but >> >> considering that we have a low activity on >> >> bugs my opinion is that we could do this swap very easily anything >> >> soon as long >> >> as everybody is in favor of it. >> >> >> >> Please let me know what you think about moving to Storyboard? >> > >> > Not a puppet dev, but am currently using Storyboard. >> > >> > One of the things we've run into is that there is no way to attach log >> > files for bug reports to a story. There's an open story on this[1] >> > but it's not assigned to anyone. >> > >> > Chris >> > >> > >> > [1] https://storyboard.openstack.org/#!/story/2003071 >> > >> Cinder is planning on holding on any migration, like Manila, until the >> file attachment issue is resolved. >> >> Jay >> > >> __________________________________________________________________________ >> > >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > Thanks! 
- Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Thu Aug 16 21:05:53 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 16 Aug 2018 16:05:53 -0500 Subject: [openstack-dev] [all][tc] Technical Vision statement: feedback sought Message-ID: <75381239-7220-54f4-0b4b-d90db44d8898@redhat.com> The TC has undertaken to attempt to write a technical vision statement for OpenStack that documents the community's consensus on what we're trying to build. To date the only thing we've had to guide us is the mission statement[1], which is exactly one sentence long and uses undefined terms (like 'cloud'). That can lead to diverging perspectives and poor communication. No group is charged with designing OpenStack at a high level - it is the sum of what individual teams produce. So the only way we're going to end up with a coherent offering is if we're all moving in the same direction. The TC has also identified that we're having conversations about whether a new project fits with the OpenStack mission too late - only after the project applies to become official. We're hoping that updates to this document can provide a mechanism to have those conversations earlier. A first draft review is now available for comment: https://review.openstack.org/592205 We're soliciting feedback on the review, on the mailing list, on IRC during TC office hours or any time that's convenient to you in #openstack-tc, and during the PTG in Denver. If the vision as written broadly matches yours then we'd like to hear from you, and if it does not we *need* to hear from you. The goal is to have something that the entire community can buy into, and although that means not everyone will be able to get their way on every topic we are more than willing to make changes in order to find consensus. Everything is up for grabs, including the form and structure of the document itself. cheers, Zane. 
[1] https://docs.openstack.org/project-team-guide/introduction.html#the-mission From doug at doughellmann.com Thu Aug 16 21:09:22 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 16 Aug 2018 17:09:22 -0400 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <20180816205602.GL27536@thor.bakeyournoodle.com> References: <1534199243-sup-5500@lrrr.local> <1534278107-sup-8825@lrrr.local> <20180815052528.GA27536@thor.bakeyournoodle.com> <1534338477-sup-7813@lrrr.local> <20180815192801.GC27536@thor.bakeyournoodle.com> <38413aa0-4624-228c-1034-79fcbff49896@suse.com> <20180816053804.GF27536@thor.bakeyournoodle.com> <377c955d-b1a5-2d29-1ab6-333e0909e990@suse.com> <1534430677-sup-1211@lrrr.local> <20180816205602.GL27536@thor.bakeyournoodle.com> Message-ID: <1534453670-sup-8575@lrrr.local> Excerpts from Tony Breeds's message of 2018-08-17 06:56:11 +1000: > On Thu, Aug 16, 2018 at 10:45:32AM -0400, Doug Hellmann wrote: > > > Is there any reason we can't uncap pbr, at least within the CI jobs? > > It might work for the docs builds but jumping a major version of pbr, > which if I recall caused problems at the time (hence the lower-bound) > for all Ocata projects wouldn't happen. > > How terrible would it be to branch openstackdocstheme and backport the fix > without the pbr changes? It might also be possible, though I'm not sure > how we'd land it, to branch (stable/ocata) openstackdocstheme today and > just revert the pbr changes to set the lower bound. > > If you let me know what the important changes are to functionality in > openstackdocstheme I can play with it next week. Having said that I'm aware > there is time pressure here so I'm happy for others to do it > > Yours Tony.
The thing we need is the deprecation badge support in https://review.openstack.org/#/c/585517/ If backporting that to an older version of the theme is going to be easier, and we don't care about adding a feature to a stable branch for that, then I'm OK with doing it that way. Doug From openstack at nemebean.com Thu Aug 16 21:34:19 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 16 Aug 2018 16:34:19 -0500 Subject: [openstack-dev] [barbican][oslo][release] FFE request for castellan In-Reply-To: <1534352313.5705.35.camel@redhat.com> References: <1533914109.23178.37.camel@redhat.com> <20180814185634.GA26658@sm-workstation> <1534352313.5705.35.camel@redhat.com> Message-ID: <8f8add49-cb63-3452-cc7c-c812bfab0877@nemebean.com> The backport has merged and I've proposed the release here: https://review.openstack.org/592746 On 08/15/2018 11:58 AM, Ade Lee wrote: > Done. > > https://review.openstack.org/#/c/592154/ > > Thanks, > Ade > > On Wed, 2018-08-15 at 09:20 -0500, Ben Nemec wrote: >> >> On 08/14/2018 01:56 PM, Sean McGinnis wrote: >>>> On 08/10/2018 10:15 AM, Ade Lee wrote: >>>>> Hi all, >>>>> >>>>> I'd like to request a feature freeze exception to get the >>>>> following >>>>> change in for castellan. >>>>> >>>>> https://review.openstack.org/#/c/575800/ >>>>> >>>>> This extends the functionality of the vault backend to provide >>>>> previously unimplemented functionality, so it should not break >>>>> anyone. >>>>> >>>>> The castellan vault plugin is used behind barbican in the >>>>> barbican-vault plugin. We'd like to get this change into Rocky so that >>>>> we can >>>>> release Barbican with complete functionality on this backend >>>>> (along >>>>> with a complete set of passing functional tests). >>>> >>>> This does seem fairly low risk since it's just implementing a >>>> function that >>>> previously raised a NotImplemented exception.
However, with it >>>> being so >>>> late in the cycle I think we need the release team's input on >>>> whether this >>>> is possible. Most of the release FFE's I've seen have been for >>>> critical >>>> bugs, not actual new features. I've added that tag to this >>>> thread so >>>> hopefully they can weigh in. >>>> >>> >>> As far as releases go, this should be fine. If this doesn't affect >>> any other >>> projects and would just be a late merging feature, as long as the >>> castellan >>> team has considered the risk of adding code so late and is >>> comfortable with >>> that, this is OK. >>> >>> Castellan follows the cycle-with-intermediary release model, so the >>> final Rocky >>> release just needs to be done by next Thursday. I do see the >>> stable/rocky >>> branch has already been created for this repo, so it would need to >>> merge to >>> master first (technically stein), then get cherry-picked to >>> stable/rocky. >> >> Okay, sounds good. It's already merged to master so we're good >> there. >> >> Ade, can you get the backport proposed? 
>> >> _____________________________________________________________________ >> _____ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs >> cribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fungi at yuggoth.org Thu Aug 16 21:35:17 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 16 Aug 2018 21:35:17 +0000 Subject: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2 In-Reply-To: <20180816205602.GL27536@thor.bakeyournoodle.com> References: <1534278107-sup-8825@lrrr.local> <20180815052528.GA27536@thor.bakeyournoodle.com> <1534338477-sup-7813@lrrr.local> <20180815192801.GC27536@thor.bakeyournoodle.com> <38413aa0-4624-228c-1034-79fcbff49896@suse.com> <20180816053804.GF27536@thor.bakeyournoodle.com> <377c955d-b1a5-2d29-1ab6-333e0909e990@suse.com> <1534430677-sup-1211@lrrr.local> <20180816205602.GL27536@thor.bakeyournoodle.com> Message-ID: <20180816213516.ta5pm2xc77js74n7@yuggoth.org> On 2018-08-17 06:56:11 +1000 (+1000), Tony Breeds wrote: [...] > How terrible would it be to branch openstackdocstheme and backport > the fix without the pbr changes? It might also be possible, > though I'm not sure how we'd land it, to branch (stable/ocata) > openstackdocstheme today and just revert the pbr changes to set > the lower bound. [...] I think it would also be entirely reasonable to just not worry about it, and let the people who asked for extended maintenance on older branches do the legwork. We previously limited the number we'd keep open because keeping those older branches updatable does in fact require quite a bit of effort. 
When we agreed to the suggestion of not closing branches, it was with the understanding that they won't just suddenly get taken care of by the same people who were already not doing that because they considered it a lot of extra work. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sbaker at redhat.com Thu Aug 16 22:25:00 2018 From: sbaker at redhat.com (Steve Baker) Date: Fri, 17 Aug 2018 10:25:00 +1200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: Message-ID: On 15/08/18 21:32, Cédric Jeanneret wrote: > Dear Community, > > As you may know, a move toward Podman as replacement of Docker is starting. > > One of the issues with podman is the lack of daemon, precisely the lack > of a socket allowing to send commands and get a "computer formatted > output" (like JSON or YAML or...). > > In order to work that out, Podman has added support for varlink¹, using > the "socket activation" feature in Systemd. > > On my side, I would like to push forward the integration of varlink in > TripleO deployed containers, especially since it will allow the following: > # proper interface with Paunch (via python link) I'm not sure this would be desirable. If we're going to do all container management via a socket I think we'd be better supported by using CRI-O. One of the advantages I see of podman is being able to manage services with systemd again.
> # a way to manage containers from within specific containers (think > "healthcheck", "monitoring") by mounting the socket as a shared volume > > # a way to get container statistics (think "metrics") > > # a way, if needed, to get an ansible module being able to talk to > podman (JSON is always better than plain text) > > # a way to secure the accesses to Podman management (we have to define > how varlink talks to Podman, maybe providing dedicated socket with > dedicated rights so that we can have dedicated users for specific tasks) Some of these cases might prove to be useful, but I do wonder if just making podman calls would be just as simple without the complexity of having another host-level service to manage. We can still do podman operations inside containers by bind-mounting in the container state. > That said, I have some questions: > ° Does any of you have some experience with varlink and podman interface? > ° What do you think about that integration wish? > ° Does any of you have concern with this possible addition? I do worry a bit that it is advocating for a solution before we really understand the problems. The biggest unknown for me is what we do about healthchecks. Maybe varlink is part of the solution here, or maybe it's a systemd timer which executes the healthcheck and restarts the service when required. > Thank you for your feedback and ideas. > > Have a great day (or evening, or whatever suits the time you're reading > this ;))! > > C. > > > ¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/ > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
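The varlink interface discussed in the thread above is a very thin protocol: each call is a single JSON object written to a Unix socket and terminated by a NUL byte, with fully-qualified method names (podman's interface exposed calls such as io.podman.ListContainers over a socket like /run/podman/io.podman). A minimal sketch of framing such a request; the helper below is an illustrative assumption, not podman or TripleO code:

```python
import json


def varlink_request(method, parameters=None):
    """Frame a varlink method call: one JSON object followed by a NUL byte.

    `method` is fully qualified (interface name plus member), e.g.
    "io.podman.ListContainers". This helper only builds the request
    frame; it is a sketch, not part of any real varlink client library.
    """
    call = {"method": method}
    if parameters is not None:
        call["parameters"] = parameters
    return json.dumps(call).encode("utf-8") + b"\0"


# A client would write this frame to the activated socket
# (e.g. /run/podman/io.podman) and read back a NUL-terminated JSON reply.
frame = varlink_request("io.podman.ListContainers")
print(frame)  # → b'{"method": "io.podman.ListContainers"}\x00'
```

A real client would also parse the NUL-terminated reply and check for an "error" member before trusting the returned parameters.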
URL: From melwittt at gmail.com Fri Aug 17 01:10:14 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 16 Aug 2018 18:10:14 -0700 Subject: [openstack-dev] [nova][vmware] need help triaging a vmware driver bug Message-ID: <45e95976-1e14-c466-8b4f-45aff35df4fb@gmail.com> Hello VMware peeps, I've been trying to triage a bug in New status for the VMware driver without success: https://bugs.launchpad.net/nova/+bug/1744182 - can not create instance when using vmware nova driver I tend to think the problem is not related to nova because the instance create fails with a message that sounds related to the VMware backend: 2018-01-18 06:40:01.738 7 ERROR nova.compute.manager [req-bc40738a-a3ee-4d9c-bd67-32e6fb32df08 32e0ed602bc549f48f7caf401420b628 7179dd1be7ef4cf2906b41b97970a0f6 - default default] [instance: b4b7cabe-f78b-40d9-8856-3b6c213efd73] Instance failed to spawn: VimFaultException: An error occurred during host configuration. Faults: ['PlatformConfigFault'] And VMware CI has been running in the gate and successfully creating instances during the tempest tests. Can anyone help triage this bug? Thanks in advance. Best, -melanie From ekcs.openstack at gmail.com Fri Aug 17 02:51:45 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Thu, 16 Aug 2018 19:51:45 -0700 Subject: [openstack-dev] [tempest][qa][congress] help with tempest plugin jobs against stable/queens In-Reply-To: <1653bdd2b84.b757c4d214571.1609333555586743410@ghanshyammann.com> References: <1653bdd2b84.b757c4d214571.1609333555586743410@ghanshyammann.com> Message-ID: On Tue, Aug 14, 2018 at 9:34 PM, Ghanshyam Mann wrote: > ---- On Wed, 15 Aug 2018 09:37:18 +0900 Eric K wrote ---- > > I'm adding jobs [1] to the tempest plugin to run tests against > > congress stable/queens. The job output seems to show stable/queens > > getting checked out [2], but I know the test is *not* run against > > queens because it's using features not available in queens. 
The > > expected result is for several tests to fail as seen here [3]. All > > hints and tips much appreciated! > > You are doing it in right way by 'override-checkout: stable/queens'. And as log also show, congress is checkout from stable/queens. I tried to check the results but could not get what tests should fail and why. > > If you can give me more idea, i can debug that. > > -gmann Thanks so much gmann! For example, looking at 'congress_tempest_plugin.tests.scenario.congress_datasources.test_vitrage.TestVitrageDriver' here: http://logs.openstack.org/61/591861/1/check/congress-devstack-api-mysql/36bacbe/logs/testr_results.html.gz It shows passing 1 of 1, but that feature is not in the queens branch at all. The expected result can be seen here: http://logs.openstack.org/05/591805/2/check/congress-devstack-api-mysql/7d7b28e/logs/testr_results.html.gz > > > > > [1] https://review.openstack.org/#/c/591861/1 > > [2] http://logs.openstack.org/61/591861/1/check/congress-devstack-api-mysql-queens/f7b5752/job-output.txt.gz#_2018-08-14_22_30_36_899501 > > [3] https://review.openstack.org/#/c/591805/ (the depends-on is > > irrelevant because that patch has been merged) > > From zhengzhenyulixi at gmail.com Fri Aug 17 02:55:33 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Fri, 17 Aug 2018 10:55:33 +0800 Subject: [openstack-dev] [Nova] A multi-cell instance-list performance test In-Reply-To: References: Message-ID: Hi, Thanks alot for the reply, for your question #2, we did tests with two kinds of deployments: 1. There is only 1 DB with all 10 cells(also cell0) and it is on the same server with the API; 2. We took 5 of the DBs to another machine on the same rack to test out if it matters, and it turns out there are no big differences. 
For question #3, we did a test with limit = 1000 and 10 cells: as we can see, the CPU workload from API process and MySQL query is both high in the first 3 seconds, but start from the 4th second, only API process occupies the CPU, and the memory consumption is low comparing to the CPU consumption. And this is tested with the patch fix posted in previous mail. [image: image.png] [image: image.png] BR, Kevin On Fri, Aug 17, 2018 at 2:45 AM Dan Smith wrote: > > yes, the DB query was in serial, after some investigation, it seems > that we are unable to perform eventlet.mockey_patch in uWSGI mode, so > > Yikun made this fix: > > > > https://review.openstack.org/#/c/592285/ > > Cool, good catch :) > > > > > After making this change, we test again, and we got this kind of data: > > > > total collect sort view > > before monkey_patch 13.5745 11.7012 1.1511 0.5966 > > after monkey_patch 12.8367 10.5471 1.5642 0.6041 > > > > The performance improved a little, and from the log we can saw: > > Since these all took ~1s when done in series, but now take ~10s in > parallel, I think you must be hitting some performance bottleneck in > either case, which is why the overall time barely changes. Some ideas: > > 1. In the real world, I think you really need to have 10x database > servers or at least a DB server with plenty of cores loading from a > very fast (or separate) disk in order to really ensure you're getting > full parallelism of the DB work. However, because these queries all > took ~1s in your serialized case, I expect this is not your problem. > > 2. What does the network look like between the api machine and the DB? > > 3. What do the memory and CPU usage of the api process look like while > this is happening? > > Related to #3, even though we issue the requests to the DB in parallel, > we still process the result of those calls in series in a single python > thread on the API. 
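The serial-result-processing bottleneck described here (parallel DB queries, but a single Python thread consuming the results) can be sketched as a scatter-gather pattern. This is an illustration only: nova's real code is the eventlet-based scatter_gather_cells helper in nova.context, and the names below are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def query_cell(cell_id):
    # Placeholder for a per-cell DB query; returns raw rows.
    return [{"cell": cell_id, "uuid": "inst-%d-%d" % (cell_id, i)}
            for i in range(3)]

def list_instances(cell_ids):
    # Scatter: issue one query per cell concurrently.
    with ThreadPoolExecutor(max_workers=len(cell_ids)) as pool:
        results = pool.map(query_cell, cell_ids)
    # Gather: the results are still consumed serially in one thread.
    # Reading sockets, building SQLAlchemy objects and converting them
    # to nova objects all happens here, which is why the API process
    # can peg a single core even though the queries ran in parallel.
    instances = []
    for rows in results:
        instances.extend(rows)
    return instances
```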
That means all the work of reading the data from the > socket, constructing the SQLA objects, turning those into nova objects, > etc, all happens serially. It could be that the DB query is really a > small part of the overall time and our serialized python handling of the > result is the slow part. If you see the api process pegging a single > core at 100% for ten seconds, I think that's likely what is happening. > > > so, now the queries are in parallel, but the whole thing still seems > > serial. > > In your table, you show the time for "1 cell, 1000 instances" as ~3s and > "10 cells, 1000 instances" as 10s. The problem with comparing those > directly is that in the latter, you're actually pulling 10,000 records > over the network, into memory, processing them, and then just returning > the first 1000 from the sort. A closer comparison would be the "10 > cells, 100 instances" with "1 cell, 1000 instances". In both of those > cases, you pull 1000 instances total from the db, into memory, and > return 1000 from the sort. In that case, the multi-cell situation is > faster (~2.3s vs. ~3.1s). You could also compare the "10 cells, 1000 > instances" case to "1 cell, 10,000 instances" just to confirm at the > larger scale that it's better or at least the same. > > We _have_ to pull $limit instances from each cell, in case (according to > the sort key) the first $limit instances are all in one cell. We _could_ > try to batch the results from each cell to avoid loading so many that we > don't need, but we punted this as an optimization to be done later. I'm > not sure it's really worth the complexity at this point, but it's > something we could investigate. 
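The "pull $limit from each cell, then sort" behaviour described in the paragraph above can be sketched as a merge over pre-sorted per-cell result sets. This is a simplification for illustration — nova's actual cross-cell listing logic lives elsewhere (nova.compute.multi_cell_list), and the helper below is invented:

```python
import heapq
from itertools import islice

def list_across_cells(cells, limit, sort_key="created_at"):
    # Each cell must contribute its own first $limit rows, because in
    # the worst case the globally-first $limit instances all live in
    # a single cell.
    per_cell = [sorted(cell, key=lambda r: r[sort_key])[:limit]
                for cell in cells]
    # Merge the already-sorted per-cell lists lazily and keep only the
    # first $limit overall; the remaining fetched rows are discarded,
    # which is the overhead the batching optimization would avoid.
    merged = heapq.merge(*per_cell, key=lambda r: r[sort_key])
    return list(islice(merged, limit))
```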
> > --Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 30600 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 28172 bytes Desc: not available URL: From cjeanner at redhat.com Fri Aug 17 05:34:59 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Fri, 17 Aug 2018 07:34:59 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <5f4f72a3-5f89-0712-0397-5513b833b70a@redhat.com> References: <5f4f72a3-5f89-0712-0397-5513b833b70a@redhat.com> Message-ID: >> On my side, I would like to push forward the integration of varlink in >> TripleO deployed containers, especially since it will allow the >> following: >> # proper interface with Paunch (via python link) > > "integration of varlink in TripleO deployed containers" sounds like we'd > need to make some changes to the containers themselves, but is that the > case? As i read the docs, it seems like a management API wrapper for > Podman, so just an alternative interface to Podman CLI. I'd expect we'd > use varlink from Paunch, but probably not from the containers > themselves? (Perhaps that's what you meant, just making sure we're on > the same page.) In fact, the "podman varlink thing" is already distributed with the podman package. In order to activate that socket, we just need to activate a systemd unit that will create the socket - the "service" itself is activated only when the socket is accessed. 
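The socket-activation mechanism mentioned above is plain systemd machinery; a unit along these lines is what the podman package ships (the unit name and socket path follow the io.podman units current at the time, but check the installed package — this is a sketch, not the authoritative file):

```ini
# io.podman.socket -- systemd owns and listens on the socket; the
# matching io.podman.service is only started on the first connection,
# so there is no long-running daemon when nothing talks to it.
[Unit]
Description=Podman Remote API Socket

[Socket]
ListenStream=%t/podman/io.podman
SocketMode=0600

[Install]
WantedBy=sockets.target
```

Activating it would then be a matter of `systemctl enable --now io.podman.socket`.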
The only thing we might need to add as a package is the libvarlink-util (provides the "varlink" command) and the python3 binding (python3-libvarlink iirc). Varlink "activation" itself doesn't affect the containers. And yep, it's just an alternative to `podman' CLI, providing a nicer computer interface with python3 bindings in order to avoid "subprocess.Popen" and the like, providing a nice JSON output (well. mostly - I detected at least one output not properly formatted). > >> >> # a way to manage containers from within specific containers (think >> "healthcheck", "monitoring") by mounting the socket as a shared volume > > I think healthchecks are currently quite Docker-specific, so we could > have a Podman-specific alternative here. We should be careful about how > much container runtime specificity we introduce and keep though, and > we'll probably have to amend our tools (e.g. pre-upgrade validations > [2]) to work with both, at least until we decide whether to really make > a full transition to Podman or not. Of course - I just listed the possibilities activating varlink would provide - proper PoCs and tests are to be done ;). > >> >> # a way to get container statistics (think "metrics") >> >> # a way, if needed, to get an ansible module being able to talk to >> podman (JSON is always better than plain text) >> >> # a way to secure the accesses to Podman management (we have to define >> how varlink talks to Podman, maybe providing dedicated socket with >> dedicated rights so that we can have dedicated users for specific tasks) >> >> That said, I have some questions: >> ° Does any of you have some experience with varlink and podman interface? >> ° What do you think about that integration wish? >> ° Does any of you have concern with this possible addition? > > I like it, but we should probably sync up with Podman community if they > consider varlink a "supported" interface for controlling Podman, and > it's not just an experiment which will vanish. 
To me it certainly looks > like a much better programmable interface than composing CLI calls and > parsing their output, but we should make sure Podman folks think so too :) I think we can say "supported", since they provide the varlink socket and service directly in podman package. In addition, it was a request: https://trello.com/c/8RQ6ZF4A/565-8-add-podman-varlink-subcommand https://github.com/containers/libpod/pull/627 and it's pretty well followed regarding both issues and libpod API updates. I'll ping them in order to validate that feeling. > > Thanks for looking into this > > Jirka > > [2] https://review.openstack.org/#/c/582502/ > >> >> Thank you for your feedback and ideas. >> >> Have a great day (or evening, or whatever suits the time you're reading >> this ;))! >> >> C. >> >> >> ¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/ >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From cjeanner at redhat.com Fri Aug 17 05:45:47 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Fri, 17 Aug 2018 07:45:47 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: Message-ID: <69cb3ecc-0fa9-d43b-7f44-e01bed9fd240@redhat.com> On 08/17/2018 12:25 AM, Steve Baker wrote: > > > On 15/08/18 21:32, Cédric Jeanneret wrote: >> Dear Community, >> >> As you may know, a move toward Podman as replacement of Docker is starting. >> >> One of the issues with podman is the lack of daemon, precisely the lack >> of a socket allowing to send commands and get a "computer formatted >> output" (like JSON or YAML or...). >> >> In order to work that out, Podman has added support for varlink¹, using >> the "socket activation" feature in Systemd. >> >> On my side, I would like to push forward the integration of varlink in >> TripleO deployed containers, especially since it will allow the following: >> # proper interface with Paunch (via python link) > I'm not sure this would be desirable. If we're going to all container > management via a socket I think we'd be better supported by using CRI-O. > One of the advantages I see of podman is being able to manage services > with systemd again. Using the socket wouldn't prevent a "per service" systemd unit. Varlink would just provide another way to manage the containers. It's NOT like the docker daemon - it will not manage the containers on startup for example. It's just an API endpoint, without any "automated powers". See it as an interesting complement to the CLI, allowing to access containers data easily with a computer-oriented language like python3. 
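To illustrate the "computer-oriented language" point: the value of varlink is that the reply is structured data rather than CLI text to scrape. The consuming side can be exercised on its own; the commented-out call shows roughly how the python3 varlink binding would be used, but the io.podman interface and method names there are assumptions and should be checked against the installed libpod version:

```python
import json

def running_containers(reply_json):
    # A varlink reply is already structured, so there is no fragile
    # parsing of `podman ps` text output -- just walk the dictionaries.
    reply = json.loads(reply_json)
    return [c["names"] for c in reply.get("containers", [])
            if c.get("status") == "running"]

# With the python3 varlink binding, the reply would come from something
# like the following (hypothetical names, not verified against libpod):
#
#   with varlink.Client("unix:/run/podman/io.podman") as client, \
#           client.open("io.podman") as podman:
#       reply = podman.ListContainers()

# Canned sample standing in for a real varlink reply:
sample = json.dumps({"containers": [
    {"names": "keystone", "status": "running"},
    {"names": "old_job", "status": "exited"},
]})
```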
>> # a way to manage containers from within specific containers (think >> "healthcheck", "monitoring") by mounting the socket as a shared volume >> >> # a way to get container statistics (think "metrics") >> >> # a way, if needed, to get an ansible module being able to talk to >> podman (JSON is always better than plain text) >> >> # a way to secure the accesses to Podman management (we have to define >> how varlink talks to Podman, maybe providing dedicated socket with >> dedicated rights so that we can have dedicated users for specific tasks) > Some of these cases might prove to be useful, but I do wonder if just > making podman calls would be just as simple without the complexity of > having another host-level service to manage. We can still do podman > operations inside containers by bind-mounting in the container state. I wouldn't mount the container state as-is for mainly security reasons. I'd rather get the varlink abstraction rather than the plain `podman' CLI - in addition, it is far, far easier for applications to get a proper JSON instead of some random plain text - even if `podman' seems to get a "--format" option. I really dislike calling "subprocess" things when there is a nice API interface - maybe that's just me ;). In addition, apparently the state is managed by some sqlite DB - concurrent accesses to that DB isn't really a good idea, we really don't want a corruption, do we? > >> That said, I have some questions: >> ° Does any of you have some experience with varlink and podman interface? >> ° What do you think about that integration wish? >> ° Does any of you have concern with this possible addition? > I do worry a bit that it is advocating for a solution before we really > understand the problems. The biggest unknown for me is what we do about > healthchecks. Maybe varlink is part of the solution here, or maybe its a > systemd timer which executes the healthcheck and restarts the service > when required. Maybe. 
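The systemd-timer alternative for healthchecks floated above could look something like this — a hypothetical sketch for a single service, with all unit names, paths and the restart target invented for illustration:

```ini
# keystone-healthcheck.timer (hypothetical) -- fire the check regularly.
[Timer]
OnUnitActiveSec=30s

[Install]
WantedBy=timers.target

# keystone-healthcheck.service (hypothetical) -- run the check inside
# the container; on a non-zero exit, systemd triggers the OnFailure=
# unit, which could restart the container's own service unit.
[Unit]
Description=Healthcheck for the keystone container
OnFailure=tripleo-keystone-restart.service

[Service]
Type=oneshot
ExecStart=/usr/bin/podman exec keystone /openstack/healthcheck
```

This keeps healthchecking entirely in systemd, leaving varlink for external monitoring and metrics as suggested.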
My main concern is: would it be interesting to compare both solutions? The Healthchecks are clearly docker-specific, no interface exists atm in the libpod for that. So we have to mimic it in the best way. Maybe the healthchecks place is in systemd, and varlink would be used only for external monitoring and metrics. That would also be a nice way to explore. I would not focus on only one of the possibilities I've listed. There are probably even more possibilities I didn't see - once we get a proper socket, anything is possible, the good and the bad ;). >> Thank you for your feedback and ideas. >> >> Have a great day (or evening, or whatever suits the time you're reading >> this ;))! >> >> C. >> >> >> ¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/ >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From superuser151093 at gmail.com Fri Aug 17 06:37:03 2018 From: superuser151093 at gmail.com (super user) Date: Fri, 17 Aug 2018 15:37:03 +0900 Subject: [openstack-dev] [goal][python3] more updates to the goal tools In-Reply-To: <1533682621-sup-2284@lrrr.local> References: <1533682621-sup-2284@lrrr.local> Message-ID: Hi Doug, I'm Nguyen Hai. 
I proposed the python3-first patch set for designate projects. However, I have met this error to designate and designate-dashboard: === ../Output/designate/openstack/designate @ master === ./tools/python3-first/do_repo.sh ../Output/designate/openstack/designate master 24292 ++ cat ../Output/designate/openstack/designate/.gitreview ++ grep project ++ cut -f2 -d= + actual=openstack/designate.git +++ dirname ../Output/designate/openstack/designate ++ basename ../Output/designate/openstack ++ basename ../Output/designate/openstack/designate + expected=openstack/designate + '[' openstack/designate.git '!=' openstack/designate -a openstack/designate.git '!=' openstack/designate.git ']' + git -C ../Output/designate/openstack/designate review -s Creating a git remote called 'gerrit' that maps to: ssh:// nguyentrihai at review.openstack.org:29418/openstack/designate.git ++ basename master + new_branch=python3-first-master + git -C ../Output/designate/openstack/designate branch + grep -q python3-first-master + echo 'creating python3-first-master' creating python3-first-master + git -C ../Output/designate/openstack/designate checkout -- . 
+ git -C ../Output/designate/openstack/designate clean -f -d + git -C ../Output/designate/openstack/designate checkout -q origin/master + git -C ../Output/designate/openstack/designate checkout -b python3-first-master Switched to a new branch 'python3-first-master' + python3-first -v --debug jobs update ../Output/designate/openstack/designate determining repository name from .gitreview working on openstack/designate @ master looking for zuul config in ../Output/designate/openstack/designate/.zuul.yaml using zuul config from ../Output/designate/openstack/designate/.zuul.yaml loading project settings from ../project-config/zuul.d/projects.yaml loading project templates from ../openstack-zuul-jobs/zuul.d/project-templates.yaml loading jobs from ../openstack-zuul-jobs/zuul.d/jobs.yaml looking for settings for openstack/designate looking at template 'openstack-python-jobs' looking at template 'openstack-python35-jobs' looking at template 'publish-openstack-sphinx-docs' looking at template 'periodic-stable-jobs' looking at template 'check-requirements' did not find template definition for 'check-requirements' looking at template 'translation-jobs-master-stable' looking at template 'release-notes-jobs' looking at template 'api-ref-jobs' looking at template 'install-guide-jobs' looking at template 'release-openstack-server' filtering on master merging templates adding openstack-python-jobs adding openstack-python35-jobs adding publish-openstack-sphinx-docs adding periodic-stable-jobs adding check-requirements adding release-notes-jobs adding install-guide-jobs merging pipeline check *unhashable type: 'CommentedMap'* *Traceback (most recent call last):* * File "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", line 402, in run_subcommand* * result = cmd.run(parsed_args)* * File "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/command.py", line 184, in run* * return_code = self.take_action(parsed_args) 
or 0* * File "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", line 531, in take_action* * entry,* * File "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", line 397, in merge_project_settings* * up.get(pipeline, comments.CommentedMap()),* * File "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", line 362, in merge_pipeline* * if job_name in job_names:* *TypeError: unhashable type: 'CommentedMap'* *Traceback (most recent call last):* * File "/home/stack/python3-first/goal-tools/.tox/venv/bin/python3-first", line 10, in * * sys.exit(main())* * File "/home/stack/python3-first/goal-tools/goal_tools/python3_first/main.py", line 42, in main* * return Python3First().run(argv)* * File "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", line 281, in run* * result = self.run_subcommand(remainder)* * File "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", line 402, in run_subcommand* * result = cmd.run(parsed_args)* * File "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/command.py", line 184, in run* * return_code = self.take_action(parsed_args) or 0* * File "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", line 531, in take_action* * entry,* * File "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", line 397, in merge_project_settings* * up.get(pipeline, comments.CommentedMap()),* * File "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", line 362, in merge_pipeline* * if job_name in job_names:* *TypeError: unhashable type: 'CommentedMap'* *+ echo 'No changes'* *No changes* *+ exit 1* On Wed, Aug 8, 2018 at 7:58 AM Doug Hellmann wrote: > Champions, > > I have made quite a few changes to the tools for generating the zuul > migration patches today. 
If you have any patches you generated locally > for testing, please check out the latest version of the tool (when all > of the changes merge) and regenerate them. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Fri Aug 17 07:14:51 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Fri, 17 Aug 2018 09:14:51 +0200 Subject: [openstack-dev] [Openstack-operators] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <01cc050e-c74b-a133-4020-6e0f219b7158@binero.se> Message-ID: <7a5ea840-687b-449a-75e0-d5fb9268e46a@binero.se> Hello Kendall, I went through the list of projects [1] and could only really see two things. 1) puppet-rally and puppet-openstack-guide is missing 2) We have some support projects which doesn't really need bug tracking, where some others do.     You can remove puppet-openstack-specs and puppet-openstack-cookiecutter all others would be     nice to still have left so we can track bugs. [2] Best regards Tobias [1] https://storyboard-dev.openstack.org/#!/project_group/60 [2] Keeping puppet-openstack-integration (integration testing) and puppet-openstack_spec_helper (helper for testing).       These two usually has a lot of changes so would be good to be able to track them. On 08/16/2018 09:40 PM, Kendall Nelson wrote: > Hey :) > > I created all the puppet openstack repos in the storyboard-dev > envrionment and made a project group[1]. I am struggling a bit with > finding all of your launchpad projects to perform the migrations > through, can you share a list of all of them? 
> > -Kendall (diablo_rojo) > > [1] https://storyboard-dev.openstack.org/#!/project_group/60 > > > On Wed, Aug 15, 2018 at 12:08 AM Tobias Urdin > wrote: > > Hello Kendall, > > Thanks for your reply, that sounds awesome! > We can then dig around and see how everything looks when all > project bugs are imported to stories. > > I see no issues with being able to move to Storyboard anytime soon > if the feedback for > moving is positive. > > Best regards > > Tobias > > > On 08/14/2018 09:06 PM, Kendall Nelson wrote: >> Hello! >> >> The error you hit can be resolved by adding launchpadlib to your >> tox.ini if I recall correctly.. >> >> also, if you'd like, I can run a test migration of puppet's >> launchpad projects into our storyboard-dev db (where I've done a >> ton of other test migrations) if you want to see how it >> looks/works with a larger db. Just let me know and I can kick it >> off. >> >> As for a time to migrate, if you all are good with it, we usually >> schedule for Friday's so there is even less activity. Its a small >> project config change and then we just need an infra core to kick >> off the script once the change merges. >> >> -Kendall (diablo_rojo) >> >> On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin >> > wrote: >> >> Hello all incredible Puppeters, >> >> I've tested setting up an Storyboard instance and test migrated >> puppet-ceph and it went without any issues there using the >> documentation >> [1] [2] >> with just one minor issue during the SB setup [3]. >> >> My goal is that we will be able to swap to Storyboard during >> the Stein >> cycle but considering that we have a low activity on >> bugs my opinion is that we could do this swap very easily >> anything soon >> as long as everybody is in favor of it. >> >> Please let me know what you think about moving to Storyboard? >> If everybody is in favor of it we can request a migration to >> infra >> according to documentation [2]. 
>> >> I will continue to test the import of all our project while >> people are >> collecting their thoughts and feedback :) >> >> Best regards >> Tobias >> >> [1] >> https://docs.openstack.org/infra/storyboard/install/development.html >> [2] https://docs.openstack.org/infra/storyboard/migration.html >> [3] It failed with an error about launchpadlib not being >> installed, >> solved with `tox -e venv pip install launchpadlib` >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Fri Aug 17 07:47:31 2018 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Fri, 17 Aug 2018 09:47:31 +0200 Subject: [openstack-dev] [charms] Deployment guide stable/rocky cut Message-ID: Hello OpenStack charmers, I am writing to inform you that a `stable/rocky` branch has been cut for the `openstack/charm-deployment-guide` repository. Should there be any further updates to the guide before the release the changes will need to be landed in `master` and then back-ported to `stable/rocky`. -- Frode Nordahl Software Engineer Canonical Ltd. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rgerganov at vmware.com Fri Aug 17 07:50:30 2018 From: rgerganov at vmware.com (Radoslav Gerganov) Date: Fri, 17 Aug 2018 10:50:30 +0300 Subject: [openstack-dev] [nova][vmware] need help triaging a vmware driver bug In-Reply-To: <45e95976-1e14-c466-8b4f-45aff35df4fb@gmail.com> References: <45e95976-1e14-c466-8b4f-45aff35df4fb@gmail.com> Message-ID: Hi, On 17.08.2018 04:10, melanie witt wrote: > > Can anyone help triage this bug? > I have requested more info from the person who submitted this and provided some tips how to correlate nova-compute logs to vCenter logs in order to better understand what went wrong. Would it be possible to include this kind of information in the Launchpad bug template for VMware related bugs? Thanks, Rado From skaplons at redhat.com Fri Aug 17 08:16:35 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 17 Aug 2018 10:16:35 +0200 Subject: [openstack-dev] [neutron] Broken pep8 job Message-ID: Hi, It looks that pep8 job in Neutron is currently broken because of new version of bandit (1.5.0). If You have in Your patch failure of pep8 job with error like [1] please don’t recheck as it will not help. I did some patch which should fix it [2]. Will let You know when it will be fixed and You will be able to rebase You patches. [1] http://logs.openstack.org/37/382037/67/check/openstack-tox-pep8/e2bbd84/job-output.txt.gz#_2018-08-16_21_45_55_366148 [2] https://review.openstack.org/#/c/592884/ — Slawek Kaplonski Senior software engineer Red Hat From dougal at redhat.com Fri Aug 17 08:36:15 2018 From: dougal at redhat.com (Dougal Matthews) Date: Fri, 17 Aug 2018 09:36:15 +0100 Subject: [openstack-dev] [mistral] Denver PTG Message-ID: Hey all, I wanted to reach out and see who is interested in attending the Mistral sessions at the Denver PTG. Unfortunately I wont be able to make it but Renat Akhmerov may be able to go and run the sessions. Most of the other Mistral cores wont be able to attend unfortunately. 
Please reply as soon as you can. Thanks, Dougal -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Aug 17 09:02:09 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 17 Aug 2018 11:02:09 +0200 Subject: [openstack-dev] [nova] Rocky blueprint burndown chart In-Reply-To: <045ab2da-8784-03e6-ad82-8d013a95d2d7@gmail.com> References: <045ab2da-8784-03e6-ad82-8d013a95d2d7@gmail.com> Message-ID: melanie witt wrote: > [...] > If you have feedback or thoughts on any of this, feel free to reply to > this thread or add your comments to the Rocky retrospective etherpad [4] > and we can discuss at the PTG. That is great data, thanks for compiling and publishing it ! As far as burndown charts go, it looks healthy (specs getting completed along the way, not approving a lot more than you can actually take). -- Thierry Carrez (ttx) From clayc at hpe.com Fri Aug 17 09:03:05 2018 From: clayc at hpe.com (Chang, Clay (HPS OE-Linux TDC)) Date: Fri, 17 Aug 2018 09:03:05 +0000 Subject: [openstack-dev] [Cinder] How to mount NFS volume? Message-ID: Hi, I have Cinder configured with NFS backend. On one bare metal node, I can use 'cinder create' to create the volume with specified size - I saw a volume file create on the NFS server, so I suppose the NFS was configured correctly. My question is, how could I mount the NFS volume on the bare metal node? I tried: cinder local-attach 3f66c360-e2e1-471e-aa36-57db3fcf3bdb -mountpoint /mnt/tmp it says: "ERROR: Connect to volume via protocol NFS not supported" I looked at https://github.com/openstack/python-brick-cinderclient-ext/blob/master/brick_cinderclient_ext/volume_actions.py, found only iSCSI, RBD and FIBRE_CHANNEL were supported. Wondering if there are ways to mount the NFS volume? Thanks, Clay -------------- next part -------------- An HTML attachment was scrubbed... 
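Since python-brick-cinderclient-ext only implements iSCSI, RBD and Fibre Channel connectors, one manual workaround is possible because the Cinder NFS driver stores each volume as a raw file named volume-&lt;id&gt; on the share. This is an untested sketch — the server, export path and mountpoints are placeholders, the loop device may differ, and attaching this way bypasses Cinder's attachment tracking entirely:

```shell
# 1. Mount the NFS export that backs the Cinder NFS driver.
mount -t nfs nfs.example.com:/srv/cinder /mnt/cinder-nfs

# 2. The volume is a raw file on the share; attach it via a loop
#    device so it can be formatted/mounted like a block device.
losetup --find --show \
    /mnt/cinder-nfs/volume-3f66c360-e2e1-471e-aa36-57db3fcf3bdb

# 3. Mount whatever loop device losetup reported (e.g. /dev/loop0).
mount /dev/loop0 /mnt/tmp
```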
URL: From zhengzhenyulixi at gmail.com Fri Aug 17 09:12:44 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Fri, 17 Aug 2018 17:12:44 +0800 Subject: [openstack-dev] [Nova] A multi-cell instance-list performance test In-Reply-To: References: Message-ID: Hi We have tried out the patch: https://review.openstack.org/#/c/592698/ we also applied https://review.openstack.org/#/c/592285/ it turns out that we are able to half the overall time consumption, we did try with different sort key and dirs, the results are similar, we didn't try out paging yet: [image: image.png] BR, Kevin Zheng On Fri, Aug 17, 2018 at 10:55 AM Zhenyu Zheng wrote: > Hi, > > Thanks alot for the reply, for your question #2, we did tests with two > kinds of deployments: 1. There is only 1 DB with all 10 cells(also cell0) > and it is on the same server with > the API; 2. We took 5 of the DBs to another machine on the same rack to > test out if it matters, and it turns out there are no big differences. > > For question #3, we did a test with limit = 1000 and 10 cells: > as we can see, the CPU workload from API process and MySQL query is both > high in the first 3 seconds, but start from the 4th second, only API > process occupies the CPU, > and the memory consumption is low comparing to the CPU consumption. And > this is tested with the patch fix posted in previous mail. 
> > [image: image.png] > > [image: image.png] > > BR, > > Kevin > > On Fri, Aug 17, 2018 at 2:45 AM Dan Smith wrote: > >> > yes, the DB query was in serial, after some investigation, it seems >> that we are unable to perform eventlet.mockey_patch in uWSGI mode, so >> > Yikun made this fix: >> > >> > https://review.openstack.org/#/c/592285/ >> >> Cool, good catch :) >> >> > >> > After making this change, we test again, and we got this kind of data: >> > >> > total collect sort view >> > before monkey_patch 13.5745 11.7012 1.1511 0.5966 >> > after monkey_patch 12.8367 10.5471 1.5642 0.6041 >> > >> > The performance improved a little, and from the log we can saw: >> >> Since these all took ~1s when done in series, but now take ~10s in >> parallel, I think you must be hitting some performance bottleneck in >> either case, which is why the overall time barely changes. Some ideas: >> >> 1. In the real world, I think you really need to have 10x database >> servers or at least a DB server with plenty of cores loading from a >> very fast (or separate) disk in order to really ensure you're getting >> full parallelism of the DB work. However, because these queries all >> took ~1s in your serialized case, I expect this is not your problem. >> >> 2. What does the network look like between the api machine and the DB? >> >> 3. What do the memory and CPU usage of the api process look like while >> this is happening? >> >> Related to #3, even though we issue the requests to the DB in parallel, >> we still process the result of those calls in series in a single python >> thread on the API. That means all the work of reading the data from the >> socket, constructing the SQLA objects, turning those into nova objects, >> etc, all happens serially. It could be that the DB query is really a >> small part of the overall time and our serialized python handling of the >> result is the slow part. 
If you see the api process pegging a single >> core at 100% for ten seconds, I think that's likely what is happening. >> >> > so, now the queries are in parallel, but the whole thing still seems >> > serial. >> >> In your table, you show the time for "1 cell, 1000 instances" as ~3s and >> "10 cells, 1000 instances" as 10s. The problem with comparing those >> directly is that in the latter, you're actually pulling 10,000 records >> over the network, into memory, processing them, and then just returning >> the first 1000 from the sort. A closer comparison would be the "10 >> cells, 100 instances" with "1 cell, 1000 instances". In both of those >> cases, you pull 1000 instances total from the db, into memory, and >> return 1000 from the sort. In that case, the multi-cell situation is >> faster (~2.3s vs. ~3.1s). You could also compare the "10 cells, 1000 >> instances" case to "1 cell, 10,000 instances" just to confirm at the >> larger scale that it's better or at least the same. >> >> We _have_ to pull $limit instances from each cell, in case (according to >> the sort key) the first $limit instances are all in one cell. We _could_ >> try to batch the results from each cell to avoid loading so many that we >> don't need, but we punted this as an optimization to be done later. I'm >> not sure it's really worth the complexity at this point, but it's >> something we could investigate. >> >> --Dan >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
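Dan's point above (pull up to $limit rows from each cell, then sort and truncate) can be sketched with a lazy k-way merge; this is an illustrative sketch, not nova's actual implementation:

```python
import heapq
from itertools import islice

def list_across_cells(per_cell_results, sort_key, limit):
    """Merge per-cell result lists (each already sorted on sort_key)
    and keep only the first `limit` records overall.

    heapq.merge is lazy, so it compares only as many records as the
    limit requires, but each cell must still have been asked for up
    to `limit` rows, since the first `limit` records could in theory
    all live in one cell.
    """
    merged = heapq.merge(*per_cell_results,
                         key=lambda rec: rec[sort_key])
    return list(islice(merged, limit))
```

With 10 cells and limit=1000 this still transfers up to 10,000 rows from the databases, matching the cost described above; the merge itself is cheap compared to pulling the rows and building objects from them.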
Name: image.png Type: image/png Size: 30600 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 28172 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 194499 bytes Desc: not available URL: From colleen at gazlene.net Fri Aug 17 09:16:38 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 17 Aug 2018 11:16:38 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 6 August 2018 In-Reply-To: References: <1533915998.2993501.1470046096.3F011E8B@webmail.messagingengine.com> Message-ID: <1534497398.1330740.1477195672.3A81ABBA@webmail.messagingengine.com> On Sat, Aug 11, 2018, at 11:14 AM, Lance Bragstad wrote: > On Fri, Aug 10, 2018, 23:47 Colleen Murphy wrote: > > > # Keystone Team Update - Week of 6 August 2018 > > > > ## News > > > > ### RC1 > > > > We released RC1 this week[1]. Please try it out and be on the lookout for > > critical bugs. As of yet we don't seem to have any showstoppers that would > > require another RC. > > > Should we rev the keystone version for the inclusion of the new default > roles? > > > > [1] https://releases.openstack.org/rocky/index.html#rocky-keystone [snipped] To close the loop on this, we discussed on IRC[2], Lance was talking about the API version, not the release version, and we decided that although the bootstrap change allows the new roles to be created at initialization, it doesn't guarantee that roles will be in every deployment, and so it's not a feature we can advertise with an API version bump. 
Colleen [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-13.log.html#t2018-08-13T07:56:22 From no-reply at openstack.org Fri Aug 17 09:50:42 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 17 Aug 2018 09:50:42 -0000 Subject: [openstack-dev] glance 17.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for glance for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/glance/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/glance/log/?h=stable/rocky Release notes for glance can be found at: https://docs.openstack.org/releasenotes/glance/ From balazs.gibizer at ericsson.com Fri Aug 17 10:10:37 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 17 Aug 2018 12:10:37 +0200 Subject: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict In-Reply-To: References: <1534419109.24276.3@smtp.office365.com> <1534419803.3149.0@smtp.office365.com> Message-ID: <1534500637.29318.1@smtp.office365.com> On Thu, Aug 16, 2018 at 5:34 PM, Eric Fried wrote: > Thanks for this, gibi. > > > TL;DR: a). > > I didn't look, but I'm pretty sure we're not caching allocations in > the > report client. Today, nobody outside of nova (specifically the > resource > tracker via the report client) is supposed to be mucking with instance > allocations, right? And given the global lock in the resource tracker, > it should be pretty difficult to race e.g. a resize and a delete in > any > meaningful way. So short term, IMO it is reasonable to treat any > generation conflict as an error. 
No retries. Possible wrinkle on > delete, > where it should be a failure unless forced. Yes, today the instance_uuid and migraton_uuid consumers in placement are only changed from nova. Right now I don't have any examples where nova is racing with itself on a instance or migration consumer. We could try hitting the Nova API in parallel with different server lifecycle operations against the same server to see if we can find races. But until such race is discovered we can go with option a) > > Long term, I also can't come up with any scenario where it would be > appropriate to do a narrowly-focused GET+merge/replace+retry. But > implementing the above short-term plan shouldn't prevent us from > adding > retries for individual scenarios later if we do uncover places where > it > makes sense. > Later when resources consumed by a server will be handled outside of nova, like bandwidth from neutron and accelerators from cyborg we might see cases when nova will not be the only module changing a instance_uuid consumer. Then we have to decide how to handle that. I think one solution could be to make sure Nova knows about the bandwidth and accelerator resource needs of a server even if it is provided by neutron or cyborg. This knowledge is anyhow necessary to support atomic resource claim in the scheduler. For neturon ports this will be done through the resource_request attribute of the port. So even if the resource need of a port changes nova can go back to neutron and query the current need. This way nova can implement the following generic algorithm for every operation where nova wants to change the instance_uuid consumer in placement: * collect the server current resource needs (might involve reading it from flavor, from neutron port, from cyborg accelerator) and apply the change nova wants to make (e.g. delete, move, resize). 
* GET current consumer view from placement * merge the two and push the result back to placement > Here's some stream-of-consciousness that led me to the above opinions: > > - On spawn, we send the allocation with a consumer gen of None because > we expect the consumer not to exist. If it exists, that should be a > hard > fail. (Hopefully the only way this happens is a true UUID conflict.) > > - On migration, when we create the migration UUID, ditto above ^ I agree on both. I suggest returning HTTP 500 as we need a bug report about these cases. > > - On migration, when we transfer the allocations in either direction, > a > conflict means someone managed to resize (or otherwise change > allocations?) since the last time we pulled data. Given the global > lock > in the report client, this should have been tough to do. If it does > happen, I would think any retry would need to be done all the way back > at the claim, which I imagine is higher up than we should go. So > again, > I think we should fail the migration and make the user retry. Do we want to fail the whole migration or just the migration step (e.g. confirm, revert)? The later means that failure during confirm or revert would put the instance back to VERIFY_RESIZE. While the former would mean that in case of conflict at confirm we try an automatic revert. But for a conflict at revert we can only put the instance to ERROR state. > > - On destroy, a conflict again means someone managed a resize despite > the global lock. If I'm deleting an instance and something about it > changes, I would think I want the opportunity to reevaluate my > decision > to delete it. That said, I would definitely want a way to force it (in > which case we can just use the DELETE call explicitly). But neither > case > should be a retry, and certainly there is no destroy scenario where I > would want a "merging" of allocations to happen. Good idea about allowing forcing the delete. 
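The read-and-update retry described as option b) can be sketched as below. The placement semantics are real as of microversion 1.28 (GET /allocations/{consumer_uuid} returns a consumer_generation, and a PUT with a stale generation is rejected with a conflict), but the `placement` client object and its method names here are hypothetical, not nova's actual report client:

```python
class ConsumerGenerationConflict(Exception):
    """Raised when placement rejects a PUT due to a stale generation."""


def update_allocations(placement, consumer_uuid, new_allocations,
                       max_retries=3):
    # Option b): read the current generation, PUT with it, and blindly
    # retry a few times if another writer bumped the generation first.
    for _ in range(max_retries):
        current = placement.get_allocations(consumer_uuid)
        payload = {
            "allocations": new_allocations,
            "consumer_generation": current["consumer_generation"],
        }
        try:
            placement.put_allocations(consumer_uuid, payload)
            return
        except ConsumerGenerationConflict:
            continue  # someone else won the race; re-read and retry
    # Out of retries: fail the lifecycle operation, per option b).
    raise ConsumerGenerationConflict(consumer_uuid)
```

Passing new_allocations = {} gives the delete-server case discussed in this thread, while a forced delete would instead call DELETE /allocations/{consumer_uuid}, which ignores the consumer generation entirely.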
So a simple DELETE /servers/{instance_uuid} could fail on a consumer conflict, but a POST /servers/{instance_uuid}/action with a forceDelete body would use DELETE /allocations and therefore ignore any consumer generation. Cheers, gibi > > Thanks, > efried > > > On 08/16/2018 06:43 AM, Balázs Gibizer wrote: >> reformatted for readability, sorry: >> >> Hi, >> >> tl;dr: To properly use consumer generations (placement 1.28) in Nova >> we >> need to decide how to handle a consumer generation conflict from Nova's >> perspective: >> a) Nova reads the current consumer_generation before the allocation >> update operation and uses that generation in the allocation update >> operation. If the allocation is changed between the read and the >> update then nova fails the server lifecycle operation and lets the >> end user retry it. >> b) Like a), but in case of conflict nova blindly retries the >> read-and-update operation pair a couple of times and only fails >> the lifecycle operation if it runs out of retries. >> c) Nova stores its own view of the allocation. When a consumer's >> allocation needs to be modified then nova reads the current state >> of the consumer from placement. Then nova combines the two >> allocations to generate the new expected consumer state. In case >> of a generation conflict nova retries the read-combine-update >> operation triplet. >> >> Which way should we go now? >> >> What should be our long-term goal? >> >> >> Details: >> >> There are plenty of affected lifecycle operations. See the patch >> series >> starting at [1]. >> >> For example: >> >> The current patch[1] that handles the delete server case implements >> option b). It simply reads the current consumer generation from >> placement and uses that to send a PUT /allocations/{instance_uuid} >> with >> "allocations": {} in its body.
>> >> Here implementing option c) would mean that during server delete >> nova >> needs: >> 1) to compile its own view of the resource needs of the server >> (currently based on the flavor, but in the future based on the >> attached ports' resource requests as well) >> 2) then read the current allocation of the server from placement >> 3) then subtract the server's resource needs from the current >> allocation >> and send the resulting allocation back in the update to placement >> >> In the simple case this subtraction would result in an empty >> allocation >> sent to placement. Also in this simple case c) has the same effect >> as >> b) as currently implemented in [1]. >> >> However, if somebody outside of nova modifies the allocation of this >> consumer in a way that nova does not know about such a changed >> resource >> need, then b) and c) will result in different placement states after >> server delete. >> >> I only know of one example, the change of a neutron port's resource >> request while the port is attached. (Note, it is out of scope in the >> first step of the bandwidth implementation.) In this specific example >> option c) can work if nova re-reads the port's resource request >> during >> delete when it recalculates its own view of the server's resource needs. >> But >> I don't know if every other resource (e.g. accelerators) used by a >> server can be / will be handled this way. >> >> >> Other examples of affected lifecycle operations: >> >> During a server migration, moving the source host allocation from the >> instance_uuid to the migration_uuid fails with a consumer generation >> conflict because of the instance_uuid consumer generation.
[2] >> >> Confirming a migration fails as the deletion of the source host >> allocation fails due to the consumer generation conflict of the >> migration_uuid consumer that is being emptied.[3] >> >> During scheduling of a new server putting allocation to >> instance_uuid >> fails as the scheduler assumes that it is a new consumer and >> therefore >> uses consumer_generation: None for the allocation, but placement >> reports generation conflict. [4] >> >> During a non-forced evacuation the scheduler tries to claim the >> resource on the destination host with the instance_uuid, but that >> consumer already holds the source allocation therefore the scheduler >> cannot assume that the instance_uuid is a new consumer. [4] >> >> >> [1] https://review.openstack.org/#/c/591597 >> [2] https://review.openstack.org/#/c/591810 >> [3] https://review.openstack.org/#/c/591811 >> [4] https://review.openstack.org/#/c/583667 >> >> >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dabarren at gmail.com Fri Aug 17 10:52:32 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Fri, 17 Aug 2018 12:52:32 +0200 Subject: [openstack-dev] [kolla][ptg] Denver PTG on-site or virtual Message-ID: Fellow kolleages. In september is Denver PTG, as per the etherpad [0] only 3 contributors confirmed their presence in the PTG, we expected more people to be there as previous PTGs were full of contributors and operators. 
In the last kolla meeting [1] we discussed whether we should hold a virtual PTG rather than an on-site one, as we would probably reach a bigger number of attendees. This puts us in a bad position either way: If we do an on-site PTG - Small representation for a whole cycle's design, this one being larger than usual. - Many people willing to attend are not able to be there. If we do a virtual PTG - Some people have already spent money to travel for the kolla PTG - PTG rooms are already reserved for kolla sessions - No cross-project discussion If there are more people going to Denver who haven't signed up at the etherpad, please confirm your presence, as it will probably influence this topic. Here is the tough question... What kind of PTG do you prefer for this one, virtual or on-site in Denver? CC to Kendall Nelson from the foundation in case she can help us with this tough decision; given the little time we have until the PTG, both options have some bad consequences for the project and the contributors. [0] https://etherpad.openstack.org/p/kolla-stein-ptg-planning [1] http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-08-15-15.00.log.html#l-13 Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergely.csatari at nokia.com Fri Aug 17 11:24:24 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Fri, 17 Aug 2018 11:24:24 +0000 Subject: [openstack-dev] [tripleo][Edge][FEMDC] Edge clouds and controlplane updates In-Reply-To: References: <41de3af6-5f7e-94e5-cfe3-a9090fb8218f@redhat.com> <0a519cf3-41b6-d040-3759-0c036a44f869@redhat.com> Message-ID: Hi, Some comments inline.
From: Alan Bishop Sent: Thursday, August 16, 2018 7:09 PM On Tue, Aug 14, 2018 at 9:20 AM Bogdan Dobrelya > wrote: On 8/13/18 9:47 PM, Giulio Fidente wrote: > Hello, > > I'd like to get some feedback regarding the remaining > work for the split controlplane spec implementation [1] > > Specifically, while for some services like nova-compute it is not > necessary to update the controlplane nodes after an edge cloud is > deployed, for other services, like cinder (or glance, probably > others), it is necessary to do an update of the config files on the > controlplane when a new edge cloud is deployed. [G0]: What is the reason to run a shared cinder in an edge cloud infrastructure? Maybe it is a better approach to run an individual Cinder in every edge cloud instance. > In fact for services like cinder or glance, which are hosted in the > controlplane, we need to pull data from the edge clouds (for example > the newly deployed ceph cluster keyrings and fsid) to configure cinder > (or glance) with a new backend. [G0]: Solution ideas for Glance are listed in [3]. > It looks like this demands for some architectural changes to solve the > following two: > > - how do we trigger/drive updates of the controlplane nodes after the > edge cloud is deployed? Note, there is also a strict(?) requirement of local management capabilities for edge clouds temporary disconnected off the central controlplane. That complicates the updates triggering even more. We'll need at least a notification-and-triggering system to perform required state synchronizations, including conflicts resolving. If that's the case, the architecture changes for TripleO deployment framework are inevitable AFAICT. This is another interesting point. I don't mean to disregard it, but want to highlight the issue that Giulio and I (and others, I'm sure) are focused on. As a cinder guy, I'll use cinder as an example. 
Cinder services running in the control plane need to be aware of the storage "backends" deployed at the Edge. So if a split-stack deployment includes edge nodes running a ceph cluster, the cinder services need to be updated to add the ceph cluster as a new cinder backend. So, not only is control plane data needed in order to deploy an additional stack at the edge, data from the edge deployment needs to be fed back into a subsequent stack update in the controlplane. Otherwise, cinder (and other storage services) will have no way of utilizing ceph clusters at the edge. > > - how do we scale the controlplane parameters to accomodate for N > backends of the same type? Yes, this is also a big problem for me. Currently, TripleO can deploy cinder with multiple heterogeneous backends (e.g. one each of ceph, NFS, Vendor X, Vendor Y, etc.). However, the current THT do not let you deploy multiple instances of the same backend (e.g. more than one ceph). If the goal is to deploy multiple edge nodes consisting of Compute+Ceph, then TripleO will need the ability to deploy multiple homogeneous cinder backends. This requirement will likely apply to glance and manila as well. > A very rough approach to the latter could be to use jinja to scale up > the CephClient service so that we can have multiple copies of it in the > controlplane. > > Each instance of CephClient should provide the ceph config file and > keyring necessary for each cinder (or glance) backend. > > Also note that Ceph is only a particular example but we'd need a similar > workflow for any backend type. > > The etherpad for the PTG session [2] touches this, but it'd be good to > start this conversation before then. > > 1. > https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html > > 2. 
https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane > [3]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment Br, Gerg0 -- Best regards, Bogdan Dobrelya, Irc #bogdando __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Aug 17 11:27:04 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 17 Aug 2018 12:27:04 +0100 Subject: [openstack-dev] [kolla][ptg] Denver PTG on-site or virtual In-Reply-To: References: Message-ID: As one of the lucky three kolleagues able to make the PTG, here's my position (inline). On 17 August 2018 at 11:52, Eduardo Gonzalez wrote: > Fellow kolleages. > > In september is Denver PTG, as per the etherpad [0] only 3 contributors > confirmed their presence in the PTG, we expected more people to be there as > previous PTGs were full of contributors and operators. > > In the last kolla meeting [1] with discussed if we should make a virtual > PTG rather than a on-site one as we will probably reach a bigger number of > attendance. > > This set us in a bad possition as per: > > If we do an on-site PTG > > - Small representation for a whole cycle design, being this one larger > than usual. > - Many people whiling to attend is not able to be there. > I agree that three is too small a number to justify an on-site PTG. I was planning to split my time between kolla and ironic, so being able to focus on one project would be beneficial to me, assuming the virtual PTG takes place at a different time. I could still split my time if the virtual PTG occurs at the same time. > > If we do a virtual PTG > > - Some people already spend money to travel for kolla PTG > I would be going anyway. 
> - PTG rooms are already reserved for kolla session > If the virtual PTG occurs at the same time, we could use the (oversized) reserved room to dial into calls. - No cross project discussion > Happy to attend on behalf of kolla and feed back to the team. > > If there are more people who is going to Denver and haven't signed up at > the etherpad, please confirm your presence as it will probably influence on > this topic. > > Here is the though question... > > What kind of PTG do you prefer for this one, virtual or on-site in Denver? > Virtual makes sense to me. > > CC to Kendall Nelson from the foundation if she could help us on this > though decission, given the small time we have until the PTG both ways have > some kind of bad consecuencies for both the project and the contributors. > > [0] https://etherpad.openstack.org/p/kolla-stein-ptg-planning > [1] http://eavesdrop.openstack.org/meetings/kolla/2018/kolla. > 2018-08-15-15.00.log.html#l-13 > > Regards > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Fri Aug 17 11:46:45 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Fri, 17 Aug 2018 14:46:45 +0300 Subject: [openstack-dev] [Cinder] How to mount NFS volume? In-Reply-To: References: Message-ID: Hi Clay, Unfortunately, local-attach doesn't support NFS-based volumes due to the security reasons. We haven't the good solution now for multi-tenant environments. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Fri, Aug 17, 2018 at 12:03 PM, Chang, Clay (HPS OE-Linux TDC) < clayc at hpe.com> wrote: > Hi, > > > > I have Cinder configured with NFS backend. 
On one bare metal node, I can > use ‘cinder create’ to create the volume with specified size – I saw a > volume file create on the NFS server, so I suppose the NFS was configured > correctly. > > > > My question is, how could I mount the NFS volume on the bare metal node? > > > > I tried: > > > > cinder local-attach 3f66c360-e2e1-471e-aa36-57db3fcf3bdb –mountpoint > /mnt/tmp > > > > it says: > > > > “ERROR: Connect to volume via protocol NFS not supported” > > > > I looked at https://github.com/openstack/python-brick-cinderclient-ext/b > lob/master/brick_cinderclient_ext/volume_actions.py, found only iSCSI, > RBD and FIBRE_CHANNEL were supported. > > > > Wondering if there are ways to mount the NFS volume? > > > > Thanks, > > Clay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From flux.adam at gmail.com Fri Aug 17 11:54:15 2018 From: flux.adam at gmail.com (Adam Harwell) Date: Fri, 17 Aug 2018 20:54:15 +0900 Subject: [openstack-dev] [kolla][ptg] Denver PTG on-site or virtual In-Reply-To: References: Message-ID: As one of the other two in the etherpad, I will say that I was looking forward to getting together face to face with other contributors for the first time (as I am new to the project), but I guess the majority won't actually be there, and I understand that we need to do what is best for the majority as well. I know that at least one or maybe two other people from my team were also planning to attend some Kolla sessions, so I'll see if I can get them to sign up. 
The other projects I'll be focused on are Octavia and Barbican, and I know both have been successful with a hybrid approach in the past (providing video of the room and allowing folks to dial in and contribute, while also having a number of people present physically). Since the room is already reserved, I don't see a huge point in avoiding its use either. --Adam On Fri, Aug 17, 2018, 20:27 Mark Goddard wrote: > As one of the lucky three kolleagues able to make the PTG, here's my > position (inline). > > On 17 August 2018 at 11:52, Eduardo Gonzalez wrote: > >> Fellow kolleages. >> >> In september is Denver PTG, as per the etherpad [0] only 3 contributors >> confirmed their presence in the PTG, we expected more people to be there as >> previous PTGs were full of contributors and operators. >> >> In the last kolla meeting [1] with discussed if we should make a virtual >> PTG rather than a on-site one as we will probably reach a bigger number of >> attendance. >> >> This set us in a bad possition as per: >> >> If we do an on-site PTG >> >> - Small representation for a whole cycle design, being this one larger >> than usual. >> - Many people whiling to attend is not able to be there. >> > > I agree that three is too small a number to justify an on-site PTG. I was > planning to split my time between kolla and ironic, so being able to focus > on one project would be beneficial to me, assuming the virtual PTG takes > place at a different time. I could still split my time if the virtual PTG > occurs at the same time. > > >> >> If we do a virtual PTG >> >> - Some people already spend money to travel for kolla PTG >> > > I would be going anyway. > > >> - PTG rooms are already reserved for kolla session >> > > If the virtual PTG occurs at the same time, we could use the (oversized) > reserved room to dial into calls. > > - No cross project discussion >> > > Happy to attend on behalf of kolla and feed back to the team. 
> >> >> If there are more people who is going to Denver and haven't signed up at >> the etherpad, please confirm your presence as it will probably influence on >> this topic. >> >> Here is the though question... >> >> What kind of PTG do you prefer for this one, virtual or on-site in Denver? >> > > Virtual makes sense to me. > >> >> CC to Kendall Nelson from the foundation if she could help us on this >> though decission, given the small time we have until the PTG both ways have >> some kind of bad consecuencies for both the project and the contributors. >> >> [0] https://etherpad.openstack.org/p/kolla-stein-ptg-planning >> [1] >> http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-08-15-15.00.log.html#l-13 >> >> Regards >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Aug 17 11:59:19 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 17 Aug 2018 12:59:19 +0100 Subject: [openstack-dev] [kolla][ptg] Denver PTG on-site or virtual In-Reply-To: References: Message-ID: Whether there is a physical PTG session or not, I'd certainly like to meet up with other folks who are using and/or contributing to Kolla, let's be sure to make time for that. 
Mark

On 17 August 2018 at 12:54, Adam Harwell wrote:
> As one of the other two in the etherpad, I will say that I was looking
> forward to getting together face to face with other contributors for the
> first time (as I am new to the project), but I guess the majority won't
> actually be there, and I understand that we need to do what is best for the
> majority as well.
> I know that at least one or maybe two other people from my team were also
> planning to attend some Kolla sessions, so I'll see if I can get them to
> sign up.
> The other projects I'll be focused on are Octavia and Barbican, and I know
> both have been successful with a hybrid approach in the past (providing
> video of the room and allowing folks to dial in and contribute, while also
> having a number of people present physically).
> Since the room is already reserved, I don't see a huge point in avoiding
> its use either.
>
> --Adam
>
> On Fri, Aug 17, 2018, 20:27 Mark Goddard wrote:
>
>> As one of the lucky three kolleagues able to make the PTG, here's my
>> position (inline).
>>
>> On 17 August 2018 at 11:52, Eduardo Gonzalez wrote:
>>
>>> Fellow kolleagues.
>>>
>>> The Denver PTG is in September. As per the etherpad [0], only 3
>>> contributors have confirmed their presence at the PTG; we expected more
>>> people to be there, as previous PTGs were full of contributors and
>>> operators.
>>>
>>> In the last kolla meeting [1] we discussed whether we should hold a
>>> virtual PTG rather than an on-site one, as we would probably reach a
>>> bigger attendance.
>>>
>>> This puts us in a bad position either way:
>>>
>>> If we do an on-site PTG
>>>
>>> - Small representation for a whole cycle's design, this one being
>>> larger than usual.
>>> - Many people willing to attend are not able to be there.
>>
>> I agree that three is too small a number to justify an on-site PTG. I was
>> planning to split my time between kolla and ironic, so being able to focus
>> on one project would be beneficial to me, assuming the virtual PTG takes
>> place at a different time. I could still split my time if the virtual PTG
>> occurs at the same time.
>>
>>> If we do a virtual PTG
>>>
>>> - Some people have already spent money to travel for the kolla PTG
>>
>> I would be going anyway.
>>
>>> - PTG rooms are already reserved for the kolla sessions
>>
>> If the virtual PTG occurs at the same time, we could use the (oversized)
>> reserved room to dial into calls.
>>
>>> - No cross-project discussion
>>
>> Happy to attend on behalf of kolla and feed back to the team.
>>
>>> If there are more people who are going to Denver and haven't signed up
>>> on the etherpad, please confirm your presence, as it will probably
>>> influence this topic.
>>>
>>> Here is the tough question...
>>>
>>> What kind of PTG do you prefer for this one: virtual, or on-site in
>>> Denver?
>>
>> Virtual makes sense to me.
>>
>>> CC to Kendall Nelson from the foundation, in case she can help us with
>>> this tough decision; given the little time we have until the PTG, both
>>> options have some bad consequences for both the project and the
>>> contributors.
>>>
>>> [0] https://etherpad.openstack.org/p/kolla-stein-ptg-planning
>>> [1] http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-08-15-15.00.log.html#l-13
>>>
>>> Regards
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dms at danplanet.com Fri Aug 17 13:36:20 2018
From: dms at danplanet.com (Dan Smith)
Date: Fri, 17 Aug 2018 06:36:20 -0700
Subject: [openstack-dev] [Nova] A multi-cell instance-list performance test
In-Reply-To: (Zhenyu Zheng's message of "Fri, 17 Aug 2018 17:12:44 +0800")
References:
Message-ID:

> We have tried out the patch:
> https://review.openstack.org/#/c/592698/
> we also applied https://review.openstack.org/#/c/592285/
>
> it turns out that we are able to halve the overall time consumption; we
> tried different sort keys and dirs, the results are similar, and we
> didn't try out paging yet:

Excellent! Let's continue discussion of the batching approach in that review. There are some other things to try. Thanks!
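For readers following along, the batching approach under discussion boils down to this: each cell returns results already sorted by the requested key, and the API layer merges the per-cell streams lazily instead of fetching `limit` records from every cell up front. A rough sketch under stated assumptions — the data layout, field names, and batch size here are illustrative, not the actual patch in review:

```python
import heapq

def list_instances(cells, sort_key, limit, batch_size=2):
    """Merge per-cell result lists that are already sorted by sort_key.

    cells: mapping of cell name -> list of instance dicts (a hypothetical
    stand-in for a per-cell database query). batch_size mimics pulling
    records from each cell in small batches rather than `limit` at once.
    """
    def batches(records):
        # Yield records lazily in batch_size chunks; in the real service
        # each chunk would correspond to a fresh per-cell DB query.
        for start in range(0, len(records), batch_size):
            yield from records[start:start + batch_size]

    streams = [batches(sorted(insts, key=lambda i: i[sort_key]))
               for insts in cells.values()]
    # heapq.merge does a lazy k-way merge of the sorted per-cell streams.
    merged = heapq.merge(*streams, key=lambda i: i[sort_key])
    return [inst for _, inst in zip(range(limit), merged)]

cells = {
    "cell1": [{"uuid": "a", "created_at": 3}, {"uuid": "b", "created_at": 1}],
    "cell2": [{"uuid": "c", "created_at": 2}, {"uuid": "d", "created_at": 4}],
}
print([i["uuid"] for i in list_instances(cells, "created_at", 3)])
# -> ['b', 'c', 'a']
```

The point of the lazy merge is that cells whose records sort late contribute little work when only the first page of results is needed.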
--Dan

From flux.adam at gmail.com Fri Aug 17 13:51:18 2018
From: flux.adam at gmail.com (Adam Harwell)
Date: Fri, 17 Aug 2018 22:51:18 +0900
Subject: [openstack-dev] [kolla][ptg] Denver PTG on-site or virtual
In-Reply-To:
References:
Message-ID:

Yeah, definitely! Worst case, we spend some time huddled around a table by the bar, and that isn't too bad in my book. ;)

--Adam

On Fri, Aug 17, 2018, 20:59 Mark Goddard wrote:
> Whether there is a physical PTG session or not, I'd certainly like to meet
> up with other folks who are using and/or contributing to Kolla; let's be
> sure to make time for that.
> Mark
>
> On 17 August 2018 at 12:54, Adam Harwell wrote:
>
>> As one of the other two in the etherpad, I will say that I was looking
>> forward to getting together face to face with other contributors for the
>> first time (as I am new to the project), but I guess the majority won't
>> actually be there, and I understand that we need to do what is best for
>> the majority as well.
>> I know that at least one or maybe two other people from my team were also
>> planning to attend some Kolla sessions, so I'll see if I can get them to
>> sign up.
>> The other projects I'll be focused on are Octavia and Barbican, and I
>> know both have been successful with a hybrid approach in the past
>> (providing video of the room and allowing folks to dial in and contribute,
>> while also having a number of people present physically).
>> Since the room is already reserved, I don't see a huge point in avoiding
>> its use either.
>>
>> --Adam
>>
>> On Fri, Aug 17, 2018, 20:27 Mark Goddard wrote:
>>
>>> As one of the lucky three kolleagues able to make the PTG, here's my
>>> position (inline).
>>>
>>> On 17 August 2018 at 11:52, Eduardo Gonzalez wrote:
>>>
>>>> Fellow kolleagues.
>>>>
>>>> The Denver PTG is in September. As per the etherpad [0], only 3
>>>> contributors have confirmed their presence at the PTG; we expected more
>>>> people to be there, as previous PTGs were full of contributors and
>>>> operators.
>>>>
>>>> In the last kolla meeting [1] we discussed whether we should hold a
>>>> virtual PTG rather than an on-site one, as we would probably reach a
>>>> bigger attendance.
>>>>
>>>> This puts us in a bad position either way:
>>>>
>>>> If we do an on-site PTG
>>>>
>>>> - Small representation for a whole cycle's design, this one being
>>>> larger than usual.
>>>> - Many people willing to attend are not able to be there.
>>>
>>> I agree that three is too small a number to justify an on-site PTG. I
>>> was planning to split my time between kolla and ironic, so being able to
>>> focus on one project would be beneficial to me, assuming the virtual PTG
>>> takes place at a different time. I could still split my time if the
>>> virtual PTG occurs at the same time.
>>>
>>>> If we do a virtual PTG
>>>>
>>>> - Some people have already spent money to travel for the kolla PTG
>>>
>>> I would be going anyway.
>>>
>>>> - PTG rooms are already reserved for the kolla sessions
>>>
>>> If the virtual PTG occurs at the same time, we could use the
>>> (oversized) reserved room to dial into calls.
>>>
>>>> - No cross-project discussion
>>>
>>> Happy to attend on behalf of kolla and feed back to the team.
>>>
>>>> If there are more people who are going to Denver and haven't signed up
>>>> on the etherpad, please confirm your presence, as it will probably
>>>> influence this topic.
>>>>
>>>> Here is the tough question...
>>>>
>>>> What kind of PTG do you prefer for this one: virtual, or on-site in
>>>> Denver?
>>>
>>> Virtual makes sense to me.
>>>
>>>> CC to Kendall Nelson from the foundation, in case she can help us with
>>>> this tough decision; given the little time we have until the PTG, both
>>>> options have some bad consequences for both the project and the
>>>> contributors.
>>>>
>>>> [0] https://etherpad.openstack.org/p/kolla-stein-ptg-planning
>>>> [1] http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-08-15-15.00.log.html#l-13
>>>>
>>>> Regards
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jungleboyj at gmail.com Fri Aug 17 14:05:23 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 17 Aug 2018 09:05:23 -0500 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <5B745814.1040008@windriver.com> Message-ID: <1b18cce2-34f1-18b8-19b2-d8e43c32c6de@gmail.com> On 8/16/2018 4:03 PM, Kendall Nelson wrote: > > Hello :) > > On Thu, Aug 16, 2018 at 12:47 PM Jay S Bryant > wrote: > > Hey, > > Well, the attachments are one of the things holding us up along > with reduced participation in the project and a number of other > challenges.  Getting the time to prepare for the move has been > difficult. > > > I wouldn't really say we have reduced participation- we've always been > a small team. In the last year, we've actually seen more involvement > from new contributors (new and future users of sb) which has been > awesome :) We even had/have an outreachy intern that has been working > on making searching and filtering even better. > > Prioritizing when to invest time to migrate has been hard for several > projects so Cinder isn't alone, no worries :) Sorry, I wasn't clear here.  I was referencing greatly reduced participation in Cinder.  I had been hoping to get more time to dig into StoryBoard and prepare the team for migration but that has been harder given an increased need to do other work in Cinder. I have noticed that the search in StoryBoard was better so that was encouraging. > > I am planning to take some time before the PTG to look at how > Ironic has been using Storyboard and take this forward to the team > at the PTG to try and spur the process along. > > > Glad to hear it! Once I get the SB room on the schedule, you are > welcome to join the conversations there.  We would love any feedback > you have on what the 'other challenges' are that you mentioned above. Yeah, I think it would be good to have time at the PTG to get Manila, Cinder, Oslo, etc. 
together to talk about this.  This will give me incentive to do some more experimenting before the PTG.  :-) See you in Denver.  :-) > > Jay Bryant - (jungleboyj) > > > On 8/16/2018 2:22 PM, Kendall Nelson wrote: >> Hey :) >> >> Yes, I know attachments are important to a few projects. They are >> on our todo list and we plan to talk about how to implement them >> at the upcoming PTG[1]. >> >> Unfortunately, we have had other things that are taking priority >> over attachments. We would really love to migrate you all, but if >> attachments is what is really blocking you and there is no other >> workable solution, I'm more than willing to review patches if you >> want to help out to move things along a little faster :) >> >> -Kendall Nelson (diablo_rojo) >> >> [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning >> >> On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant >> > wrote: >> >> >> >> On 8/15/2018 11:43 AM, Chris Friesen wrote: >> > On 08/14/2018 10:33 AM, Tobias Urdin wrote: >> > >> >> My goal is that we will be able to swap to Storyboard >> during the >> >> Stein cycle but >> >> considering that we have a low activity on >> >> bugs my opinion is that we could do this swap very easily >> anything >> >> soon as long >> >> as everybody is in favor of it. >> >> >> >> Please let me know what you think about moving to Storyboard? >> > >> > Not a puppet dev, but am currently using Storyboard. >> > >> > One of the things we've run into is that there is no way to >> attach log >> > files for bug reports to a story. There's an open story on >> this[1] >> > but it's not assigned to anyone. >> > >> > Chris >> > >> > >> > [1] https://storyboard.openstack.org/#!/story/2003071 >> >> > >> Cinder is planning on holding on any migration, like Manila, >> until the >> file attachment issue is resolved. 
>> >> Jay
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> Thanks!
>
> - Kendall (diablo_rojo)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From colleen at gazlene.net Fri Aug 17 14:11:16 2018
From: colleen at gazlene.net (Colleen Murphy)
Date: Fri, 17 Aug 2018 16:11:16 +0200
Subject: [openstack-dev] Keystone Team Update - Week of 13 August 2018
Message-ID: <1534515076.2416393.1477447776.3D5DAFF5@webmail.messagingengine.com>

# Keystone Team Update - Week of 13 August 2018

## News

Relatively quiet week with minimal fires. Prepare for the PTG by adding topics to the etherpad[1].

[1] https://etherpad.openstack.org/p/keystone-stein-ptg

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 27 changes this week.

## Changes that need Attention

Search query: https://bit.ly/2wv7QLK

There are 46 changes that are passing CI, not in merge conflict, have no negative reviews, and aren't proposed by bots.

## Bugs

This week we opened 2 new bugs and closed 2.
Bugs opened (2) Bug #1786594 (keystone:Undecided) opened by Egor Panfilov https://bugs.launchpad.net/keystone/+bug/1786594 Bug #1787212 (keystone:Undecided) opened by tujiapeng https://bugs.launchpad.net/keystone/+bug/1787212 Bugs fixed (2) Bug #1784536 (keystone:Low) fixed by Bi wei https://bugs.launchpad.net/keystone/+bug/1784536 Bug #1785898 (ldappool:Undecided) fixed by Nick Wilburn https://bugs.launchpad.net/ldappool/+bug/1785898 ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html Next week will be the last week to release another RC if we need to. ## Shout-outs Congratulations to Nick Wilburn (orange_julius) whose first patch to OpenStack landed this week[2] which fixed a major bug in the ldappool library. Many thanks! [2] https://review.openstack.org/591174 ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From tpb at dyncloud.net Fri Aug 17 14:12:40 2018 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 17 Aug 2018 10:12:40 -0400 Subject: [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <1b18cce2-34f1-18b8-19b2-d8e43c32c6de@gmail.com> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <5B745814.1040008@windriver.com> <1b18cce2-34f1-18b8-19b2-d8e43c32c6de@gmail.com> Message-ID: <20180817141240.2uhheb6r73ldtta5@barron.net> On 17/08/18 09:05 -0500, Jay S Bryant wrote: > > >On 8/16/2018 4:03 PM, Kendall Nelson wrote: >> >>Hello :) >> >>On Thu, Aug 16, 2018 at 12:47 PM Jay S Bryant >> wrote: >> >> Hey, >> >> Well, the attachments are one of the things holding us up along >> with reduced participation in the project and a number of other >> challenges.  Getting the time to prepare for the move has been >> difficult. >> >> >>I wouldn't really say we have reduced participation- we've always >>been a small team. 
In the last year, we've actually seen more >>involvement from new contributors (new and future users of sb) which >>has been awesome :) We even had/have an outreachy intern that has >>been working on making searching and filtering even better. >> >>Prioritizing when to invest time to migrate has been hard for >>several projects so Cinder isn't alone, no worries :) >Sorry, I wasn't clear here.  I was referencing greatly reduced >participation in Cinder.  I had been hoping to get more time to dig >into StoryBoard and prepare the team for migration but that has been >harder given an increased need to do other work in Cinder. > >I have noticed that the search in StoryBoard was better so that was >encouraging. >> >> I am planning to take some time before the PTG to look at how >> Ironic has been using Storyboard and take this forward to the team >> at the PTG to try and spur the process along. >> >> >>Glad to hear it! Once I get the SB room on the schedule, you are >>welcome to join the conversations there.  We would love any feedback >>you have on what the 'other challenges' are that you mentioned >>above. >Yeah, I think it would be good to have time at the PTG to get Manila, >Cinder, Oslo, etc. together to talk about this.  This will give me >incentive to do some more experimenting before the PTG.  :-) +1 - Tom Barron (tbarron) > >See you in Denver.  :-) >> >> Jay Bryant - (jungleboyj) >> >> >> On 8/16/2018 2:22 PM, Kendall Nelson wrote: >>> Hey :) >>> >>> Yes, I know attachments are important to a few projects. They are >>> on our todo list and we plan to talk about how to implement them >>> at the upcoming PTG[1]. >>> >>> Unfortunately, we have had other things that are taking priority >>> over attachments. 
We would really love to migrate you all, but if >>> attachments is what is really blocking you and there is no other >>> workable solution, I'm more than willing to review patches if you >>> want to help out to move things along a little faster :) >>> >>> -Kendall Nelson (diablo_rojo) >>> >>> [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning >>> >>> On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant >>> > wrote: >>> >>> >>> >>> On 8/15/2018 11:43 AM, Chris Friesen wrote: >>> > On 08/14/2018 10:33 AM, Tobias Urdin wrote: >>> > >>> >> My goal is that we will be able to swap to Storyboard >>> during the >>> >> Stein cycle but >>> >> considering that we have a low activity on >>> >> bugs my opinion is that we could do this swap very easily >>> anything >>> >> soon as long >>> >> as everybody is in favor of it. >>> >> >>> >> Please let me know what you think about moving to Storyboard? >>> > >>> > Not a puppet dev, but am currently using Storyboard. >>> > >>> > One of the things we've run into is that there is no way to >>> attach log >>> > files for bug reports to a story. There's an open story on >>> this[1] >>> > but it's not assigned to anyone. >>> > >>> > Chris >>> > >>> > >>> > [1] https://storyboard.openstack.org/#!/story/2003071 >>> >>> > >>> Cinder is planning on holding on any migration, like Manila, >>> until the >>> file attachment issue is resolved. 
>>> >>> Jay
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> Thanks!
>>
>> - Kendall (diablo_rojo)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From colleen at gazlene.net Fri Aug 17 14:12:57 2018
From: colleen at gazlene.net (Colleen Murphy)
Date: Fri, 17 Aug 2018 16:12:57 +0200
Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 13 August 2018
In-Reply-To: <1534515076.2416393.1477447776.3D5DAFF5@webmail.messagingengine.com>
References: <1534515076.2416393.1477447776.3D5DAFF5@webmail.messagingengine.com>
Message-ID: <1534515177.2417084.1477449296.5821779C@webmail.messagingengine.com>

Forgot the [keystone] tag.

On Fri, Aug 17, 2018, at 4:11 PM, Colleen Murphy wrote:
> # Keystone Team Update - Week of 13 August 2018
>
> ## News
>
> Relatively quiet week with minimal fires. Prepare for the PTG by adding
> topics to the etherpad[1].
>
> [1] https://etherpad.openstack.org/p/keystone-stein-ptg
>
> ## Recently Merged Changes
>
> Search query: https://bit.ly/2IACk3F
>
> We merged 27 changes this week.
> > ## Changes that need Attention > > Search query: https://bit.ly/2wv7QLK > > There are 46 changes that are passing CI, not in merge conflict, have no > negative reviews and aren't proposed by bots. > > ## Bugs > > This week we opened 2 new bugs and closed 2. > > Bugs opened (2) > Bug #1786594 (keystone:Undecided) opened by Egor Panfilov > https://bugs.launchpad.net/keystone/+bug/1786594 > Bug #1787212 (keystone:Undecided) opened by tujiapeng > https://bugs.launchpad.net/keystone/+bug/1787212 > > Bugs fixed (2) > Bug #1784536 (keystone:Low) fixed by Bi wei > https://bugs.launchpad.net/keystone/+bug/1784536 > Bug #1785898 (ldappool:Undecided) fixed by Nick Wilburn > https://bugs.launchpad.net/ldappool/+bug/1785898 > > ## Milestone Outlook > > https://releases.openstack.org/rocky/schedule.html > > Next week will be the last week to release another RC if we need to. > > ## Shout-outs > > Congratulations to Nick Wilburn (orange_julius) whose first patch to > OpenStack landed this week[2] which fixed a major bug in the ldappool > library. Many thanks! > > [2] https://review.openstack.org/591174 > > ## Help with this newsletter > > Help contribute to this newsletter by editing the etherpad: > https://etherpad.openstack.org/p/keystone-team-newsletter > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tobias.urdin at binero.se Fri Aug 17 14:19:26 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Fri, 17 Aug 2018 16:19:26 +0200 Subject: [openstack-dev] [puppet] Puppet weekly recap - week 33 Message-ID: <1e693dfb-a6c5-102d-a828-8195af4005c3@binero.se> Hello all Puppeteers! Welcome to the weekly Puppet recap for week 33. This is a weekly overview of what has changed in the Puppet OpenStack project the past week. 
============================
CHANGES
============================

Changes in all modules
-------------------------------
* Removed PE requirement from metadata.json
* Prepared Rocky RC1

Aodh
-------
* Improved restarting Apache
   Changed so that Apache more accurately restarts services on configuration changes.

Glance
---------
* Configure access_key and secret_key as secrets
* Fixed glance_image provider
   The provider stopped working after os_algo, os_hash_value and os_hidden were introduced.

Heat
------
* Improved restarting Apache
   Changed so that Apache more accurately restarts services on configuration changes.

Horizon
----------
* Add wsgi_processes and wsgi_threads to horizon init
* apache wsgi: Exchange defaults for workers and threads
   Default values for Apache WSGI workers and threads changed.

Manila
---------
* Support manila-api deployment with Apache WSGI
  You can now deploy manila-api under Apache WSGI.

Murano
----------
* Deprecated auth_uri option

Nova
-------
* Configure access_key and secret_key as secrets

Puppet-OpenStack-Integration
-----------------------------------------
* Test bgp-dragent in scenario004
   Now has full testing for the BGP agent provided by neutron-dynamic-routing.
* Fix configure_facts.sh for RDO mirrors
* Run metadata-json-lint test in lint job
   Now runs metadata-json-lint in puppet-lint jobs if a metadata.json file exists.
* Test Sahara API with WSGI
   Sahara is now tested with the API running under Apache WSGI.

Puppet-OpenStack-Guide
----------------------------------
* Updated latest RC1 version on release page

Panko
--------
* Restart API also when run with Apache
   Correctly restart Apache on configuration changes.

Sahara
----------
* Add Sahara API WSGI support
  The Sahara API can now be deployed with Apache WSGI.

============================
SPECS
============================
None.
============================ OTHER ============================ * We have submitted a review for releasing RC1 of all modules https://review.openstack.org/#/c/592584/ * We have started to take a look at migrating to Storyboard    I have posted an email on the mailing list, please leave your feedback if you have any. Interested in knowing what's up? Want to help or get help? See our etherpad https://etherpad.openstack.org/p/puppet-openstack-rocky Or maybe you have some awesome ideas for next release? Let us know https://etherpad.openstack.org/p/puppet-openstack-stein ============================ Wishing you all a great weekend! Best regards Tobias From doug at doughellmann.com Fri Aug 17 14:30:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 17 Aug 2018 10:30:29 -0400 Subject: [openstack-dev] [goal][python3] more updates to the goal tools In-Reply-To: References: <1533682621-sup-2284@lrrr.local> Message-ID: <1534516186-sup-9875@lrrr.local> I will work on fixing this today. Has the designate team agreed to go ahead with their migration, or are you still testing the scripts? Doug Excerpts from super user's message of 2018-08-17 15:37:03 +0900: > Hi Doug, > > I'm Nguyen Hai. I proposed the python3-first patch set for > designate projects. 
> However, I have hit this error with designate and designate-dashboard:
>
> === ../Output/designate/openstack/designate @ master ===
>
> ./tools/python3-first/do_repo.sh ../Output/designate/openstack/designate master 24292
>
> ++ cat ../Output/designate/openstack/designate/.gitreview
> ++ grep project
> ++ cut -f2 -d=
> + actual=openstack/designate.git
> +++ dirname ../Output/designate/openstack/designate
> ++ basename ../Output/designate/openstack
> ++ basename ../Output/designate/openstack/designate
> + expected=openstack/designate
> + '[' openstack/designate.git '!=' openstack/designate -a openstack/designate.git '!=' openstack/designate.git ']'
> + git -C ../Output/designate/openstack/designate review -s
> Creating a git remote called 'gerrit' that maps to:
> ssh://nguyentrihai at review.openstack.org:29418/openstack/designate.git
> ++ basename master
> + new_branch=python3-first-master
> + git -C ../Output/designate/openstack/designate branch
> + grep -q python3-first-master
> + echo 'creating python3-first-master'
> creating python3-first-master
> + git -C ../Output/designate/openstack/designate checkout -- .
> + git -C ../Output/designate/openstack/designate clean -f -d > + git -C ../Output/designate/openstack/designate checkout -q origin/master > + git -C ../Output/designate/openstack/designate checkout -b > python3-first-master > Switched to a new branch 'python3-first-master' > + python3-first -v --debug jobs update > ../Output/designate/openstack/designate > determining repository name from .gitreview > working on openstack/designate @ master > looking for zuul config in > ../Output/designate/openstack/designate/.zuul.yaml > using zuul config from ../Output/designate/openstack/designate/.zuul.yaml > loading project settings from ../project-config/zuul.d/projects.yaml > loading project templates from > ../openstack-zuul-jobs/zuul.d/project-templates.yaml > loading jobs from ../openstack-zuul-jobs/zuul.d/jobs.yaml > looking for settings for openstack/designate > looking at template 'openstack-python-jobs' > looking at template 'openstack-python35-jobs' > looking at template 'publish-openstack-sphinx-docs' > looking at template 'periodic-stable-jobs' > looking at template 'check-requirements' > did not find template definition for 'check-requirements' > looking at template 'translation-jobs-master-stable' > looking at template 'release-notes-jobs' > looking at template 'api-ref-jobs' > looking at template 'install-guide-jobs' > looking at template 'release-openstack-server' > filtering on master > merging templates > adding openstack-python-jobs > adding openstack-python35-jobs > adding publish-openstack-sphinx-docs > adding periodic-stable-jobs > adding check-requirements > adding release-notes-jobs > adding install-guide-jobs > merging pipeline check > *unhashable type: 'CommentedMap'* > *Traceback (most recent call last):* > * File > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", > line 402, in run_subcommand* > * result = cmd.run(parsed_args)* > * File > 
"/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/command.py", > line 184, in run* > * return_code = self.take_action(parsed_args) or 0* > * File > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > line 531, in take_action* > * entry,* > * File > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > line 397, in merge_project_settings* > * up.get(pipeline, comments.CommentedMap()),* > * File > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > line 362, in merge_pipeline* > * if job_name in job_names:* > *TypeError: unhashable type: 'CommentedMap'* > *Traceback (most recent call last):* > * File "/home/stack/python3-first/goal-tools/.tox/venv/bin/python3-first", > line 10, in * > * sys.exit(main())* > * File > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/main.py", > line 42, in main* > * return Python3First().run(argv)* > * File > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", > line 281, in run* > * result = self.run_subcommand(remainder)* > * File > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", > line 402, in run_subcommand* > * result = cmd.run(parsed_args)* > * File > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/command.py", > line 184, in run* > * return_code = self.take_action(parsed_args) or 0* > * File > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > line 531, in take_action* > * entry,* > * File > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > line 397, in merge_project_settings* > * up.get(pipeline, comments.CommentedMap()),* > * File > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > line 362, in merge_pipeline* > * if job_name in job_names:* > *TypeError: unhashable type: 'CommentedMap'* > *+ echo 'No changes'* > *No changes* > *+ 
exit 1* > > On Wed, Aug 8, 2018 at 7:58 AM Doug Hellmann wrote: > > > Champions, > > > > I have made quite a few changes to the tools for generating the zuul > > migration patches today. If you have any patches you generated locally > > for testing, please check out the latest version of the tool (when all > > of the changes merge) and regenerate them. > > > > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From doug at doughellmann.com Fri Aug 17 14:34:01 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 17 Aug 2018 10:34:01 -0400 Subject: [openstack-dev] [neutron] Broken pep8 job In-Reply-To: References: Message-ID: <1534516265-sup-9521@lrrr.local> Excerpts from Slawomir Kaplonski's message of 2018-08-17 10:16:35 +0200: > Hi, > > It looks that pep8 job in Neutron is currently broken because of new version of bandit (1.5.0). > If You have in Your patch failure of pep8 job with error like [1] please don’t recheck as it will not help. > I did some patch which should fix it [2]. Will let You know when it will be fixed and You will be able to rebase You patches. > > [1] http://logs.openstack.org/37/382037/67/check/openstack-tox-pep8/e2bbd84/job-output.txt.gz#_2018-08-16_21_45_55_366148 > [2] https://review.openstack.org/#/c/592884/ > > — > Slawek Kaplonski > Senior software engineer > Red Hat > We had this problem in oslo.concurrency, too. Because bandit is considered to be a linter and different teams may want to use different versions, it is not managed through the constraints list (there is no co-installability requirement for linters). Some of the projects using it do not have it capped, so new releases that introduce breaking changes like this can cause gate issues. 
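A concrete illustration of the capping approach that follows: on a stable branch the fix is usually a one-line upper bound on the linter in test-requirements.txt. The bounds below are illustrative, not necessarily the exact ones any project merged:

```
# test-requirements.txt on a stable branch -- illustrative bounds
bandit>=1.1.0,<1.5.0  # cap below the bandit release that added new checks
```

Check a project's own stable branch for the bound it actually chose; linters are exempt from the co-installability constraints list, so each project picks its own.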
In the oslo.concurrency stable branch we capped the version of bandit to avoid having to backport changes just to fix the linter errors. We made code changes in master to address them and left bandit uncapped there, for now. Doug From doug at doughellmann.com Fri Aug 17 14:47:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 17 Aug 2018 10:47:23 -0400 Subject: [openstack-dev] [goal][python3] more updates to the goal tools In-Reply-To: <1534516186-sup-9875@lrrr.local> References: <1533682621-sup-2284@lrrr.local> <1534516186-sup-9875@lrrr.local> Message-ID: <1534517202-sup-3596@lrrr.local> I was not able to reproduce the problem. Please test the fix in https://review.openstack.org/#/c/593068/ to see if that helps. Which version of Python are you using to run the tools? And on which OS? Excerpts from Doug Hellmann's message of 2018-08-17 10:30:29 -0400: > I will work on fixing this today. > > Has the designate team agreed to go ahead with their migration, or > are you still testing the scripts? > > Doug > > Excerpts from super user's message of 2018-08-17 15:37:03 +0900: > > Hi Doug, > > > > I'm Nguyen Hai. I proposed the python3-first patch set for > > designate projects. 
However, I have met this error to designate and > > designate-dashboard: > > > > === ../Output/designate/openstack/designate @ master === > > > > ./tools/python3-first/do_repo.sh ../Output/designate/openstack/designate > > master 24292 > > > > ++ cat ../Output/designate/openstack/designate/.gitreview > > ++ grep project > > ++ cut -f2 -d= > > + actual=openstack/designate.git > > +++ dirname ../Output/designate/openstack/designate > > ++ basename ../Output/designate/openstack > > ++ basename ../Output/designate/openstack/designate > > + expected=openstack/designate > > + '[' openstack/designate.git '!=' openstack/designate -a > > openstack/designate.git '!=' openstack/designate.git ']' > > + git -C ../Output/designate/openstack/designate review -s > > Creating a git remote called 'gerrit' that maps to: > > ssh:// > > nguyentrihai at review.openstack.org:29418/openstack/designate.git > > ++ basename master > > + new_branch=python3-first-master > > + git -C ../Output/designate/openstack/designate branch > > + grep -q python3-first-master > > + echo 'creating python3-first-master' > > creating python3-first-master > > + git -C ../Output/designate/openstack/designate checkout -- . 
> > + git -C ../Output/designate/openstack/designate clean -f -d > > + git -C ../Output/designate/openstack/designate checkout -q origin/master > > + git -C ../Output/designate/openstack/designate checkout -b > > python3-first-master > > Switched to a new branch 'python3-first-master' > > + python3-first -v --debug jobs update > > ../Output/designate/openstack/designate > > determining repository name from .gitreview > > working on openstack/designate @ master > > looking for zuul config in > > ../Output/designate/openstack/designate/.zuul.yaml > > using zuul config from ../Output/designate/openstack/designate/.zuul.yaml > > loading project settings from ../project-config/zuul.d/projects.yaml > > loading project templates from > > ../openstack-zuul-jobs/zuul.d/project-templates.yaml > > loading jobs from ../openstack-zuul-jobs/zuul.d/jobs.yaml > > looking for settings for openstack/designate > > looking at template 'openstack-python-jobs' > > looking at template 'openstack-python35-jobs' > > looking at template 'publish-openstack-sphinx-docs' > > looking at template 'periodic-stable-jobs' > > looking at template 'check-requirements' > > did not find template definition for 'check-requirements' > > looking at template 'translation-jobs-master-stable' > > looking at template 'release-notes-jobs' > > looking at template 'api-ref-jobs' > > looking at template 'install-guide-jobs' > > looking at template 'release-openstack-server' > > filtering on master > > merging templates > > adding openstack-python-jobs > > adding openstack-python35-jobs > > adding publish-openstack-sphinx-docs > > adding periodic-stable-jobs > > adding check-requirements > > adding release-notes-jobs > > adding install-guide-jobs > > merging pipeline check > > *unhashable type: 'CommentedMap'* > > *Traceback (most recent call last):* > > * File > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", > > line 402, in run_subcommand* > > * result = 
cmd.run(parsed_args)* > > * File > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/command.py", > > line 184, in run* > > * return_code = self.take_action(parsed_args) or 0* > > * File > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > line 531, in take_action* > > * entry,* > > * File > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > line 397, in merge_project_settings* > > * up.get(pipeline, comments.CommentedMap()),* > > * File > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > line 362, in merge_pipeline* > > * if job_name in job_names:* > > *TypeError: unhashable type: 'CommentedMap'* > > *Traceback (most recent call last):* > > * File "/home/stack/python3-first/goal-tools/.tox/venv/bin/python3-first", > > line 10, in * > > * sys.exit(main())* > > * File > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/main.py", > > line 42, in main* > > * return Python3First().run(argv)* > > * File > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", > > line 281, in run* > > * result = self.run_subcommand(remainder)* > > * File > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", > > line 402, in run_subcommand* > > * result = cmd.run(parsed_args)* > > * File > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/command.py", > > line 184, in run* > > * return_code = self.take_action(parsed_args) or 0* > > * File > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > line 531, in take_action* > > * entry,* > > * File > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > line 397, in merge_project_settings* > > * up.get(pipeline, comments.CommentedMap()),* > > * File > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > line 362, in 
merge_pipeline* > > * if job_name in job_names:* > > *TypeError: unhashable type: 'CommentedMap'* > > *+ echo 'No changes'* > > *No changes* > > *+ exit 1* > > > > On Wed, Aug 8, 2018 at 7:58 AM Doug Hellmann wrote: > > > > > Champions, > > > > > > I have made quite a few changes to the tools for generating the zuul > > > migration patches today. If you have any patches you generated locally > > > for testing, please check out the latest version of the tool (when all > > > of the changes merge) and regenerate them. > > > > > > Doug > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From jistr at redhat.com Fri Aug 17 15:13:45 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Fri, 17 Aug 2018 17:13:45 +0200 Subject: [openstack-dev] [tripleo][Edge][FEMDC] Edge clouds and controlplane updates In-Reply-To: <0a519cf3-41b6-d040-3759-0c036a44f869@redhat.com> References: <41de3af6-5f7e-94e5-cfe3-a9090fb8218f@redhat.com> <0a519cf3-41b6-d040-3759-0c036a44f869@redhat.com> Message-ID: <355e8647-bae1-bf81-741d-b97995b280c3@redhat.com> On 14.8.2018 15:19, Bogdan Dobrelya wrote: > On 8/13/18 9:47 PM, Giulio Fidente wrote: >> Hello, >> >> I'd like to get some feedback regarding the remaining >> work for the split controlplane spec implementation [1] >> >> Specifically, while for some services like nova-compute it is not >> necessary to update the controlplane nodes after an edge cloud is >> deployed, for other services, like cinder (or glance, probably >> others), it is necessary to do an update of the config files on the >> controlplane when a new edge cloud is deployed. 
>> >> In fact for services like cinder or glance, which are hosted in the >> controlplane, we need to pull data from the edge clouds (for example >> the newly deployed ceph cluster keyrings and fsid) to configure cinder >> (or glance) with a new backend. >> >> It looks like this demands some architectural changes to solve the > following two: >> >> - how do we trigger/drive updates of the controlplane nodes after the >> edge cloud is deployed? > > Note, there is also a strict(?) requirement of local management > capabilities for edge clouds temporarily disconnected from the central > controlplane. That complicates the updates triggering even more. We'll > need at least a notification-and-triggering system to perform required > state synchronizations, including conflict resolution. If that's the > case, the architecture changes for the TripleO deployment framework are > inevitable AFAICT. Indeed this would complicate things a lot, but IIUC the spec [1] that Giulio referenced doesn't talk about local management at all. Within the context of what the spec covers, i.e. 1 stack for the Controller role and other stack(s) for Compute or *Storage roles, I hope we could address the updates/upgrades workflow similarly to how the deployment workflow would be addressed -- working with the stacks one by one. That would probably mean: 1. `update/upgrade prepare` on Controller stack 2. `update/upgrade prepare` on other stacks (perhaps reusing some outputs from Controller stack here) 3. `update/upgrade run` on Controller stack 4. `update/upgrade run` on other stacks 5. (`external-update/external-upgrade run` on other stacks where appropriate) 6. `update/upgrade converge` on Controller stack 7. `update/upgrade converge` on other stacks (again maybe reusing outputs from Controller stack) I'm not *sure* such an approach would work, but at the moment I don't see a reason why it wouldn't :) Jirka > >> >> - how do we scale the controlplane parameters to accommodate N >> backends of the same type?
>> >> A very rough approach to the latter could be to use jinja to scale up >> the CephClient service so that we can have multiple copies of it in the >> controlplane. >> >> Each instance of CephClient should provide the ceph config file and >> keyring necessary for each cinder (or glance) backend. >> >> Also note that Ceph is only a particular example but we'd need a similar >> workflow for any backend type. >> >> The etherpad for the PTG session [2] touches this, but it'd be good to >> start this conversation before then. >> >> 1. >> https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html >> >> 2. https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane >> > > From skaplons at redhat.com Fri Aug 17 15:19:26 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 17 Aug 2018 17:19:26 +0200 Subject: [openstack-dev] [neutron] Broken pep8 job In-Reply-To: <1534516265-sup-9521@lrrr.local> References: <1534516265-sup-9521@lrrr.local> Message-ID: <1A4F7762-D120-4EB3-8D30-AB4001DF3BA7@redhat.com> Thx, I just did similar patches for stable/rocky [1] and stable/queens [2] in Neutron repo: [1] https://review.openstack.org/#/c/593075/ [2] https://review.openstack.org/#/c/593078/ > Message written by Doug Hellmann on 17.08.2018, at 16:34: > > Excerpts from Slawomir Kaplonski's message of 2018-08-17 10:16:35 +0200: >> Hi, >> >> It looks that pep8 job in Neutron is currently broken because of new version of bandit (1.5.0). >> If You have in Your patch failure of pep8 job with error like [1] please don’t recheck as it will not help. >> I did some patch which should fix it [2]. Will let You know when it will be fixed and You will be able to rebase You patches.
>> >> [1] http://logs.openstack.org/37/382037/67/check/openstack-tox-pep8/e2bbd84/job-output.txt.gz#_2018-08-16_21_45_55_366148 >> [2] https://review.openstack.org/#/c/592884/ >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> > > We had this problem in oslo.concurrency, too. > > Because bandit is considered to be a linter and different teams may > want to use different versions, it is not managed through the > constraints list (there is no co-installability requirement for > linters). Some of the projects using it do not have it capped, so > new releases that introduce breaking changes like this can cause > gate issues. > > In the oslo.concurrency stable branch we capped the version of > bandit to avoid having to backport changes just to fix the linter > errors. We made code changes in master to address them and left > bandit uncapped there, for now. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From openstack at fried.cc Fri Aug 17 15:40:22 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 17 Aug 2018 10:40:22 -0500 Subject: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict In-Reply-To: <1534500637.29318.1@smtp.office365.com> References: <1534419109.24276.3@smtp.office365.com> <1534419803.3149.0@smtp.office365.com> <1534500637.29318.1@smtp.office365.com> Message-ID: <7b45da6c-c8d3-c54f-89c0-9798589dfdc4@fried.cc> gibi- >> - On migration, when we transfer the allocations in either direction, a >> conflict means someone managed to resize (or otherwise change >> allocations?) since the last time we pulled data. Given the global lock >> in the report client, this should have been tough to do. 
If it does >> happen, I would think any retry would need to be done all the way back >> at the claim, which I imagine is higher up than we should go. So again, >> I think we should fail the migration and make the user retry. > > Do we want to fail the whole migration or just the migration step (e.g. > confirm, revert)? > The latter means that failure during confirm or revert would put the > instance back to VERIFY_RESIZE. While the former would mean that in case > of conflict at confirm we try an automatic revert. But for a conflict at > revert we can only put the instance to ERROR state. This again should be "impossible" to come across. What would the behavior be if we hit, say, ValueError in this spot? -efried From cdent+os at anticdent.org Fri Aug 17 15:51:10 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 17 Aug 2018 16:51:10 +0100 (BST) Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? Message-ID: Earlier I posted a message about a planning etherpad for the extraction of placement http://lists.openstack.org/pipermail/openstack-dev/2018-August/133319.html https://etherpad.openstack.org/p/placement-extract-stein One of the goals of doing the planning and having the etherpad was to be able to get to the PTG with some of the issues resolved so that what little time we had at the PTG could be devoted to resolving any difficult technical details we uncovered in the lead up. One of the questions that has come up on the etherpad is about how placement should be positioned, as a project, after the extraction.
The options are: * A repo within the compute project * Its own project, either: * working towards being official and governed * official and governed from the start The etherpad has some discussion about this, but since that etherpad is primarily for listing out the technical concerns I thought it might be useful to bring the discussion out into a wider audience, in a medium more oriented towards discussion. As placement is a service targeted to serving the entire OpenStack community, talking about it widely seems warranted. The outcome I'd like to see happen is the one that makes sure placement becomes useful to the most people and is worked on by the most people, as quickly as possible. If how it is arranged as a project will impact that, now is a good time to figure that out. If you have thoughts about this, please share them in response. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From ed at leafe.com Fri Aug 17 15:59:47 2018 From: ed at leafe.com (Ed Leafe) Date: Fri, 17 Aug 2018 10:59:47 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: Message-ID: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> On Aug 17, 2018, at 10:51 AM, Chris Dent wrote: > > One of the questions that has come up on the etherpad is about how > placement should be positioned, as a project, after the extraction. > The options are: > > * A repo within the compute project > * Its own project, either: > * working towards being official and governed > * official and governed from the start I would like to hear from the Cinder and Neutron teams, especially those who were around when those compute sub-projects were split off into their own projects. Did you feel that being independent of compute helped or hindered you? And to those who are in those projects now, is there any sense that things would be better if you were still part of compute? 
My opinion has been that Placement should have been separate from the start. The longer we keep Placement inside of Nova, the more painful it will be to extract, and hence the likelihood of that ever happening is greatly diminished. -- Ed Leafe From sean.mcginnis at gmx.com Fri Aug 17 16:03:15 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 17 Aug 2018 11:03:15 -0500 Subject: [openstack-dev] [Release][PTL] cycle-with-intermediary reminder Message-ID: <20180817160314.GA24275@sm-workstation> Just reminding folks with deliverables following the cycle-with-intermediary release model that next Thursday is the final deadline to get those out. There are a handful of deliverables that have not done a release yet in Rocky. If we do not get a release request from these teams we will need to force a release so we can have a good point to create a stable/rocky branch. There are also a few that have done a release this cycle but appear to have merged more changes since then. For these deliverables, if not requested before the final deadline, we will need to force the creation of the stable/rocky branch from the last release. Finally, we have a larger list than I would like to see of tempest plugins that have not done a release. As a reminder, we need those tagged (but not branched) to have a record of which version of the plugin was part of which release cycle. This ensures the right plugin can be selected based on the version of tempest in use, so that the plugin interface stays compatible. These plugins do require some steps to get things set up before doing the release, so please keep that in mind when planning the time you will need. They need to be registered on pypi and a publish-to-pypi job added in the project-config repo before we will be able to process a release for them. Please raise any questions here or in the #openstack-release channel. Thanks!
Sean From sean.mcginnis at gmx.com Fri Aug 17 16:09:37 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 17 Aug 2018 11:09:37 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: Message-ID: <20180817160937.GB24275@sm-workstation> On Fri, Aug 17, 2018 at 04:51:10PM +0100, Chris Dent wrote: > > [snip] > > One of the questions that has come up on the etherpad is about how > placement should be positioned, as a project, after the extraction. > The options are: > > * A repo within the compute project > * Its own project, either: > * working towards being official and governed > * official and governed from the start > > [snip] > > The outcome I'd like to see happen is the one that makes sure > placement becomes useful to the most people and is worked on by the > most people, as quickly as possible. If how it is arranged as a > project will impact that, now is a good time to figure that out. > > If you have thoughts about this, please share them in response. > I do think this is important if we want placement to get wider adoption. The subject of using placement in Cinder has come up, and since then I've had a few conversations with people in and outside of that team. I really think until placement is its own project outside of the nova team, there will be resistance from some to adopt it. This reluctance on having it part of Nova may be real or just perceived, but with it within Nova it will likely be an uphill battle for some time convincing other projects that it is a nicely separated common service that they can use. Sean From sean.mcginnis at gmx.com Fri Aug 17 16:12:02 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 17 Aug 2018 11:12:02 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> References: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> Message-ID: <20180817161201.GC24275@sm-workstation> On Fri, Aug 17, 2018 at 10:59:47AM -0500, Ed Leafe wrote: > On Aug 17, 2018, at 10:51 AM, Chris Dent wrote: > > > > One of the questions that has come up on the etherpad is about how > > placement should be positioned, as a project, after the extraction. > > The options are: > > > > * A repo within the compute project > > * Its own project, either: > > * working towards being official and governed > > * official and governed from the start > > I would like to hear from the Cinder and Neutron teams, especially those who were around when those compute sub-projects were split off into their own projects. Did you feel that being independent of compute helped or hindered you? And to those who are in those projects now, is there any sense that things would be better if you were still part of compute? > I wasn't around at the beginning of the separation, but I don't think Cinder would be anything like it is today (you can decide if that's a good thing or not) if it had remained a component of Nova. > My opinion has been that Placement should have been separate from the start. The longer we keep Placement inside of Nova, the more painful it will be to extract, and hence the likelihood of that ever happening is greatly diminished. I have to agree with this statement.
> > -- Ed Leafe > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dangtrinhnt at gmail.com Fri Aug 17 16:16:21 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Sat, 18 Aug 2018 01:16:21 +0900 Subject: [openstack-dev] [Searchlight] Reaching out to the Searchlight core members for Stein Message-ID: Dear Searchlight team, As you may know, the Searchlight project has missed several milestones, especially in the Rocky cycle. The TC already has a plan to remove Searchlight from governance [1], but I have volunteered to take it over [2]. Due to the lack of response on IRC and Launchpad, I am sending this email to reach out to all the Searchlight core members to discuss our plan for Stein as well as to re-organize the team. Hopefully, this effort will work well and may bring Searchlight back to life. If anyone on the core team sees this email, please reply. My IRC is dangtrinhnt. [1] https://review.openstack.org/#/c/588644/ [2] https://review.openstack.org/#/c/590601/ Best regards, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Fri Aug 17 16:47:29 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 17 Aug 2018 11:47:29 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?
In-Reply-To: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> References: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> Message-ID: <9729c622-a39e-d2b5-2b0b-f355153d9444@gmail.com> On 8/17/2018 10:59 AM, Ed Leafe wrote: > On Aug 17, 2018, at 10:51 AM, Chris Dent wrote: >> One of the questions that has come up on the etherpad is about how >> placement should be positioned, as a project, after the extraction. >> The options are: >> >> * A repo within the compute project >> * Its own project, either: >> * working towards being official and governed >> * official and governed from the start > I would like to hear from the Cinder and Neutron teams, especially those who were around when those compute sub-projects were split off into their own projects. Did you feel that being independent of compute helped or hindered you? And to those who are in those projects now, is there any sense that things would be better if you were still part of compute? Ed, I started working with Cinder right after the split had taken place.  I have had several discussions as to how the split took place and why over the years since. In the case of Cinder we split because the pace at which things were changing in the Cinder project had exceeded what could be handled by the Nova team.  Nova has always been a busy project and the changes coming in for Nova Volume were getting lost in the larger Nova picture.  So, Nova Volume was broken out to become Cinder so that people could focus on the storage aspect of things and get change through more quickly. So, I think, for the most part that it has been something that has benefited the project.  The exception would be all the challenges that have come working cross project on changes that impact both Cinder and Nova but that has improved over time.  Given the good leadership I envision for the Placement Service I think that is less of a concern. 
For the placement service, I would expect that there will be a greater rate of change once more projects are using it.  This would also support splitting the service out. > My opinion has been that Placement should have been separate from the start. The longer we keep Placement inside of Nova, the more painful it will be to extract, and hence the likelihood of that every happening is greatly diminished. > I do agree that pulling the service out sooner than later is probably best. > -- Ed Leafe > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From melwittt at gmail.com Fri Aug 17 16:56:21 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 17 Aug 2018 09:56:21 -0700 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: Message-ID: <08136ba6-9acc-1bb8-d73b-e5f51c82c62a@gmail.com> On Fri, 17 Aug 2018 16:51:10 +0100 (BST), Chris Dent wrote: > > Earlier I posted a message about a planning etherpad for the > extraction of placement > > http://lists.openstack.org/pipermail/openstack-dev/2018-August/133319.html > https://etherpad.openstack.org/p/placement-extract-stein > > One of the goals of doing the planning and having the etherpad was > to be able to get to the PTG with some of the issues resolved so > that what little time we had at the PTG could be devoted to > resolving any difficult technical details we uncovered in the lead > up. > > One of the questions that has come up on the etherpad is about how > placement should be positioned, as a project, after the extraction. 
> The options are: > > * A repo within the compute project > * Its own project, either: > * working towards being official and governed > * official and governed from the start > > The etherpad has some discussion about this, but since that etherpad > is primarily for listing out the technical concerns I thought it > might be useful to bring the discussion out into a wider audience, > in a medium more oriented towards discussion. As placement is a > service targeted to serving the entire OpenStack community, talking > about it widely seems warranted. > > The outcome I'd like to see happen is the one that makes sure > placement becomes useful to the most people and is worked on by the > most people, as quickly as possible. If how it is arranged as a > project will impact that, now is a good time to figure that out. > > If you have thoughts about this, please share them in response. Thanks for kicking off this discussion, Chris. I'd like to see placement extracted as a repo within the compute project, as a start. My thinking is, placement was developed to solve several long-standing problems and limitations in Nova (including poor filter scheduler performance, parallel scheduling races, resource tracker issues, and shared storage accounting, just to name a few). We've seen exciting progress in finally solving a lot of these issues as we've been developing placement. But, there is still a significant amount of important work to do in Nova that depends on placement. For example, we need to integrate nested resource providers into the virt drivers in Nova to leverage it for vGPUs and NUMA modeling. We need affinity modeling in placement to properly handle affinity with multiple cells. We need shared storage accounting to properly handle disk usage for deployments on shared storage. 
As we've worked to develop placement and use it in Nova, we've found in most cases that we've had to develop the Nova side and the placement side together, at the same time, to make things work. This isn't really surprising, as with any brand new functionality, it's difficult to fulfill a use case completely without integrating things together and iterating until everything works. Given that, I'd rather see placement stay under compute so we can iterate quickly, as we still need to develop new features in placement and exercise them for the first time, in Nova. Once the major aforementioned efforts have been figured out and landed with close coordination, I think it would make more sense to look at placement being outside of the compute project. Cheers, -melanie From tpb at dyncloud.net Fri Aug 17 17:13:08 2018 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 17 Aug 2018 13:13:08 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <9729c622-a39e-d2b5-2b0b-f355153d9444@gmail.com> References: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> <9729c622-a39e-d2b5-2b0b-f355153d9444@gmail.com> Message-ID: <20180817171307.s7hvfs6avi4mvu2d@barron.net> On 17/08/18 11:47 -0500, Jay S Bryant wrote: > > >On 8/17/2018 10:59 AM, Ed Leafe wrote: >>On Aug 17, 2018, at 10:51 AM, Chris Dent wrote: >>>One of the questions that has come up on the etherpad is about how >>>placement should be positioned, as a project, after the extraction. >>>The options are: >>> >>>* A repo within the compute project >>>* Its own project, either: >>> * working towards being official and governed >>> * official and governed from the start >>I would like to hear from the Cinder and Neutron teams, especially those who were around when those compute sub-projects were split off into their own projects. Did you feel that being independent of compute helped or hindered you? 
And to those who are in those projects now, is there any sense that things would be better if you were still part of compute? >Ed, > >I started working with Cinder right after the split had taken place.  >I have had several discussions as to how the split took place and why >over the years since. > >In the case of Cinder we split because the pace at which things were >changing in the Cinder project had exceeded what could be handled by >the Nova team.  Nova has always been a busy project and the changes >coming in for Nova Volume were getting lost in the larger Nova >picture.  So, Nova Volume was broken out to become Cinder so that >people could focus on the storage aspect of things and get changes >through more quickly. > >So, I think, for the most part it has been something that has >benefited the project.  The exception would be all the challenges that >have come with working cross-project on changes that impact both Cinder and >Nova, but that has improved over time.  Given the good leadership I >envision for the Placement Service I think that is less of a concern. > >For the placement service, I would expect that there will be a greater >rate of change once more projects are using it.  This would also >support splitting the service out. >>My opinion has been that Placement should have been separate from the start. The longer we keep Placement inside of Nova, the more painful it will be to extract, and hence the likelihood of that ever happening is greatly diminished. >> >I do agree that pulling the service out sooner rather than later is probably best. Has there been a discussion on record of how use of placement by cinder would affect "standalone" cinder (or manila) initiatives where there is a desire to be able to run cinder by itself (with no-auth) or just with keystone (where OpenStack style multi-tenancy is desired)? 
Tom Barron (tbarron) >>-- Ed Leafe >> >> >> >> >> >> >>__________________________________________________________________________ >>OpenStack Development Mailing List (not for usage questions) >>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dms at danplanet.com Fri Aug 17 17:30:41 2018 From: dms at danplanet.com (Dan Smith) Date: Fri, 17 Aug 2018 10:30:41 -0700 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <20180817160937.GB24275@sm-workstation> (Sean McGinnis's message of "Fri, 17 Aug 2018 11:09:37 -0500") References: <20180817160937.GB24275@sm-workstation> Message-ID: > The subject of using placement in Cinder has come up, and since then I've had a > few conversations with people in and outside of that team. I really think until > placement is its own project outside of the nova team, there will be resistance > from some to adopt it. I know politics will be involved in this, but this is a really terrible reason to do a thing, IMHO. After the most recent meeting we had with the Cinder people on placement adoption, I'm about as convinced as ever that Cinder won't (and won't need to) _consume_ placement any time soon. I hope it will _report_ to placement so Nova can make better decisions, just like Neutron does now, but I think that's the extent we're likely to see if we're honest. What other projects are _likely_ to _consume_ placement even if they don't know they'd want to? What projects already want to use it but refuse to because it has Nova smeared all over it? 
We talked about this a lot in the early justification for placement, but the demand for that hasn't really materialized, IMHO; maybe it's just me. > This reluctance on having it part of Nova may be real or just perceived, but > with it within Nova it will likely be an uphill battle for some time convincing > other projects that it is a nicely separated common service that they can use. Splitting it out to another repository within the compute umbrella (what do we call it these days?) satisfies the _technical_ concern of not being able to use placement without installing the rest of the nova code and dependency tree. Artificially creating more "perceived" distance sounds really political to me, so let's be sure we're upfront about the reasoning for doing that if so :) --Dan From ed at leafe.com Fri Aug 17 17:47:10 2018 From: ed at leafe.com (Ed Leafe) Date: Fri, 17 Aug 2018 12:47:10 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> Message-ID: On Aug 17, 2018, at 12:30 PM, Dan Smith wrote: > > Splitting it out to another repository within the compute umbrella (what > do we call it these days?) satisfies the _technical_ concern of not > being able to use placement without installing the rest of the nova code > and dependency tree. Artificially creating more "perceived" distance > sounds really political to me, so let's be sure we're upfront about the > reasoning for doing that if so :) Characterizing the proposed separation as “artificial” seems to be quite political in itself. Of course there are political factors; it would be naive to think otherwise. That’s why I’d like to get input from those people who are not in the middle of it, and have no political motivation. I’d like this to be a technical discussion, with as little political overtones as possible. 
-- Ed Leafe From samueldmq at gmail.com Fri Aug 17 17:56:56 2018 From: samueldmq at gmail.com (Samuel de Medeiros Queiroz) Date: Fri, 17 Aug 2018 14:56:56 -0300 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Hi all, As someone who cares for this cause and participated twice in this program as a mentor, I'd like to put myself forward as a candidate for program coordinator. Victoria, thanks for all your lovely work. You are awesome! Best regards, Samuel On Thu, Aug 9, 2018 at 6:51 PM Kendall Nelson wrote: > You have done such amazing things with the program! We appreciate > everything you do :) Enjoy the little extra spare time. > > -Kendall (diablo_rojo) > > > On Tue, Aug 7, 2018 at 4:48 PM Victoria Martínez de la Cruz < > victoria at vmartinezdelacruz.com> wrote: > >> Hi all, >> >> I'm reaching out to let you know that I'll be stepping down as >> coordinator for OpenStack next round. I have been contributing to this >> effort for several rounds now and I believe it is a good moment for somebody >> else to take the lead. You all know how important Outreachy is to me and >> I'm grateful for all the amazing things I've done as part of the Outreachy >> program and all the great people I've met along the way. I plan to stay >> involved with the internships but leave the coordination tasks to somebody >> else. >> >> If you are interested in becoming an Outreachy coordinator, let me know >> and I can share my experience and provide some guidance. 
>> >> Thanks, >> >> Victoria >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Aug 17 18:10:36 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 17 Aug 2018 14:10:36 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> Message-ID: <1534528706-sup-7156@lrrr.local> Excerpts from Dan Smith's message of 2018-08-17 10:30:41 -0700: > > The subject of using placement in Cinder has come up, and since then I've had a > > few conversations with people in and outside of that team. I really think until > > placement is its own project outside of the nova team, there will be resistance > > from some to adopt it. > > I know politics will be involved in this, but this is a really terrible > reason to do a thing, IMHO. After the most recent meeting we had with > the Cinder people on placement adoption, I'm about as convinced as ever > that Cinder won't (and won't need to) _consume_ placement any time > soon. I hope it will _report_ to placement so Nova can make better > decisions, just like Neutron does now, but I think that's the extent > we're likely to see if we're honest. > > What other projects are _likely_ to _consume_ placement even if they > don't know they'd want to? 
What projects already want to use it but > refuse to because it has Nova smeared all over it? We talked about this > a lot in the early justification for placement, but the demand for that > hasn't really materialized, IMHO; maybe it's just me. > > > This reluctance on having it part of Nova may be real or just perceived, but > > with it within Nova it will likely be an uphill battle for some time convincing > > other projects that it is a nicely separated common service that they can use. > > Splitting it out to another repository within the compute umbrella (what > do we call it these days?) satisfies the _technical_ concern of not > being able to use placement without installing the rest of the nova code > and dependency tree. Artificially creating more "perceived" distance > sounds really political to me, so let's be sure we're upfront about the > reasoning for doing that if so :) > > --Dan > If we ignore the political concerns in the short term, are there other projects actually interested in using placement? With what technical caveats? Perhaps with modifications of some sort to support the needs of those projects? Doug From sean.mcginnis at gmx.com Fri Aug 17 18:34:26 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 17 Aug 2018 13:34:26 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <20180817171307.s7hvfs6avi4mvu2d@barron.net> References: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> <9729c622-a39e-d2b5-2b0b-f355153d9444@gmail.com> <20180817171307.s7hvfs6avi4mvu2d@barron.net> Message-ID: <20180817183426.GA30053@sm-workstation> > > Has there been a discussion on record of how use of placement by cinder > would affect "standalone" cinder (or manila) initiatives where there is a > desire to be able to run cinder by itself (with no-auth) or just with > keystone (where OpenStack style multi-tenancy is desired)? > > Tom Barron (tbarron) > A little bit. 
That would be one of the pieces that needs to be done if we were to adopt it. Just high level brainstorming, but I think we would need something like we have now with using tooz where if it is configured for it, it will use etcd for distributed locking. And for single node installs it just defaults to file locks. From sean.mcginnis at gmx.com Fri Aug 17 18:37:41 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 17 Aug 2018 13:37:41 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> Message-ID: <20180817183741.GB30053@sm-workstation> On Fri, Aug 17, 2018 at 12:47:10PM -0500, Ed Leafe wrote: > On Aug 17, 2018, at 12:30 PM, Dan Smith wrote: > > > > Splitting it out to another repository within the compute umbrella (what > > do we call it these days?) satisfies the _technical_ concern of not > > being able to use placement without installing the rest of the nova code > > and dependency tree. Artificially creating more "perceived" distance > > sounds really political to me, so let's be sure we're upfront about the > > reasoning for doing that if so :) > > Characterizing the proposed separation as “artificial” seems to be quite political in itself. > Other than currently having a common set of interested people, is there something about placement that makes it something that should be under the compute umbrella? From melwittt at gmail.com Fri Aug 17 18:52:59 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 17 Aug 2018 11:52:59 -0700 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: <20180817183741.GB30053@sm-workstation> References: <20180817160937.GB24275@sm-workstation> <20180817183741.GB30053@sm-workstation> Message-ID: On Fri, 17 Aug 2018 13:37:41 -0500, Sean Mcginnis wrote: > On Fri, Aug 17, 2018 at 12:47:10PM -0500, Ed Leafe wrote: >> On Aug 17, 2018, at 12:30 PM, Dan Smith wrote: >>> >>> Splitting it out to another repository within the compute umbrella (what >>> do we call it these days?) satisfies the _technical_ concern of not >>> being able to use placement without installing the rest of the nova code >>> and dependency tree. Artificially creating more "perceived" distance >>> sounds really political to me, so let's be sure we're upfront about the >>> reasoning for doing that if so :) >> >> Characterizing the proposed separation as “artificial” seems to be quite political in itself. >> > > Other than currently having a common set of interested people, is there > something about placement that makes it something that should be under the > compute umbrella? I explained why I think placement belongs under the compute umbrella for now in my reply [1]. My reply might have been missed in the shuffle. -melanie [1] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133452.html From jungleboyj at gmail.com Fri Aug 17 19:09:34 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 17 Aug 2018 14:09:34 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: <20180817183426.GA30053@sm-workstation> References: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> <9729c622-a39e-d2b5-2b0b-f355153d9444@gmail.com> <20180817171307.s7hvfs6avi4mvu2d@barron.net> <20180817183426.GA30053@sm-workstation> Message-ID: <522145da-0a5b-6a7e-47ca-cdb3ef5d263c@gmail.com> On 8/17/2018 1:34 PM, Sean McGinnis wrote: >> Has there been a discussion on record of how use of placement by cinder >> would affect "standalone" cinder (or manila) initiatives where there is a >> desire to be able to run cinder by itself (with no-auth) or just with >> keystone (where OpenStack style multi-tenancy is desired)? >> >> Tom Barron (tbarron) >> > A little bit. That would be one of the pieces that needs to be done if we were > to adopt it. > > Just high level brainstorming, but I think we would need something like we have > now with using tooz where if it is configured for it, it will use etcd for > distributed locking. And for single node installs it just defaults to file > locks. > Sean and Tom, That brief discussion was in Vancouver: https://etherpad.openstack.org/p/YVR-cinder-placement But as Sean indicated I think the long story short was that we would make it so that we could use the placement service if it was available but would leave the existing functionality in the case it wasn't there. Jay From tpb at dyncloud.net Fri Aug 17 19:14:04 2018 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 17 Aug 2018 15:14:04 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: <20180817183426.GA30053@sm-workstation> References: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> <9729c622-a39e-d2b5-2b0b-f355153d9444@gmail.com> <20180817171307.s7hvfs6avi4mvu2d@barron.net> <20180817183426.GA30053@sm-workstation> Message-ID: <20180817191404.agw5bwxe5qtqvekh@barron.net> On 17/08/18 13:34 -0500, Sean McGinnis wrote: >> >> Has there been a discussion on record of how use of placement by cinder >> would affect "standalone" cinder (or manila) initiatives where there is a >> desire to be able to run cinder by itself (with no-auth) or just with >> keystone (where OpenStack style multi-tenancy is desired)? >> >> Tom Barron (tbarron) >> > >A little bit. That would be one of the pieces that needs to be done if we were >to adopt it. > >Just high level brainstorming, but I think we would need something like we have >now with using tooz where if it is configured for it, it will use etcd for >distributed locking. And for single node installs it just defaults to file >locks. So I want to understand better what problems placement would solve and whether those problems need to be solved even in the cinder/manila standalone case. And if they do have to be solved in both cases, why not use the same solution for both cases? That *might* mean running the placement service even in the standalone case if it's sufficiently lightweight and can be run without the rest of nova. (Whether it's "under" nova umbrella doesn't matter for this decoupling - nothing I'm saying here is intended to argue against e.g. Mel's or Dan's points in this thread.) 
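For concreteness, the kind of inventory report being discussed here is tiny. Below is a sketch of the JSON body a storage service could PUT to placement's /resource_providers/{uuid}/inventories endpoint; the payload shape follows the placement API, but the helper name and all of the numbers are invented for illustration:

```python
# Sketch: build the body a storage service could PUT to
# /resource_providers/{uuid}/inventories to report a pool's capacity.
# The payload shape follows the placement API; the numbers are invented.

def build_disk_inventory(generation, total_gb, reserved_gb=0,
                         allocation_ratio=1.0):
    """Return an inventories payload for a single DISK_GB resource class.

    `generation` is the resource provider generation placement returned
    on the last read; placement rejects the PUT with a 409 if the
    provider has moved on, which is how concurrent updaters are
    serialized.
    """
    return {
        "resource_provider_generation": generation,
        "inventories": {
            "DISK_GB": {
                "total": total_gb,
                "reserved": reserved_gb,
                "min_unit": 1,
                "max_unit": total_gb,
                "step_size": 1,
                "allocation_ratio": allocation_ratio,
            },
        },
    }

payload = build_disk_inventory(generation=3, total_gb=1024, reserved_gb=64)
```

Standalone or not, the heavy part is the HTTP round trips and the retry-on-conflict handling, not the payload itself, which is part of why running placement by itself seems plausible.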
-- Tom Barron (tbarron) > > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jungleboyj at gmail.com Fri Aug 17 19:14:49 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 17 Aug 2018 14:14:49 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> Message-ID: <5841b16e-e790-c151-7c7a-5f41ecaeb149@gmail.com> On 8/17/2018 12:30 PM, Dan Smith wrote: >> The subject of using placement in Cinder has come up, and since then I've had a >> few conversations with people in and outside of that team. I really think until >> placement is its own project outside of the nova team, there will be resistance >> from some to adopt it. > I know politics will be involved in this, but this is a really terrible > reason to do a thing, IMHO. After the most recent meeting we had with > the Cinder people on placement adoption, I'm about as convinced as ever > that Cinder won't (and won't need to) _consume_ placement any time > soon. I hope it will _report_ to placement so Nova can make better > decisions, just like Neutron does now, but I think that's the extent > we're likely to see if we're honest. Dan, I don't know of any reason we wouldn't want to report to placement. Just a matter of getting a person to implement it. Also, from a consumption standpoint we really only have one or two people who are opposed at this point.  We have time scheduled at the PTG to discuss this further.  The discussions in Vancouver seemed to be tilting toward the fact that it might solve other technical issues we have been having from an Active/Active HA configuration standpoint.  Just need to get the right people in the room to talk about it. 
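To make the tooz fallback Sean described a bit more concrete, here is a rough sketch. The config attribute names are invented (real code would go through oslo.config), and the backend URL schemes are the ones used by tooz's etcd3 and file drivers:

```python
# Sketch of the fallback Sean describes: pick a tooz coordination
# backend from configuration, defaulting to file locks for single-node
# installs. The config attribute names here are invented.

def coordination_backend_url(conf):
    """Return a tooz backend URL: etcd if configured, else file locks."""
    if getattr(conf, "etcd_endpoint", None):
        # e.g. "etcd3+http://controller:2379" for distributed locking
        return "etcd3+http://%s" % conf.etcd_endpoint
    # Single-node default: plain file locks under the state directory.
    return "file://%s/locks" % conf.state_path

# A deployment would then hand this URL to tooz, roughly:
#   coordinator = tooz.coordination.get_coordinator(url, member_id)
#   coordinator.start()
#   with coordinator.get_lock(b"volume-123"): ...

class _Conf:  # stand-in for oslo.config, for illustration only
    etcd_endpoint = None
    state_path = "/var/lib/cinder"

single = coordination_backend_url(_Conf())

_Conf.etcd_endpoint = "controller:2379"
clustered = coordination_backend_url(_Conf())
```

The nice property of routing everything through one backend URL is that the rest of the code never has to know whether it is running single-node or clustered.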
Jay > What other projects are _likely_ to _consume_ placement even if they > don't know they'd want to? What projects already want to use it but > refuse to because it has Nova smeared all over it? We talked about this > a lot in the early justification for placement, but the demand for that > hasn't really materialized, IMHO; maybe it's just me. > >> This reluctance on having it part of Nova may be real or just perceived, but >> with it within Nova it will likely be an uphill battle for some time convincing >> other projects that it is a nicely separated common service that they can use. > Splitting it out to another repository within the compute umbrella (what > do we call it these days?) satisfies the _technical_ concern of not > being able to use placement without installing the rest of the nova code > and dependency tree. Artificially creating more "perceived" distance > sounds really political to me, so let's be sure we're upfront about the > reasoning for doing that if so :) > > --Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tpb at dyncloud.net Fri Aug 17 19:21:17 2018 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 17 Aug 2018 15:21:17 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: <522145da-0a5b-6a7e-47ca-cdb3ef5d263c@gmail.com> References: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> <9729c622-a39e-d2b5-2b0b-f355153d9444@gmail.com> <20180817171307.s7hvfs6avi4mvu2d@barron.net> <20180817183426.GA30053@sm-workstation> <522145da-0a5b-6a7e-47ca-cdb3ef5d263c@gmail.com> Message-ID: <20180817192117.vrc3t4la3ypf77nb@barron.net> On 17/08/18 14:09 -0500, Jay S Bryant wrote: > > >On 8/17/2018 1:34 PM, Sean McGinnis wrote: >>>Has there been a discussion on record of how use of placement by cinder >>>would affect "standalone" cinder (or manila) initiatives where there is a >>>desire to be able to run cinder by itself (with no-auth) or just with >>>keystone (where OpenStack style multi-tenancy is desired)? >>> >>>Tom Barron (tbarron) >>> >>A little bit. That would be one of the pieces that needs to be done if we were >>to adopt it. >> >>Just high level brainstorming, but I think we would need something like we have >>now with using tooz where if it is configured for it, it will use etcd for >>distributed locking. And for single node installs it just defaults to file >>locks. >> >Sean and Tom, > >That brief discussion was in Vancouver: >https://etherpad.openstack.org/p/YVR-cinder-placement Thanks, Jay. > >But as Sean indicated I think the long story short was that we would >make it so that we could use the placement service if it was available >but would leave the existing functionality in the case it wasn't >there. I think that even standalone, if I'm running a scheduler (i.e., not the cinderlib version of standalone) then I'm likely to want to run them active-active on multiple nodes and will need a solution for the current races. So even standalone we face the question of whether we use placement to solve that issue or introduce some coordination among the schedulers themselves to solve it. 
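For anyone following along, the placement answer to that race is generation-based optimistic claims rather than locks. A toy model of the mechanism (written from scratch for illustration, not placement's actual code) looks like this:

```python
# Toy model of generation-based optimistic claims, the mechanism
# placement uses to let multiple schedulers place against shared
# capacity without locking. Not real placement code.

class Pool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.used_gb = 0
        self.generation = 0

    def claim(self, size_gb, generation):
        """Claim space; fail if the caller's view of the pool is stale."""
        if generation != self.generation:
            return False                # conflict: someone claimed first
        if self.used_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.used_gb += size_gb
        self.generation += 1            # invalidate other schedulers' views
        return True

def schedule(pool, size_gb, max_retries=3):
    """What each active-active scheduler does: read, claim, retry."""
    for _ in range(max_retries):
        seen = pool.generation          # read current state
        if pool.claim(size_gb, seen):   # attempt the claim against it
            return True
    return False

pool = Pool(capacity_gb=100)
ok_a = schedule(pool, 40)               # scheduler A claims successfully
stale = pool.claim(40, generation=0)    # scheduler B read before A landed:
                                        # rejected, must re-read and retry
ok_b = schedule(pool, 40)               # retry with the fresh generation
```

Coordination among the schedulers themselves would instead mean a distributed lock around the claim, which is the tooz option raised earlier in the thread; the trade-off is lock traffic versus retries under contention.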
-- Tom Barron (tbarron) > >Jay > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From victoria at vmartinezdelacruz.com Fri Aug 17 20:07:00 2018 From: victoria at vmartinezdelacruz.com (Victoria Martínez de la Cruz) Date: Fri, 17 Aug 2018 17:07:00 -0300 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Thanks everyone for your words! I really love the OpenStack community and I'm glad I could contribute back with this. Samuel has been a great mentor for Outreachy in several rounds and I believe he will excel as coordinator along with Mahati. Thanks for volunteering for this, Samuel! All the best, Victoria 2018-08-17 14:56 GMT-03:00 Samuel de Medeiros Queiroz : > Hi all, > > As someone who cares for this cause and participated twice in this program > as a mentor, I'd like to candidate as program coordinator. > > Victoria, thanks for all your lovely work. You are awesome! > > Best regards, > Samuel > > > On Thu, Aug 9, 2018 at 6:51 PM Kendall Nelson > wrote: > >> You have done such amazing things with the program! We appreciate >> everything you do :) Enjoy the little extra spare time. >> >> -Kendall (daiblo_rojo) >> >> >> On Tue, Aug 7, 2018 at 4:48 PM Victoria Martínez de la Cruz < >> victoria at vmartinezdelacruz.com> wrote: >> >>> Hi all, >>> >>> I'm reaching you out to let you know that I'll be stepping down as >>> coordinator for OpenStack next round. I had been contributing to this >>> effort for several rounds now and I believe is a good moment for somebody >>> else to take the lead. 
You all know how important is Outreachy to me and >>> I'm grateful for all the amazing things I've done as part of the Outreachy >>> program and all the great people I've met in the way. I plan to keep >>> involved with the internships but leave the coordination tasks to somebody >>> else. >>> >>> If you are interested in becoming an Outreachy coordinator, let me know >>> and I can share my experience and provide some guidance. >>> >>> Thanks, >>> >>> Victoria >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kennelson11 at gmail.com Fri Aug 17 21:02:32 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 17 Aug 2018 14:02:32 -0700 Subject: [openstack-dev] [Openstack-operators] [puppet] migrating to storyboard In-Reply-To: <7a5ea840-687b-449a-75e0-d5fb9268e46a@binero.se> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <01cc050e-c74b-a133-4020-6e0f219b7158@binero.se> <7a5ea840-687b-449a-75e0-d5fb9268e46a@binero.se> Message-ID: On Fri, Aug 17, 2018 at 12:15 AM Tobias Urdin wrote: > Hello Kendall, > > I went through the list of projects [1] and could only really see two > things. > > 1) puppet-rally and puppet-openstack-guide are missing > > I had created the projects, but missed adding them to the group. They should be there now :) > 2) We have some support projects which don't really need bug tracking, > while some others do. > You can remove puppet-openstack-specs and > puppet-openstack-cookiecutter; all the others would be > nice to keep so we can track bugs. [2] > > I can remove them from the group if you want, but I don't think I can delete the projects entirely. > Best regards > Tobias > > [1] https://storyboard-dev.openstack.org/#!/project_group/60 > [2] Keeping puppet-openstack-integration (integration testing) and > puppet-openstack_spec_helper (helper for testing). > These two usually have a lot of changes, so it would be good to be able > to track them. > > > On 08/16/2018 09:40 PM, Kendall Nelson wrote: > > Hey :) > > I created all the puppet openstack repos in the storyboard-dev environment > and made a project group[1]. I am struggling a bit with finding all of your > launchpad projects to perform the migrations through; can you share a list > of all of them? > > -Kendall (diablo_rojo) > > [1] https://storyboard-dev.openstack.org/#!/project_group/60 > > On Wed, Aug 15, 2018 at 12:08 AM Tobias Urdin > wrote: > >> Hello Kendall, >> >> Thanks for your reply, that sounds awesome! 
>> We can then dig around and see how everything looks when all project bugs >> are imported to stories. >> >> I see no issues with being able to move to Storyboard anytime soon if the >> feedback for >> moving is positive. >> >> Best regards >> >> Tobias >> >> >> On 08/14/2018 09:06 PM, Kendall Nelson wrote: >> >> Hello! >> >> The error you hit can be resolved by adding launchpadlib to your tox.ini >> if I recall correctly. >> >> Also, if you'd like, I can run a test migration of puppet's launchpad >> projects into our storyboard-dev db (where I've done a ton of other test >> migrations) if you want to see how it looks/works with a larger db. Just >> let me know and I can kick it off. >> >> As for a time to migrate, if you all are good with it, we usually >> schedule for Fridays so there is even less activity. It's a small project >> config change and then we just need an infra core to kick off the script >> once the change merges. >> >> -Kendall (diablo_rojo) >> >> On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin >> wrote: >> >>> Hello all incredible Puppeters, >>> >>> I've tested setting up a Storyboard instance and test migrated >>> puppet-ceph, and it went without any issues using the documentation >>> [1] [2], >>> with just one minor issue during the SB setup [3]. >>> >>> My goal is that we will be able to swap to Storyboard during the Stein >>> cycle, but considering that we have low activity on >>> bugs, my opinion is that we could do this swap very easily any time soon >>> as long as everybody is in favor of it. >>> >>> Please let me know what you think about moving to Storyboard? >>> If everybody is in favor of it we can request a migration to infra >>> according to documentation [2]. 
>>> >>> I will continue to test the import of all our project while people are >>> collecting their thoughts and feedback :) >>> >>> Best regards >>> Tobias >>> >>> [1] https://docs.openstack.org/infra/storyboard/install/development.html >>> [2] https://docs.openstack.org/infra/storyboard/migration.html >>> [3] It failed with an error about launchpadlib not being installed, >>> solved with `tox -e venv pip install launchpadlib` >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev If that's all good now, I can kick off test migrations but having a complete list of the launchpad projects you maintain and use would be super helpful so I don't miss any. Is there somewhere this is documented? Or can you send me a list? -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aschultz at redhat.com Fri Aug 17 23:18:17 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 17 Aug 2018 17:18:17 -0600 Subject: [openstack-dev] [tripleo] fedora28 python3 test environment Message-ID: Ahoy folks, In order to get to a spot where we can start evaluating the current status of TripleO under python3, I've thrown together a set of ansible playbooks[0] to launch a fedora28 node and build the required python-tripleoclient (and dependencies). These playbooks spawn a VM on an OpenStack cloud, run through the steps from the RDO etherpad[1] for using the fedora stabilized repo, and build all the currently outstanding python3 package builds[2] for python-tripleoclient & company. Once the playbook has completed it should be at a spot to 'dnf install python3-tripleoclient'. I believe from here we can focus on getting the undercloud[3] and standalone[4] processes working correctly under python3. I think initially we should use the existing CentOS7 containers we build under the existing processes to see if we can't get the services deployed as we work on building out all the required python3 packaging. Thanks, -Alex [0] https://github.com/mwhahaha/tripleo-f28-testbed [1] https://review.rdoproject.org/etherpad/p/use-fedora-stabilized [2] https://review.rdoproject.org/r/#/q/status:open+owner:%22Alex+Schultz+%253Caschultz%2540next-development.com%253E%22+topic:python3 [3] https://docs.openstack.org/tripleo-docs/latest/install/installation/installation.html [4] https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/standalone.html From rodrigodsousa at gmail.com Sat Aug 18 00:39:36 2018 From: rodrigodsousa at gmail.com (Rodrigo Duarte) Date: Fri, 17 Aug 2018 17:39:36 -0700 Subject: [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Thanks for everything, Victoria. 
On Fri, Aug 17, 2018 at 1:07 PM Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com> wrote: > Thanks everyone for your words! > > I really love the OpenStack community and I'm glad I could contribute back > with this. > > Samuel has been a great mentor for Outreachy in several rounds and I > believe he will excel as coordinator along with Mahati. Thanks for > volunteer for this Samuel! > > All the best, > > Victoria > > 2018-08-17 14:56 GMT-03:00 Samuel de Medeiros Queiroz >: > >> Hi all, >> >> As someone who cares for this cause and participated twice in this >> program as a mentor, I'd like to candidate as program coordinator. >> >> Victoria, thanks for all your lovely work. You are awesome! >> >> Best regards, >> Samuel >> >> >> On Thu, Aug 9, 2018 at 6:51 PM Kendall Nelson >> wrote: >> >>> You have done such amazing things with the program! We appreciate >>> everything you do :) Enjoy the little extra spare time. >>> >>> -Kendall (daiblo_rojo) >>> >>> >>> On Tue, Aug 7, 2018 at 4:48 PM Victoria Martínez de la Cruz < >>> victoria at vmartinezdelacruz.com> wrote: >>> >>>> Hi all, >>>> >>>> I'm reaching you out to let you know that I'll be stepping down as >>>> coordinator for OpenStack next round. I had been contributing to this >>>> effort for several rounds now and I believe is a good moment for somebody >>>> else to take the lead. You all know how important is Outreachy to me and >>>> I'm grateful for all the amazing things I've done as part of the Outreachy >>>> program and all the great people I've met in the way. I plan to keep >>>> involved with the internships but leave the coordination tasks to somebody >>>> else. >>>> >>>> If you are interested in becoming an Outreachy coordinator, let me know >>>> and I can share my experience and provide some guidance. 
>>>> >>>> Thanks, >>>> >>>> Victoria >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Rodrigo http://rodrigods.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From najoy at cisco.com Sat Aug 18 01:28:26 2018 From: najoy at cisco.com (Naveen Joy (najoy)) Date: Sat, 18 Aug 2018 01:28:26 +0000 Subject: [openstack-dev] networking-vpp 18.07 for VPP 18.07 is now available Message-ID: Hello Everyone, In conjunction with the release of VPP 18.07, we'd like to invite you all to try out networking-vpp 18.07 for VPP 18.07. As many of you may already know, VPP is a fast userspace forwarder based on the DPDK toolkit, and uses vector packet processing algorithms to minimize the CPU time spent on each packet to maximize throughput. 
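If you would like to try it straight away, the whole stack can be brought up with devstack; a minimal local.conf along the following lines should be enough. Treat this as a rough sketch: the plugin line uses the standard devstack enable_plugin form, and the mechanism-driver variable name is my assumption -- the README linked at [1] below is the authoritative guide.

```ini
# Hypothetical minimal devstack local.conf sketch for networking-vpp.
# Variable names are assumptions; see the README [1] for the real settings.
[[local|localrc]]
enable_plugin networking-vpp https://git.openstack.org/openstack/networking-vpp
Q_ML2_PLUGIN_MECHANISM_DRIVERS=vpp
```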
Networking-vpp is an ML2 mechanism driver that controls VPP on your control and compute hosts to provide fast L2 forwarding under Neutron. This version has the below additional enhancements, along with supporting the latest VPP 18.07 APIs: - Network Trunking - Tap-as-a-Service (TaaS) Both of the above features are experimental in this release. Along with this, there has been the usual upkeep as Neutron versions and VPP APIs change: bug fixes, code and test improvements. The README [1] explains more about the above features and how you can try out VPP using devstack: the devstack plugin will deploy the mechanism driver and VPP itself and should give you a working system with a minimum of hassle. We will be continuing our development between now and VPP's 18.10 release. There are several features we're planning to work on and we will keep you updated through our bugs list [2]. We welcome anyone who would like to come help us. Everyone is welcome to join our biweekly IRC meetings, every other Monday (the next one is due this Monday at 0900 PST = 1600 GMT). -- Ian & Naveen [1]https://github.com/openstack/networking-vpp/blob/master/README.rst [2]http://goo.gl/i3TzAt -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Sat Aug 18 01:50:00 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Sat, 18 Aug 2018 10:50:00 +0900 Subject: [openstack-dev] [Freezer] Reactivate the team Message-ID: Dear Freezer team, Since we have appointed a new PTL for the Stein cycle (gengchc2), I suggest that we reactivate the team with these actions: 1. Have a team meeting to formalize the new leader as well as discuss the new direction. 2. Grant PTL privileges for gengchc2 on Launchpad and Project Gerrit repositories. 3. Reorganize the core team to make sure we have enough active core reviewers for new patches. 4. Clean up bug reports and blueprints on Launchpad, as well as unreviewed patches on Gerrit.
I hope that we can revive Freezer. Best regards, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From superuser151093 at gmail.com Sat Aug 18 02:15:20 2018 From: superuser151093 at gmail.com (super user) Date: Sat, 18 Aug 2018 11:15:20 +0900 Subject: [openstack-dev] [goal][python3] more updates to the goal tools In-Reply-To: <1534517202-sup-3596@lrrr.local> References: <1533682621-sup-2284@lrrr.local> <1534516186-sup-9875@lrrr.local> <1534517202-sup-3596@lrrr.local> Message-ID: The problem was fixed. Nguyen Hai On Fri, Aug 17, 2018 at 11:47 PM Doug Hellmann wrote: > I was not able to reproduce the problem. Please test the fix in > https://review.openstack.org/#/c/593068/ to see if that helps. > > Which version of Python are you using to run the tools? And on which OS? > > Excerpts from Doug Hellmann's message of 2018-08-17 10:30:29 -0400: > > I will work on fixing this today. > > > > Has the designate team agreed to go ahead with their migration, or > > are you still testing the scripts? > > > > Doug > > > > Excerpts from super user's message of 2018-08-17 15:37:03 +0900: > > > Hi Doug, > > > > > > I'm Nguyen Hai. I proposed the python3-first patch set for > > > designate projects. 
However, I have met this error to designate and > > > designate-dashboard: > > > > > > === ../Output/designate/openstack/designate @ master === > > > > > > ./tools/python3-first/do_repo.sh > ../Output/designate/openstack/designate > > > master 24292 > > > > > > ++ cat ../Output/designate/openstack/designate/.gitreview > > > ++ grep project > > > ++ cut -f2 -d= > > > + actual=openstack/designate.git > > > +++ dirname ../Output/designate/openstack/designate > > > ++ basename ../Output/designate/openstack > > > ++ basename ../Output/designate/openstack/designate > > > + expected=openstack/designate > > > + '[' openstack/designate.git '!=' openstack/designate -a > > > openstack/designate.git '!=' openstack/designate.git ']' > > > + git -C ../Output/designate/openstack/designate review -s > > > Creating a git remote called 'gerrit' that maps to: > > > ssh:// > > > nguyentrihai at review.openstack.org:29418/openstack/designate.git > > > ++ basename master > > > + new_branch=python3-first-master > > > + git -C ../Output/designate/openstack/designate branch > > > + grep -q python3-first-master > > > + echo 'creating python3-first-master' > > > creating python3-first-master > > > + git -C ../Output/designate/openstack/designate checkout -- . 
> > > + git -C ../Output/designate/openstack/designate clean -f -d > > > + git -C ../Output/designate/openstack/designate checkout -q > origin/master > > > + git -C ../Output/designate/openstack/designate checkout -b > > > python3-first-master > > > Switched to a new branch 'python3-first-master' > > > + python3-first -v --debug jobs update > > > ../Output/designate/openstack/designate > > > determining repository name from .gitreview > > > working on openstack/designate @ master > > > looking for zuul config in > > > ../Output/designate/openstack/designate/.zuul.yaml > > > using zuul config from > ../Output/designate/openstack/designate/.zuul.yaml > > > loading project settings from ../project-config/zuul.d/projects.yaml > > > loading project templates from > > > ../openstack-zuul-jobs/zuul.d/project-templates.yaml > > > loading jobs from ../openstack-zuul-jobs/zuul.d/jobs.yaml > > > looking for settings for openstack/designate > > > looking at template 'openstack-python-jobs' > > > looking at template 'openstack-python35-jobs' > > > looking at template 'publish-openstack-sphinx-docs' > > > looking at template 'periodic-stable-jobs' > > > looking at template 'check-requirements' > > > did not find template definition for 'check-requirements' > > > looking at template 'translation-jobs-master-stable' > > > looking at template 'release-notes-jobs' > > > looking at template 'api-ref-jobs' > > > looking at template 'install-guide-jobs' > > > looking at template 'release-openstack-server' > > > filtering on master > > > merging templates > > > adding openstack-python-jobs > > > adding openstack-python35-jobs > > > adding publish-openstack-sphinx-docs > > > adding periodic-stable-jobs > > > adding check-requirements > > > adding release-notes-jobs > > > adding install-guide-jobs > > > merging pipeline check > > > *unhashable type: 'CommentedMap'* > > > *Traceback (most recent call last):* > > > * File > > > > 
"/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", > > > line 402, in run_subcommand* > > > * result = cmd.run(parsed_args)* > > > * File > > > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/command.py", > > > line 184, in run* > > > * return_code = self.take_action(parsed_args) or 0* > > > * File > > > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > > line 531, in take_action* > > > * entry,* > > > * File > > > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > > line 397, in merge_project_settings* > > > * up.get(pipeline, comments.CommentedMap()),* > > > * File > > > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > > line 362, in merge_pipeline* > > > * if job_name in job_names:* > > > *TypeError: unhashable type: 'CommentedMap'* > > > *Traceback (most recent call last):* > > > * File > "/home/stack/python3-first/goal-tools/.tox/venv/bin/python3-first", > > > line 10, in * > > > * sys.exit(main())* > > > * File > > > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/main.py", > > > line 42, in main* > > > * return Python3First().run(argv)* > > > * File > > > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", > > > line 281, in run* > > > * result = self.run_subcommand(remainder)* > > > * File > > > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", > > > line 402, in run_subcommand* > > > * result = cmd.run(parsed_args)* > > > * File > > > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/command.py", > > > line 184, in run* > > > * return_code = self.take_action(parsed_args) or 0* > > > * File > > > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > > line 531, in take_action* > > > * entry,* > > > * File > > > > 
"/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > > line 397, in merge_project_settings* > > > * up.get(pipeline, comments.CommentedMap()),* > > > * File > > > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > > line 362, in merge_pipeline* > > > * if job_name in job_names:* > > > *TypeError: unhashable type: 'CommentedMap'* > > > *+ echo 'No changes'* > > > *No changes* > > > *+ exit 1* > > > > > > On Wed, Aug 8, 2018 at 7:58 AM Doug Hellmann > wrote: > > > > > > > Champions, > > > > > > > > I have made quite a few changes to the tools for generating the zuul > > > > migration patches today. If you have any patches you generated > locally > > > > for testing, please check out the latest version of the tool (when > all > > > > of the changes merge) and regenerate them. > > > > > > > > Doug > > > > > > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ynisha11 at gmail.com Sat Aug 18 04:54:30 2018 From: ynisha11 at gmail.com (Nisha Yadav) Date: Sat, 18 Aug 2018 10:24:30 +0530 Subject: [openstack-dev] [Openstack] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Hey all, Victoria you are an inspiration! Going through your blog when I embarked on the OpenStack journey gave me a lot of motivation. It was a pleasure working with you. 
Thanks for all your support and hard work. Good luck Samuel, great to hear. Cheers to Outreachy and OpenStack! Best regards, Nisha On Sat, Aug 18, 2018 at 1:37 AM, Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com> wrote: > Thanks everyone for your words! > > I really love the OpenStack community and I'm glad I could contribute back > with this. > > Samuel has been a great mentor for Outreachy in several rounds and I > believe he will excel as coordinator along with Mahati. Thanks for > volunteer for this Samuel! > > All the best, > > Victoria > > 2018-08-17 14:56 GMT-03:00 Samuel de Medeiros Queiroz >: > >> Hi all, >> >> As someone who cares for this cause and participated twice in this >> program as a mentor, I'd like to candidate as program coordinator. >> >> Victoria, thanks for all your lovely work. You are awesome! >> >> Best regards, >> Samuel >> >> >> On Thu, Aug 9, 2018 at 6:51 PM Kendall Nelson >> wrote: >> >>> You have done such amazing things with the program! We appreciate >>> everything you do :) Enjoy the little extra spare time. >>> >>> -Kendall (daiblo_rojo) >>> >>> >>> On Tue, Aug 7, 2018 at 4:48 PM Victoria Martínez de la Cruz < >>> victoria at vmartinezdelacruz.com> wrote: >>> >>>> Hi all, >>>> >>>> I'm reaching you out to let you know that I'll be stepping down as >>>> coordinator for OpenStack next round. I had been contributing to this >>>> effort for several rounds now and I believe is a good moment for somebody >>>> else to take the lead. You all know how important is Outreachy to me and >>>> I'm grateful for all the amazing things I've done as part of the Outreachy >>>> program and all the great people I've met in the way. I plan to keep >>>> involved with the internships but leave the coordination tasks to somebody >>>> else. >>>> >>>> If you are interested in becoming an Outreachy coordinator, let me know >>>> and I can share my experience and provide some guidance. 
>>>> >>>> Thanks, >>>> >>>> Victoria >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Sat Aug 18 09:53:10 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Sat, 18 Aug 2018 11:53:10 +0200 Subject: [openstack-dev] [neutron] Broken pep8 job In-Reply-To: References: Message-ID: <08EB38DB-65CF-44A0-9183-4B2FD2A99BD1@redhat.com> Hi, Patch [1] is merged now so You can rebase Your patches for master branch and pep8 should be fine now. [1] https://review.openstack.org/#/c/592884/ > Wiadomość napisana przez Slawomir Kaplonski w dniu 17.08.2018, o godz. 10:16: > > Hi, > > It looks that pep8 job in Neutron is currently broken because of new version of bandit (1.5.0). 
> If You have in Your patch failure of pep8 job with error like [1] please don’t recheck as it will not help. > I did some patch which should fix it [2]. Will let You know when it will be fixed and You will be able to rebase You patches. > > [1] http://logs.openstack.org/37/382037/67/check/openstack-tox-pep8/e2bbd84/job-output.txt.gz#_2018-08-16_21_45_55_366148 > [2] https://review.openstack.org/#/c/592884/ > > — > Slawek Kaplonski > Senior software engineer > Red Hat > — Slawek Kaplonski Senior software engineer Red Hat From jayamiact at gmail.com Sat Aug 18 10:59:02 2018 From: jayamiact at gmail.com (Hhhtyh ByNhb) Date: Sat, 18 Aug 2018 17:59:02 +0700 Subject: [openstack-dev] [kolla-ansible] unable to install kolla-ansible Message-ID: Hi All, I tried to install openstack kolla by following the kolla documentation: https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html When running this command: "kolla-ansible -i ./multinode bootstrap-servers", I observed the following error: Error message is "fatal: [control01]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'ipv4'\n\nThe error appears to have been in '/usr/local/share/kolla-ansible/ansible/roles/baremetal/tasks/pre-install.yml': line 19, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generate /etc/hosts for all of the nodes\n ^ here\n"} Command failed ansible-playbook -i ./multinode -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla -e action=bootstrap-servers /usr/local/share/kolla-ansible/ansible/kolla-host.yml any suggestion? BR//jaya -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dabarren at gmail.com Sat Aug 18 11:10:48 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Sat, 18 Aug 2018 13:10:48 +0200 Subject: [openstack-dev] [kolla-ansible] unable to install kolla-ansible In-Reply-To: References: Message-ID: Hi, the interface name must be the same for all nodes, including localhost (the deployment host). If the iface names are not the same across all the hosts, you will have to: - Comment out network_interface (or whichever interface variable differs) - Set the variable with an appropriate value in the inventory file for each host. For example: [compute] node1 network_interface=eth1 node2 network_interface=eno1 Regards On Sat, Aug 18, 2018, 12:59 PM Hhhtyh ByNhb wrote: > Hi All, > I tried to install openstack kolla by following kolla documentation: > https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html > > When doing this command: "kolla-ansible -i ./multinode bootstrap-servers", > I observed following error: > Error message is "fatal: [control01]: FAILED! => {"msg": "The task > includes an option with an undefined variable. The error was: 'dict object' > has no attribute 'ipv4'\n\nThe error appears to have been in > '/usr/local/share/kolla-ansible/ansible/roles/baremetal/tasks/pre-install.yml': > line 19, column 3, but may\nbe elsewhere in the file depending on the exact > syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generate > /etc/hosts for all of the nodes\n ^ here\n"} > Command failed ansible-playbook -i ./multinode -e @/etc/kolla/globals.yml > -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla -e > action=bootstrap-servers > /usr/local/share/kolla-ansible/ansible/kolla-host.yml > > any suggestion? 
> > BR//jaya > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Sat Aug 18 12:25:25 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Sat, 18 Aug 2018 13:25:25 +0100 (BST) Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <1534528706-sup-7156@lrrr.local> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: On Fri, 17 Aug 2018, Doug Hellmann wrote: > If we ignore the political concerns in the short term, are there > other projects actually interested in using placement? With what > technical caveats? Perhaps with modifications of some sort to support > the needs of those projects? I think ignoring the political concerns (in any term) is not possible. We are a group of interacting humans, politics are always present. Cordial but active debate to determine the best course of action is warranted. (tl;dr: Let's have existing and potential placement contributors decide its destiny.) Five topics I think are relevant here, in order of politics, least to most: 1. Placement has been designed from the outset to have a hard contract between it and the services that use it. Being embedded and/or deeply associated with one other single service means that that contract evolves in a way that is strongly coupled. We made placement have an HTTP API, not use RPC, and not produce or consume notifications because it is supposed to be bounded and independent. Sharing code and human management doesn't enable that. As you'll read below, placement's progress has been overly constrained by compute. 2. 
There are other projects actively using placement, not merely interested. If you search codesearch.o.o for terms like "resource provider" you can find them. But to rattle off those that I'm aware of (which I'm certain is an incomplete list): * Cyborg is actively working on using placement to track FPGA e.g., https://review.openstack.org/#/c/577438/ * Blazar is working on using them for reservations: https://review.openstack.org/#/q/status:open+project:openstack/blazar+branch:master+topic:bp/placement-api * Neutron has been reporting to placement for some time and has work in progress on minimum bandwidth handling with the help of placement: https://review.openstack.org/#/q/status:open+project:openstack/neutron-lib+branch:master+topic:minimum-bandwidth-allocation-placement-api * Ironic uses resource classes to describe types of nodes * Mogan (which may or may not be dead, not clear) was intending to track nodes with placement: http://git.openstack.org/cgit/openstack/mogan-specs/tree/specs/pike/approved/track-resources-using-placement.rst * Zun is working to use placement for "unified resource management": https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management * Cinder has had discussion about using placement to overcome race conditions in its existing scheduling subsystem (a purpose to which placement was explicitly designed). 3. Placement's direction and progress is heavily curtailed by the choices and priorities that compute wants or needs to make. That means that for the past year or more much of the effort in placement has been devoted to eventually satisfying NFV use cases driven by "enhanced platform awareness" to the detriment of the simple use case of "get me some resource providers". Compute is under a lot of pressure in this area, and is under-resourced, so placement's progress is delayed by being in the (necessarily) narrow engine of compute. 
Similarly, compute's overall progress is delayed because a lot of attention is devoted to placement. I think the relevance of that latter point has been under-estimated by the voices that are hoping to keep placement near to nova. The concern there has been that we need to continue iterating in concert and quickly. I disagree with that from two angles. One is that we _will_ continue to work in concert. We are OpenStack, and presumably all the same people working on placement now will continue to do so, and many of those are active contributors to nova. We will work together. The other angle is that, actually, placement is several months ahead of nova in terms of features and it would be to everyone's advantage if placement, from a feature standpoint, took a time out (to extract) while nova had a chance to catch up with fully implementing shared providers, nested resource providers, consumer generations, resource request groups, using the reshaper properly from the virt drivers, having a fast forward upgrade script talking to PlacementDirect, and other things that I'm not remembering right now. The placement side for those things is in place. The work that it needs now is a _diversity_ of callers (not just nova) so that the features can be fully exercised and bugs and performance problems found. The projects above, which might like to--and at various times have expressed desire to do so--work on features within placement that would benefit their projects, are forced to compete with existing priorities to get blueprint attention. Though runways seemed to help a bit on that front this just-ending cycle, it's simply too dense a competitive environment for good, clean progress. 4. While extracting the placement code into another repo within the compute umbrella might help a small amount with some of the competition described in item 3, it would be insufficient. The same forces would apply. 
Similarly, _if_ there are factors which are preventing some people from being willing to participate with a compute-associated project, a repo within compute is an insufficient break. Also, if we are going to go to the trouble of doing any kind of disruptive transition of the placement code, we may as well take as big a step as possible in this one instance, as these opportunities are rare and our capacity for change is slow. I started working on placement in early 2016; at that time we had plans to extract it to "its own thing". We've passed the half-way point in 2018. 5. In OpenStack we have a tradition of the contributors having a strong degree of self-determination. If that tradition is to be upheld, then it would make sense that the people who designed and wrote the code that is being extracted would get to choose what happens with it. As much as Mel's and Dan's (only picking on them here because they are the dissenting voices that have shown up so far) input has been extremely important and helpful in the evolution of placement, they are not those people. So my hope is that (in no particular order) Jay Pipes, Eric Fried, Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to placement whom I'm forgetting [1] would express their preference on what they'd like to see happen. At the same time, if people from neutron, cinder, blazar, zun, mogan, ironic, and cyborg could express their preferences, we can get through this by acclaim and get on with getting things done. Thank you. [1] My apologies if I have left you out. It's Saturday, I'm tired from trying to make this happen for so long, and I'm using various forms of git blame and git log to extract names from the git history and there's some degree of magic and guessing going on. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From cdent+os at anticdent.org Sat Aug 18 12:35:32 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Sat, 18 Aug 2018 13:35:32 +0100 (BST) Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <20180817171307.s7hvfs6avi4mvu2d@barron.net> References: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> <9729c622-a39e-d2b5-2b0b-f355153d9444@gmail.com> <20180817171307.s7hvfs6avi4mvu2d@barron.net> Message-ID: On Fri, 17 Aug 2018, Tom Barron wrote: > Has there been a discussion on record of how use of placement by cinder would > affect "standalone" cinder (or manila) initiatives where there is a desire to > be able to run cinder by itself (with no-auth) or just with keystone (where > OpenStack style multi-tenancy is desired)? This has been sort of glancingly addressed elsewhere in the thread, but I wanted to make it explicit: * It's possible now to run placement with faked auth (the noauth2 concept) or keystone. Making auth handling more flexible would be a matter of choosing a different piece of middleware. * Partly driven by discussion with Cinder people and also with fast forward upgrade people, there's a feature in placement called "PlacementDirect". This makes it possible to interact with placement in the same process as the thing that is using it, rather than over HTTP. So no additional placement server is required, if that's how people want it. More info at: https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/direct.py http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/reshape-provider-tree.html#direct-interface-to-placement However, since placement is lightweight (a simple-ish wsgi app over some database tables) it is likely easier just to run it like normal, maybe in some containers to allow it to scale up and down easily. 
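To make the "same process, no HTTP" point concrete, here is a toy sketch of the pattern PlacementDirect relies on: a WSGI application is just a callable, so a co-located caller can build an environ by hand and invoke it directly, with no listener or socket involved. This is an illustration only -- the app and helper below are hypothetical stand-ins, not the real placement application or the real PlacementDirect API; see direct.py linked above for that.

```python
import io
import json

def toy_placement_app(environ, start_response):
    # Hypothetical stand-in for the real placement WSGI application.
    body = json.dumps({"resource_providers": []}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

def call_in_process(app, method, path):
    # Build a minimal WSGI environ by hand -- no HTTP server required.
    environ = {
        "REQUEST_METHOD": method,
        "PATH_INFO": path,
        "SERVER_NAME": "direct",
        "SERVER_PORT": "80",
        "SERVER_PROTOCOL": "HTTP/1.1",
        "wsgi.version": (1, 0),
        "wsgi.url_scheme": "http",
        "wsgi.input": io.BytesIO(b""),
        "wsgi.errors": io.StringIO(),
    }
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers
    body = b"".join(app(environ, start_response))
    return captured["status"], json.loads(body.decode("utf-8"))

status, data = call_in_process(toy_placement_app, "GET", "/resource_providers")
print(status, data)
```

As I read direct.py, the real thing dresses this same idea up with wsgi-intercept and a keystoneauth adapter, so callers keep the familiar client interface while the "request" never leaves the process.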
If you have a look at https://github.com/cdent/placedock and some of the links in the README, the flexibility and lightness may become a bit more clear. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Sat Aug 18 12:56:14 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sat, 18 Aug 2018 08:56:14 -0400 Subject: [openstack-dev] [goal][python3] more updates to the goal tools In-Reply-To: References: <1533682621-sup-2284@lrrr.local> <1534516186-sup-9875@lrrr.local> <1534517202-sup-3596@lrrr.local> Message-ID: <1534596935-sup-867@lrrr.local> https://review.openstack.org/#/c/593289/ should fix the similar problem you found with CommentedSeq. Doug Excerpts from super user's message of 2018-08-18 11:15:20 +0900: > The problem was fixed. > > Nguyen Hai > > On Fri, Aug 17, 2018 at 11:47 PM Doug Hellmann > wrote: > > > I was not able to reproduce the problem. Please test the fix in > > https://review.openstack.org/#/c/593068/ to see if that helps. > > > > Which version of Python are you using to run the tools? And on which OS? > > > > Excerpts from Doug Hellmann's message of 2018-08-17 10:30:29 -0400: > > > I will work on fixing this today. > > > > > > Has the designate team agreed to go ahead with their migration, or > > > are you still testing the scripts? > > > > > > Doug > > > > > > Excerpts from super user's message of 2018-08-17 15:37:03 +0900: > > > > Hi Doug, > > > > > > > > I'm Nguyen Hai. I proposed the python3-first patch set for > > > > designate projects. 
However, I have met this error to designate and > > > > designate-dashboard: > > > > > > > > === ../Output/designate/openstack/designate @ master === > > > > > > > > ./tools/python3-first/do_repo.sh > > ../Output/designate/openstack/designate > > > > master 24292 > > > > > > > > ++ cat ../Output/designate/openstack/designate/.gitreview > > > > ++ grep project > > > > ++ cut -f2 -d= > > > > + actual=openstack/designate.git > > > > +++ dirname ../Output/designate/openstack/designate > > > > ++ basename ../Output/designate/openstack > > > > ++ basename ../Output/designate/openstack/designate > > > > + expected=openstack/designate > > > > + '[' openstack/designate.git '!=' openstack/designate -a > > > > openstack/designate.git '!=' openstack/designate.git ']' > > > > + git -C ../Output/designate/openstack/designate review -s > > > > Creating a git remote called 'gerrit' that maps to: > > > > ssh:// > > > > nguyentrihai at review.openstack.org:29418/openstack/designate.git > > > > ++ basename master > > > > + new_branch=python3-first-master > > > > + git -C ../Output/designate/openstack/designate branch > > > > + grep -q python3-first-master > > > > + echo 'creating python3-first-master' > > > > creating python3-first-master > > > > + git -C ../Output/designate/openstack/designate checkout -- . 
> > > > + git -C ../Output/designate/openstack/designate clean -f -d > > > > + git -C ../Output/designate/openstack/designate checkout -q > > origin/master > > > > + git -C ../Output/designate/openstack/designate checkout -b > > > > python3-first-master > > > > Switched to a new branch 'python3-first-master' > > > > + python3-first -v --debug jobs update > > > > ../Output/designate/openstack/designate > > > > determining repository name from .gitreview > > > > working on openstack/designate @ master > > > > looking for zuul config in > > > > ../Output/designate/openstack/designate/.zuul.yaml > > > > using zuul config from > > ../Output/designate/openstack/designate/.zuul.yaml > > > > loading project settings from ../project-config/zuul.d/projects.yaml > > > > loading project templates from > > > > ../openstack-zuul-jobs/zuul.d/project-templates.yaml > > > > loading jobs from ../openstack-zuul-jobs/zuul.d/jobs.yaml > > > > looking for settings for openstack/designate > > > > looking at template 'openstack-python-jobs' > > > > looking at template 'openstack-python35-jobs' > > > > looking at template 'publish-openstack-sphinx-docs' > > > > looking at template 'periodic-stable-jobs' > > > > looking at template 'check-requirements' > > > > did not find template definition for 'check-requirements' > > > > looking at template 'translation-jobs-master-stable' > > > > looking at template 'release-notes-jobs' > > > > looking at template 'api-ref-jobs' > > > > looking at template 'install-guide-jobs' > > > > looking at template 'release-openstack-server' > > > > filtering on master > > > > merging templates > > > > adding openstack-python-jobs > > > > adding openstack-python35-jobs > > > > adding publish-openstack-sphinx-docs > > > > adding periodic-stable-jobs > > > > adding check-requirements > > > > adding release-notes-jobs > > > > adding install-guide-jobs > > > > merging pipeline check > > > > *unhashable type: 'CommentedMap'* > > > > *Traceback (most recent call 
last):* > > > > * File > > > > > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", > > > > line 402, in run_subcommand* > > > > * result = cmd.run(parsed_args)* > > > > * File > > > > > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/command.py", > > > > line 184, in run* > > > > * return_code = self.take_action(parsed_args) or 0* > > > > * File > > > > > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > > > line 531, in take_action* > > > > * entry,* > > > > * File > > > > > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > > > line 397, in merge_project_settings* > > > > * up.get(pipeline, comments.CommentedMap()),* > > > > * File > > > > > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > > > line 362, in merge_pipeline* > > > > * if job_name in job_names:* > > > > *TypeError: unhashable type: 'CommentedMap'* > > > > *Traceback (most recent call last):* > > > > * File > > "/home/stack/python3-first/goal-tools/.tox/venv/bin/python3-first", > > > > line 10, in * > > > > * sys.exit(main())* > > > > * File > > > > > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/main.py", > > > > line 42, in main* > > > > * return Python3First().run(argv)* > > > > * File > > > > > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", > > > > line 281, in run* > > > > * result = self.run_subcommand(remainder)* > > > > * File > > > > > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/app.py", > > > > line 402, in run_subcommand* > > > > * result = cmd.run(parsed_args)* > > > > * File > > > > > > "/home/stack/python3-first/goal-tools/.tox/venv/lib/python3.6/site-packages/cliff/command.py", > > > > line 184, in run* > > > > * return_code = self.take_action(parsed_args) or 0* > > > > * File > > > > > > 
"/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > > > line 531, in take_action* > > > > * entry,* > > > > * File > > > > > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > > > line 397, in merge_project_settings* > > > > * up.get(pipeline, comments.CommentedMap()),* > > > > * File > > > > > > "/home/stack/python3-first/goal-tools/goal_tools/python3_first/jobs.py", > > > > line 362, in merge_pipeline* > > > > * if job_name in job_names:* > > > > *TypeError: unhashable type: 'CommentedMap'* > > > > *+ echo 'No changes'* > > > > *No changes* > > > > *+ exit 1* > > > > > > > > On Wed, Aug 8, 2018 at 7:58 AM Doug Hellmann > > wrote: > > > > > > > > > Champions, > > > > > > > > > > I have made quite a few changes to the tools for generating the zuul > > > > > migration patches today. If you have any patches you generated > > locally > > > > > for testing, please check out the latest version of the tool (when > > all > > > > > of the changes merge) and regenerate them. > > > > > > > > > > Doug > > > > > > > > > > > > __________________________________________________________________________ > > > > > OpenStack Development Mailing List (not for usage questions) > > > > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From doug at doughellmann.com Sat Aug 18 13:10:04 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sat, 18 Aug 2018 09:10:04 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: <1534597467-sup-947@lrrr.local> Excerpts from Chris Dent's message of 2018-08-18 13:25:25 +0100: > > 2. There are other projects actively using placement, not merely > interested. If you search codesearch.o.o for terms like "resource > provider" you can find them. But to rattle off those that I'm aware > of (which I'm certain is an incomplete list): This is the bit I was trying to ask about, and it sounds like the answer is clearly "yes, there are other services using placement". If the answer had been "no, there is no interest" then it would not make sense to go further. Now, as you point out, the next step is to find out from the contributors to placement what they want the ultimate home for the service to be, and what steps need to be taken to reach that point. Doug From aj at suse.com Sat Aug 18 17:39:19 2018 From: aj at suse.com (Andreas Jaeger) Date: Sat, 18 Aug 2018 19:39:19 +0200 Subject: [openstack-dev] [astara] Retirement of astara repos? In-Reply-To: References: <572FF9CF-9AB5-4CBA-A4C8-26E7A012309E@gmx.com> <0DE3CB09-5CA1-4557-9158-C40F0FC37E6E@mcclain.xyz> Message-ID: Mark, shall I start the retirement of astara now? I would appreciate a "go ahead" - unless you want to do it yourself... Andreas On 2018-02-23 14:34, Andreas Jaeger wrote: > On 2018-01-11 22:55, Mark McClain wrote: >> Sean, Andreas- >> >> Sorry I missed Andres’ message earlier in December about retiring astara. Everyone is correct that development stopped a good while ago. We attempted in Barcelona to find others in the community to take over the day-to-day management of the project. Unfortunately, nothing sustained resulted from that session. >> >> I’ve intentionally delayed archiving the repos because of background conversations around restarting active development for some pieces bubble up from time-to-time. 
I’ll contact those I know were interested and try for a resolution to propose before the PTG. > > Mark, any update here? -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From jayamiact at gmail.com Sat Aug 18 17:47:45 2018 From: jayamiact at gmail.com (Hhhtyh ByNhb) Date: Sun, 19 Aug 2018 00:47:45 +0700 Subject: [openstack-dev] [Openstack] [kolla-ansible] unable to install kolla-ansible In-Reply-To: References: Message-ID: Hi Eduardo, Thanks for your suggestion which is very helpful. Indeed, network_interface is not same in localhost and other nodes. Furthermore, the reason it failed is "network_interface" must have configured IPv4 address and up. This is not mentioned *explicitly *in the quick start documentation. To help someone like me (if any in the future), i've created bug report in the following url https://bugs.launchpad.net/kolla-ansible/+bug/1787750 Thanks again! Regards, J On Sat, Aug 18, 2018 at 6:25 PM Eduardo Gonzalez wrote: > Hi, the interface name must be the same for all nodes including localhost > (deployment host). If the iface names are not the same along all the hosts > will have to: > > - Comment network_interface (or the interface var which name differs) > - Set the variable with an appropriate value at inventory file on each > host. In example: > [compute] > node1 network_interface=eth1 > node2 network_interface=eno1 > > Regards > > On Sat, Aug 18, 2018, 12:59 PM Hhhtyh ByNhb wrote: > >> Hi All, >> I tried to install openstack kolla by following kolla documentation: >> https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html >> >> When doing this command: "kolla-ansible -i ./multinode >> bootstrap-servers", >> I observed following error: >> Error message is "fatal: [control01]: FAILED! 
=> {"msg": "The task >> includes an option with an undefined variable. The error was: 'dict object' >> has no attribute 'ipv4'\n\nThe error appears to have been in >> '/usr/local/share/kolla-ansible/ansible/roles/baremetal/tasks/pre-install.yml': >> line 19, column 3, but may\nbe elsewhere in the file depending on the exact >> syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generate >> /etc/hosts for all of the nodes\n ^ here\n"} >> Command failed ansible-playbook -i ./multinode -e @/etc/kolla/globals.yml >> -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla -e >> action=bootstrap-servers >> /usr/local/share/kolla-ansible/ansible/kolla-host.yml >> >> any suggestion? >> >> BR//jaya >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Sat Aug 18 22:22:09 2018 From: openstack at fried.cc (Eric Fried) Date: Sat, 18 Aug 2018 17:22:09 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: > So my hope is that (in no particular order) Jay Pipes, Eric Fried, > Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, > Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to > placement whom I'm forgetting [1] would express their preference on > what they'd like to see happen. Extract now, as a fully-independent project, under governance right out of the gate. A year ago we might have developed a feature where one patch would straddle placement and nova. Six months ago we were developing features where those patches were separate but in the same series. Today that's becoming less and less the case: nrp, sharing providers, consumer generations, and other things mentioned have had their placement side completed and their nova side - if started at all - done completely independently. The reshaper series is an exception - but looking back on its development, Depends-On would have worked just as well. Agree with the notion that nova needs to catch up with placement features, and would therefore actually *benefit* from a placement "feature freeze". Agree the nova project is overloaded and would benefit from having broader core reviewer coverage over placement code. The list Chris gives above includes more than one non-nova core who should be made placement cores as soon as that's a thing. The fact that other projects are in various stages of adopting/using placement in various capacities is a great motive to extract. But IMO the above would be sufficient reason without that. Plus other things that other people have said. Do it. Do it completely. Do it now. -efried . 
From mriedemos at gmail.com Sun Aug 19 02:53:10 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 19 Aug 2018 10:53:10 +0800 Subject: [openstack-dev] [PTL][TC] Stein Cycle Goals In-Reply-To: <20180813152247.GA25512@sm-workstation> References: <20180813152247.GA25512@sm-workstation> Message-ID: <3f2c8ccf-406d-c5c8-8a96-fed8c3719798@gmail.com> On 8/13/2018 11:22 PM, Sean McGinnis wrote: > Support Pre Upgrade Checks (upgrade-checkers) > --------------------------------------------- > One of the hot topics we've been discussing for some time at Forum and PTG > events has been making upgrades better. To that end, we want to add tooling for > each service to provide an "upgrade checker" tool that can check for various > known issues so we can either give operators some assurance that they are ready > to upgrade, or to let them know if some step was overlooked that will need to > be done before attempting the upgrade. > > This goal follows the Nova `nova-status upgrade check` command precedent to > make it a consistent capability for each service. The checks should look for > things like missing or changed configuration options, incompatible object > states, or other conditions that could lead to failures upgrading that project. > > More details can be found in the goal: > > https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html > > Thanks to Matt Riedemann for championing this goal. I've been traveling for the past week but plan on writing up some more thorough developer documentation for this including examples of the checks added to nova since they don't all follow the same pattern. I also plan on starting an etherpad where I will try and go through some of the upgrade release notes for the core projects looking for candidates so those projects can see what to look for. I know that's late for Rocky but should give some ideas for Stein. Feel free to reach out to me with any questions though.
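The common shape of such a checker — loosely modeled on `nova-status upgrade check` — is a set of check functions that each return a severity, with the command exiting with the worst result seen. The check names and conditions below are illustrative, not nova's actual checks:

```python
from enum import IntEnum


class Code(IntEnum):
    SUCCESS = 0   # ready to upgrade
    WARNING = 1   # something to review; upgrade can proceed
    FAILURE = 2   # must be fixed before upgrading


def check_placement_api(config):
    # Hypothetical check: a required endpoint must be configured.
    if not config.get("placement_endpoint"):
        return Code.FAILURE, "placement endpoint is not configured"
    return Code.SUCCESS, ""


def check_deprecated_options(config):
    # Hypothetical check: warn about options removed in the next release.
    if "old_option" in config:
        return Code.WARNING, "'old_option' is deprecated; migrate before upgrading"
    return Code.SUCCESS, ""


def run_upgrade_checks(config, checks):
    """Run every check and report the worst severity as the exit code."""
    results = [check(config) for check in checks]
    for code, detail in results:
        print(f"{code.name}: {detail or 'ok'}")
    return max(code for code, _ in results)


exit_code = run_upgrade_checks(
    {"placement_endpoint": "http://placement", "old_option": True},
    [check_placement_api, check_deprecated_options],
)
# exit_code == Code.WARNING
```

Deployment tooling can then gate the actual upgrade on the exit code: 0 proceeds, 1 proceeds with a notice, 2 aborts.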
-- Thanks, Matt From mriedemos at gmail.com Sun Aug 19 03:20:09 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 18 Aug 2018 22:20:09 -0500 Subject: [openstack-dev] [Openstack-operators][nova] deployment question consultation In-Reply-To: References: Message-ID: On 8/13/2018 9:30 PM, Rambo wrote: >        1.Only in one region situation,what will happen in the cloud as > expansion of cluster size?Then how solve it?If have the limit physical > node number under the one region situation?How many nodes would be the > best in one regione? This question seems a bit too open-ended and completely subjective. >        2.When to use cellV2 is most suitable in cloud? When this has been asked in the past, the best answer I've heard is, "whatever your current DB and MQ limits are for nova". So if that's about 200 hosts before the DB/MQ are struggling, then that could be a cell. For reference, CERN has 70 cells with ~200 hosts per cell. However, at least one public cloud is approaching cells with fewer cells and thousands of hosts per cell. So it varies based on where your limitations lie. Also note that cells do not have to be defined by DB/MQ limits, they can also be used as a way to shard hardware and instance (flavor) types. For example, generation 1 hardware in cell1, gen2 hardware in cell2, etc. >        3.How to shorten the time of batch creation of instance? This again is completely subjective. It would depend on the configuration, size of nova deployment, size of hardware, available capacity, etc. Have you done profiling to point out *specific* problem areas during multi-create, for example, are you packing VMs onto as few hosts as possible to reduce costs? And if so, are you hitting problems with that due to rescheduling the server build because you have multiple scheduler workers picking the same host(s) for a subset of the VMs in the request? Or are you hitting RPC timeouts during select_destinations?
If so, that might be related to the problem described in [1]. [1] https://review.openstack.org/#/c/510235/ -- Thanks, Matt From mriedemos at gmail.com Sun Aug 19 03:21:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 18 Aug 2018 22:21:03 -0500 Subject: [openstack-dev] [Openstack-operators][nova] deployment question consultation In-Reply-To: References: Message-ID: <51390cd8-8495-5c34-c109-a25e5165e4f3@gmail.com> +ops list On 8/18/2018 10:20 PM, Matt Riedemann wrote: > On 8/13/2018 9:30 PM, Rambo wrote: >>         1.Only in one region situation,what will happen in the cloud >> as expansion of cluster size?Then how solve it?If have the limit >> physical node number under the one region situation?How many nodes >> would be the best in one regione? > > This question seems a bit too open-ended and completely subjective. > >>         2.When to use cellV2 is most suitable in cloud? > > When this has been asked in the past, the best answer I've heard is, > "whatever your current DB and MQ limits are for nova". So if that's > about 200 hosts before the DB/MQ are struggling, then that could a cell. > For reference, CERN has 70 cells with ~200 hosts per cell. However, at > least one public cloud is approaching cells with fewer cells and > thousands of hosts per cell. So it varies based on where your > limitations lie. Also note that cells do not have to be defined by DB/MQ > limits, they can also be used as a way to shard hardware and instance > (flavor) types. For example, generation 1 hardware in cell1, gen2 > hardware in cell2, etc. > >>         3.How to shorten the time of batch creation of instance? > > This again is completely subjective. It would depend on the > configuration, size of nova deployment, size of hardware, available > capacity, etc. Have you done profiling to point out *specific* problem > areas during multi-create, for example, are you packing VMs onto as few > hosts as possible to reduce costs? 
And if so, are you hitting problems > with that due to rescheduling the server build because you have multiple > scheduler workers picking the same host(s) for a subset of the VMs in > the request? Or are you hitting RPC timeouts during select_destinations? > If so, that might be related to the problem described in [1]. > > [1] https://review.openstack.org/#/c/510235/ > -- Thanks, Matt From mriedemos at gmail.com Sun Aug 19 03:28:38 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 18 Aug 2018 22:28:38 -0500 Subject: [openstack-dev] [nova] about live-resize down the instance In-Reply-To: <2ba474cf-5439-b293-be2b-e72d5325e07d@gmail.com> References: <5B71A7E1.1090709@windriver.com> <2ba474cf-5439-b293-be2b-e72d5325e07d@gmail.com> Message-ID: <1345610c-f6b1-5067-046f-67419e034a0a@gmail.com> On 8/13/2018 4:42 PM, melanie witt wrote: > From what I find in the PTG notes [1] and the spec, it looks like this > didn't go forward for lack of general interest. We have a lot of work to > review every cycle and we generally focus on functionality that impact > operators the most and look for +1s on specs from operators who are > interested in the features. From what I can tell from the > comments/votes, there isn't much/any operator interest about live-resize. > > As has been mentioned, resize down is hypervisor-specific whether or not > it's supported. For example, in the libvirt driver, resize down of > ephemeral disk is not allowed at all and resize down of root disk is > only allowed if the instance is boot-from-volume [2]. The xenapi driver > disallows resize down of ephemeral disk [3], the vmware driver disallows > resize down of root disk [4], the hyperv driver disallows resize down of > root disk [5]. > > So, allowing only live-resize up would be a way to behave consistently > across virt drivers. 
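Those per-driver rules could in principle be collapsed into one table-driven validation step. Below is a hypothetical sketch based only on the rules quoted above — nova actually implements these checks separately inside each virt driver:

```python
# (driver, disk) pairs for which resizing *down* is disallowed, per the
# rules cited above; the libvirt root disk is special-cased below because
# shrinking it is only allowed for boot-from-volume instances.
DISALLOW_SHRINK = {
    ("libvirt", "ephemeral"),
    ("xenapi", "ephemeral"),
    ("vmware", "root"),
    ("hyperv", "root"),
}


def resize_allowed(driver, disk, old_gb, new_gb, boot_from_volume=False):
    """Return True if resizing `disk` from old_gb to new_gb is permitted."""
    if new_gb >= old_gb:
        return True  # resizing up is always allowed
    if driver == "libvirt" and disk == "root":
        return boot_from_volume  # libvirt root shrink only when BFV
    return (driver, disk) not in DISALLOW_SHRINK


allowed = resize_allowed("libvirt", "root", 20, 10)
# allowed == False (not boot-from-volume)
```

Allowing only live-resize up, as suggested, would make a table like this unnecessary: the `new_gb >= old_gb` branch is the only one every driver agrees on.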
Somewhat related to this, but some feedback I got from our product teams this last week was they'd like to see the duplicate resource allocations during (cold) resize to same host fixed. Since Queens the migration record has the old flavor allocations and the instance holds the new flavor allocations, but the same-host compute node resource provider still has allocations from both during the resize, which might take it out of scheduling contention even though we only need to count the max() of any values between the old/new flavors. Our public cloud is very keen on maximizing efficient usage of hosts (packing) for cost reasons (obviously, and this is common) but this isn't just a public cloud cost savings thing. It's also an issue for, are you ready for this? **EDGE!!!** Simply because you could have one or two compute hosts at a site and can't afford the duplicate resource allocations in that case for a resize. Anyway, it's somewhat tangential to the live resize stuff, but it's an added complication in existing functionality that we should fix, and Kevin/Yikun/myself (one of us) plan on working on that in Stein. -- Thanks, Matt From ivolinengong at gmail.com Sun Aug 19 18:32:58 2018 From: ivolinengong at gmail.com (Ivoline Ngong) Date: Sun, 19 Aug 2018 18:32:58 +0000 Subject: [openstack-dev] New Contributor Message-ID: Hi all, I am Ivoline Ngong. I am a Cameroonian who lives in Turkey. I will love to contribute to Open source through OpenStack. I code in Java and Python and I think OpenStack is a good fit for me.I'll appreciate it if you can point me to the right direction on how I can get started. Best regards,Ivoline -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anlin.kong at gmail.com Sun Aug 19 22:08:13 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Mon, 20 Aug 2018 10:08:13 +1200 Subject: [openstack-dev] [qinling] Qinling dashboard demo Message-ID: Hi all, Thanks to the effort from Keiichi Hikita of qinling-dashboard team, we have an initial version of qinling-dashboard available(but unfortunately it's not included in Rocky). Keiichi Hikita also recorded a video for the introduction, you can see the demo here[1]. The team will continue to add more features that Qinling provides and improve the panels to make the function developers happy. Any feedback or suggestion is welcomed. [1]: https://youtu.be/fdySaFZb2cY Cheers, Lingxian Kong -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhengzhenyulixi at gmail.com Mon Aug 20 01:07:59 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Mon, 20 Aug 2018 09:07:59 +0800 Subject: [openstack-dev] [Searchlight] Reaching out to the Searchlight core members for Stein In-Reply-To: References: Message-ID: Hi, Thanks for stand up, I would like to continue work on SL. On Sat, Aug 18, 2018 at 12:16 AM Trinh Nguyen wrote: > Dear Searchlight team, > > As you may know, the Searchlight project has missed several milestones, > especially the Rocky cycle. The TC already has the plan to remove > Searchlight from governance [1] but I volunteer to take over it [2]. But > due to the unresponsive on IRC and launchpad, I send this email to reach > out to all the Searchlight core members to discuss our plan in Stein as > well as re-organize the team. Hopefully, this effort will work well and may > bring Searchlight back to life. > > If anyone on the core team sees this email, please reply. > > My IRC is dangtrinhnt. 
> > [1] https://review.openstack.org/#/c/588644/ > [2] https://review.openstack.org/#/c/590601/ > > Best regards, > > *Trinh Nguyen *| Founder & Chief Architect > > > > *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Mon Aug 20 01:40:37 2018 From: soulxu at gmail.com (Alex Xu) Date: Mon, 20 Aug 2018 09:40:37 +0800 Subject: [openstack-dev] [Nova] A multi-cell instance-list performance test In-Reply-To: References: Message-ID: 2018-08-17 2:44 GMT+08:00 Dan Smith : > > yes, the DB query was in serial, after some investigation, it seems > that we are unable to perform eventlet.mockey_patch in uWSGI mode, so > > Yikun made this fix: > > > > https://review.openstack.org/#/c/592285/ > > Cool, good catch :) > > > > > After making this change, we test again, and we got this kind of data: > > > > total collect sort view > > before monkey_patch 13.5745 11.7012 1.1511 0.5966 > > after monkey_patch 12.8367 10.5471 1.5642 0.6041 > > > > The performance improved a little, and from the log we can saw: > > Since these all took ~1s when done in series, but now take ~10s in > parallel, I think you must be hitting some performance bottleneck in > either case, which is why the overall time barely changes. Some ideas: > > 1. In the real world, I think you really need to have 10x database > servers or at least a DB server with plenty of cores loading from a > very fast (or separate) disk in order to really ensure you're getting > full parallelism of the DB work. However, because these queries all > took ~1s in your serialized case, I expect this is not your problem. > > 2. 
What does the network look like between the api machine and the DB? > > 3. What do the memory and CPU usage of the api process look like while > this is happening? > > Related to #3, even though we issue the requests to the DB in parallel, > we still process the result of those calls in series in a single python > thread on the API. That means all the work of reading the data from the > socket, constructing the SQLA objects, turning those into nova objects, > etc, all happens serially. It could be that the DB query is really a > small part of the overall time and our serialized python handling of the > result is the slow part. If you see the api process pegging a single > core at 100% for ten seconds, I think that's likely what is happening. > I remember I did a test on sqlalchemy, the sqlalchemy object construction is super slow than fetch the data from remote. Maybe you can try profile it, to figure out how much time spend on the wire, how much time spend on construct the object. http://docs.sqlalchemy.org/en/latest/faq/performance.html > > > so, now the queries are in parallel, but the whole thing still seems > > serial. > > In your table, you show the time for "1 cell, 1000 instances" as ~3s and > "10 cells, 1000 instances" as 10s. The problem with comparing those > directly is that in the latter, you're actually pulling 10,000 records > over the network, into memory, processing them, and then just returning > the first 1000 from the sort. A closer comparison would be the "10 > cells, 100 instances" with "1 cell, 1000 instances". In both of those > cases, you pull 1000 instances total from the db, into memory, and > return 1000 from the sort. In that case, the multi-cell situation is > faster (~2.3s vs. ~3.1s). You could also compare the "10 cells, 1000 > instances" case to "1 cell, 10,000 instances" just to confirm at the > larger scale that it's better or at least the same. 
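The listing strategy under discussion — each cell returns up to the page limit of already-sorted rows, and the API merges them and returns only the first limit overall — can be sketched with a heap merge. The per-cell data below is invented for illustration:

```python
import heapq
from itertools import islice
from operator import itemgetter

# Hypothetical per-cell query results, each already sorted by the sort key
# (created_at here) and each at most `limit` rows long: the worst case is
# that all of the first `limit` instances live in a single cell.
limit = 3
cell1 = [("2018-08-01", "c1-a"), ("2018-08-03", "c1-b"), ("2018-08-05", "c1-c")]
cell2 = [("2018-08-02", "c2-a"), ("2018-08-04", "c2-b"), ("2018-08-06", "c2-c")]

# Merge the sorted per-cell streams lazily and keep only the first `limit`
# rows overall; the remaining rows were fetched "just in case" and dropped.
merged = list(islice(heapq.merge(cell1, cell2, key=itemgetter(0)), limit))
# merged == [("2018-08-01", "c1-a"), ("2018-08-02", "c2-a"), ("2018-08-03", "c1-b")]
```

This is why "10 cells, 1000 instances" moves 10,000 rows over the network even though only 1000 are returned: the fetch cost scales with cells × limit, while the merge itself is cheap.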
> > We _have_ to pull $limit instances from each cell, in case (according to > the sort key) the first $limit instances are all in one cell. We _could_ > try to batch the results from each cell to avoid loading so many that we > don't need, but we punted this as an optimization to be done later. I'm > not sure it's really worth the complexity at this point, but it's > something we could investigate. > > --Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Mon Aug 20 03:57:24 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Mon, 20 Aug 2018 13:57:24 +1000 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: References: Message-ID: On 25/07/18 23:48, Ed Leafe wrote: > On Jun 6, 2018, at 7:35 PM, Gilles Dubreuil wrote: >> The branch is now available under feature/graphql on the neutron core repository [1]. > I wanted to follow up with you on this effort. I haven’t seen any activity on StoryBoard for several weeks now, and wanted to be sure that there was nothing blocking you that we could help with. > > > -- Ed Leafe > > > Hi Ed, Thanks for following up. There have been two essential counterproductive factors to the effort. The first is that I've been busy attending to issues in other parts of my job. The second one is the lack of response/follow-up from the Neutron core team. We have all the plumbing in place but we need to layer the data through oslo policies.
Cheers, Gilles From tobias.urdin at binero.se Mon Aug 20 07:12:29 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 20 Aug 2018 09:12:29 +0200 Subject: [openstack-dev] [Openstack-operators] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <01cc050e-c74b-a133-4020-6e0f219b7158@binero.se> <7a5ea840-687b-449a-75e0-d5fb9268e46a@binero.se> Message-ID: <07896a14-6327-8a9a-30f8-c58a8aa9c5eb@binero.se> Hello Kendall, I think you can just leave them in the group then, at your convenience. If they are there we can start using them if so. Best regards Tobias On 08/17/2018 11:08 PM, Kendall Nelson wrote: > > > On Fri, Aug 17, 2018 at 12:15 AM Tobias Urdin > wrote: > > Hello Kendall, > > I went through the list of projects [1] and could only really see > two things. > > 1) puppet-rally and puppet-openstack-guide is missing > > I had created the projects, but missed adding them to the group. They > should be there now :) > > 2) We have some support projects which doesn't really need bug > tracking, where some others do. >     You can remove puppet-openstack-specs and > puppet-openstack-cookiecutter all others would be >     nice to still have left so we can track bugs. [2] > > i can remove them from the group if you want, but I don't think I can > delete the projects entirely. > > Best regards > Tobias > > [1] https://storyboard-dev.openstack.org/#!/project_group/60 > > [2] Keeping puppet-openstack-integration (integration testing) and > puppet-openstack_spec_helper (helper for testing). >       These two usually has a lot of changes so would be good to > be able to track them. > > > On 08/16/2018 09:40 PM, Kendall Nelson wrote: >> Hey :) >> >> I created all the puppet openstack repos in the storyboard-dev >> envrionment and made a project group[1]. I am struggling a bit >> with finding all of your launchpad projects to perform the >> migrations through, can you share a list of all of them? 
>> >> -Kendall (diablo_rojo) >> >> [1] https://storyboard-dev.openstack.org/#!/project_group/60 >> >> >> On Wed, Aug 15, 2018 at 12:08 AM Tobias Urdin >> > wrote: >> >> Hello Kendall, >> >> Thanks for your reply, that sounds awesome! >> We can then dig around and see how everything looks when all >> project bugs are imported to stories. >> >> I see no issues with being able to move to Storyboard anytime >> soon if the feedback for >> moving is positive. >> >> Best regards >> >> Tobias >> >> >> On 08/14/2018 09:06 PM, Kendall Nelson wrote: >>> Hello! >>> >>> The error you hit can be resolved by adding launchpadlib to >>> your tox.ini if I recall correctly.. >>> >>> also, if you'd like, I can run a test migration of puppet's >>> launchpad projects into our storyboard-dev db (where I've >>> done a ton of other test migrations) if you want to see how >>> it looks/works with a larger db. Just let me know and I can >>> kick it off. >>> >>> As for a time to migrate, if you all are good with it, we >>> usually schedule for Friday's so there is even less >>> activity. Its a small project config change and then we just >>> need an infra core to kick off the script once the change >>> merges. >>> >>> -Kendall (diablo_rojo) >>> >>> On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin >>> > wrote: >>> >>> Hello all incredible Puppeters, >>> >>> I've tested setting up an Storyboard instance and test >>> migrated >>> puppet-ceph and it went without any issues there using >>> the documentation >>> [1] [2] >>> with just one minor issue during the SB setup [3]. >>> >>> My goal is that we will be able to swap to Storyboard >>> during the Stein >>> cycle but considering that we have a low activity on >>> bugs my opinion is that we could do this swap very >>> easily anything soon >>> as long as everybody is in favor of it. >>> >>> Please let me know what you think about moving to >>> Storyboard? 
>>> If everybody is in favor of it we can request a >>> migration to infra >>> according to documentation [2]. >>> >>> I will continue to test the import of all our project >>> while people are >>> collecting their thoughts and feedback :) >>> >>> Best regards >>> Tobias >>> >>> [1] >>> https://docs.openstack.org/infra/storyboard/install/development.html >>> [2] >>> https://docs.openstack.org/infra/storyboard/migration.html >>> [3] It failed with an error about launchpadlib not being >>> installed, >>> solved with `tox -e venv pip install launchpadlib` >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > If that's all good now, I can kick off test migrations but having a > complete list of the launchpad projects you maintain and use would be > super helpful so I don't miss any. Is there somewhere this is > documented? Or can you send me a list? > > -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dmellado at redhat.com Mon Aug 20 07:22:56 2018 From: dmellado at redhat.com (Daniel Mellado Area) Date: Mon, 20 Aug 2018 09:22:56 +0200 Subject: [openstack-dev] [kuryr] No meeting today Message-ID: Hi folks, since a couple of our core reviewers are on PTO today, we have decided not to host a meeting. If you have any questions just ping us at #openstack-kuryr Best! Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From pawel at suder.info Mon Aug 20 07:46:36 2018 From: pawel at suder.info (Paweł Suder) Date: Mon, 20 Aug 2018 09:46:36 +0200 Subject: [openstack-dev] [neutron] Bug deputy report week Aug 12th - Aug 19th Message-ID: <3c1aa7685951d269d614ef6a23cf6310@suder.info> Dear Neutron Team, I was the bug deputy for the week of Aug 12th - Aug 19th. Here's the summary of the bugs that were filed: Items that need attention or clarification: https://bugs.launchpad.net/neutron/+bug/1787385 - vpnaas and dynamic-routing missing neutron-tempest-plugin in test-requirements.txt https://bugs.launchpad.net/neutron/+bug/1787420 - Floating ip association to router interface should be restricted - Confirmed behavior - should it be like that? New: https://bugs.launchpad.net/neutron/+bug/1787534 - DNS extension broken for provider networks - configuration or code issue? Confirmed: https://bugs.launchpad.net/neutron/+bug/1786934 - Duplicating packet log when enable security group logging - another issue split from https://bugs.launchpad.net/neutron/+bug/1781372 In progress: https://bugs.launchpad.net/neutron/+bug/1786746 - issue with not deleted NFLOG - fix proposed https://review.openstack.org/#/c/591978/ https://bugs.launchpad.net/neutron/+bug/1787028 - neutron returned internal server error on updating tags - issue observed on gates - ERROR neutron.api.v2.resource StaleDataError: UPDATE statement on table 'standardattributes' expected to update 1 row(s); 0 were matched.
https://bugs.launchpad.net/neutron/+bug/1787106 - cannot ping over router between VMs in two different subnets, with allowed ICMP and set logging https://bugs.launchpad.net/neutron/+bug/1787119 - [Logging] firewall_group log resource and security_group log resource could not co-exist correctly Potentially RFE: https://bugs.launchpad.net/neutron/+bug/1787793 - Does not support shared N-S qos per-tenant - looks like an RFE question? https://bugs.launchpad.net/neutron/+bug/1787792 - Does not support ipv6 N-S qos - looks like an RFE question? Best regards, -- Paweł Suder From thierry at openstack.org Mon Aug 20 08:56:06 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 20 Aug 2018 10:56:06 +0200 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: Chris Dent wrote: > [...] > So my hope is that (in no particular order) Jay Pipes, Eric Fried, > Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, > Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to > placement whom I'm forgetting [1] would express their preference on > what they'd like to see happen. > > At the same time, if people from neutron, cinder, blazar, zun, > mogan, ironic, and cyborg could express their preferences, we can get > through this by acclaim and get on with getting things done. > [...] I fully support that existing and potential placement contributors decide its destiny. Upstream development work in OpenStack is (currently) organized in "project teams" (groups of people), not programs (domains). If the existing and potential contributors match an existing project team, then work can be placed within it. If it's just a very partial overlap, I'd recommend creating a specific team, especially if placement is expected to attract other contributors.
Notes: - the new project team "officialization" can be fast-tracked as this would be a split of official code, not new code - being in separate teams does not prevent cooperation or coordination -- Thierry Carrez (ttx) From thierry at openstack.org Mon Aug 20 09:01:09 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 20 Aug 2018 11:01:09 +0200 Subject: [openstack-dev] New Contributor In-Reply-To: References: Message-ID: <49b15368-a67b-dfcb-0501-6b527a42c71c@openstack.org> Ivoline Ngong wrote: > I am Ivoline Ngong. I am a Cameroonian who lives in Turkey. I will love > to contribute to Open source through OpenStack. I code in Java and > Python and I think OpenStack is a good fit for me. > I'll appreciate it if you can point me to the right direction on how I > can get started. Hi Ivoline, Welcome to the OpenStack community ! The OpenStack Technical Committee maintains a list of areas in most need of help: https://governance.openstack.org/tc/reference/help-most-needed.html Depending on your interest, you could pick one of those projects and reach out to the mentioned contact points. For more general information on how to contribute, you can check out our contribution portal: https://www.openstack.org/community/ -- Thierry Carrez (ttx) From cdent+os at anticdent.org Mon Aug 20 09:02:55 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 20 Aug 2018 10:02:55 +0100 (BST) Subject: [openstack-dev] [all] [tc] Who is responsible for the mission and scope of OpenStack? Message-ID: TC-members and everyone else, In the discussion on a draft technical vision for OpenStack at https://review.openstack.org/#/c/592205/ there is a question about whether the TC is "responsible for the mission and scope of OpenStack". As the discussion there indicates, there is plenty of nuance, but underlying it is a pretty fundamental question that seems important to answer as we're going into yet another TC election period. 
I've always assumed it was the case: the TC is an elected representative body of the so-called active technical contributors to OpenStack. So while the TC is not responsible for creating the mission from whole cloth, they are responsible for representing the goals of the people who elected them and thus for refining, documenting and caring for the mission and scope while working with all the other people invested in the community. Does anyone disagree? If so, who is responsible if not the TC? -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From thierry at openstack.org Mon Aug 20 09:33:30 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 20 Aug 2018 11:33:30 +0200 Subject: [openstack-dev] [all] [tc] Who is responsible for the mission and scope of OpenStack? In-Reply-To: References: Message-ID: Chris Dent wrote: > In the discussion on a draft technical vision for OpenStack at > >     https://review.openstack.org/#/c/592205/ > > there is a question about whether the TC is "responsible for the > mission and scope of OpenStack". > > As the discussion there indicates, there is plenty of nuance, but > underlying it is a pretty fundamental question that seems important > to answer as we're going into yet another TC election period. > > I've always assumed it was the case: the TC is an elected > representative body of the so-called active technical contributors > to OpenStack. So while the TC is not responsible for creating the > mission from whole cloth, they are responsible for representing the > goals of the people who elected them and thus for refining, > documenting and caring for the mission and scope while working with > all the other people invested in the community. > > Does anyone disagree? If so, who is responsible if not the TC? A few indications from the bylaws: The TC manages "technical matters relating to the OpenStack Project". 
The "OpenStack project" is defined as "the released projects to enable cloud computing and the associated library projects, gating projects, and supporting projects". The TC cooperates with the board to apply the OpenStack trademark: it approves components for inclusion in trademark programs, and the board decides whether to apply it to those or not. In that sense, the TC is in charge of the "scope" of "OpenStack", since it decides which components may be added to things that bear the "OpenStack" name. But ultimately the board is in charge of the trademark, and may choose not to apply it even to things we deem in scope. As far as the mission goes, the bylaws just indicate that the OpenStack project is an "open source cloud computing project". Beyond that, we have an official OpenStack mission statement. In the past, we ruled that this statement was co-owned by the Board and the TC, and that changes to it would need to pass *both* bodies. So I think the answer is... The Board and the TC co-own the mission and the scope of OpenStack. The TC is in charge of the technical side, especially when it comes to implementation. The Board is in charge of the trademark side (it ultimately owns what can be called "OpenStack"). -- Thierry Carrez (ttx) From tobias.urdin at binero.se Mon Aug 20 09:36:36 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 20 Aug 2018 11:36:36 +0200 Subject: [openstack-dev] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?) Message-ID: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> Hello, Note: before reading, this router was a regular router but was then disabled, changed to ha=true so it's now an L3 HA router, then it was enabled again. CC openstack-dev for help or feedback if it's a possible bug. I've been testing around with IPv6 and overall the experience has been positive but I've run into a weird issue that I cannot get my head around.
So this is a neutron L3 router with an outside interface with an IPv4 and an IPv6 address from the provider network and one inside interface for ipv4 and one inside interface for ipv6. The instances for some reason get their default gateway as the IPv6 link-local address (in fe80::/10) from the router with SLAAC and radvd. (1111.2222 is provider network, 1111.4444 is inside network, they are masked so don't pay attention to the number per se) *interfaces inside router:* 15: ha-9bde1bb1-bd: mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000     link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff     inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-9bde1bb1-bd        valid_lft forever preferred_lft forever     inet 169.254.0.1/24 scope global ha-9bde1bb1-bd        valid_lft forever preferred_lft forever     inet6 fe80::f816:3eff:fe05:8032/64 scope link        valid_lft forever preferred_lft forever 19: qg-86e465f6-33: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000     link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff     inet 1.2.3.4/22 scope global qg-86e465f6-33        valid_lft forever preferred_lft forever     inet6 1111:2222::f/64 scope global nodad        valid_lft forever preferred_lft forever     inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad        valid_lft forever preferred_lft forever 1168: qr-5be04815-68: mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000     link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff     inet 192.168.99.1/24 scope global qr-5be04815-68        valid_lft forever preferred_lft forever     inet6 fe80::f816:3eff:fec3:85bd/64 scope link        valid_lft forever preferred_lft forever 1169: qr-7fad6b1b-c9: mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000     link/ether fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff     inet6 1111:4444:0:1::1/64 scope global nodad        valid_lft forever preferred_lft forever     inet6 fe80::f816:3eff:fe66:dea8/64 scope link        valid_lft forever preferred_lft
forever I get these error messages in dmesg on the network node: [581085.858869] IPv6: qr-5be04815-68: IPv6 duplicate address 1111:4444:0:1:f816:3eff:fec3:85bd detected! [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address 1111:4444:0:1:f816:3eff:fe66:dea8 detected! [581142.869939] IPv6: qr-5be04815-68: IPv6 duplicate address 1111:4444:0:1:f816:3eff:fec3:85bd detected! [581143.182371] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address 1111:4444:0:1:f816:3eff:fe66:dea8 detected! *radvd:* interface qr-7fad6b1b-c9 {    AdvSendAdvert on;    MinRtrAdvInterval 30;    MaxRtrAdvInterval 100;    AdvLinkMTU 1450;    RDNSS  2001:4860:4860::8888  {};    prefix 1111:4444:0:1::/64    {         AdvOnLink on;         AdvAutonomous on;    }; }; *inside instance:* ipv4 = 192.168.199.7 ipv6 = 1111:4444:0:1:f816:3eff:fe29:723d/64 (from radvd SLAAC) I can ping ipv4 gateway 192.168.199.1 and internet over ipv4. I can ping ipv6 gateway 1111:4444:0:1::1 but I can't ping the internet. Checking the ipv6 routing table on my instance, I either get no default gateway at all or I get a default gateway to a fe80::/10 link-local address. IIRC this worked before I changed the router to an L3 HA router. Appreciate any feedback! Best regards Tobias -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Mon Aug 20 09:37:57 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 20 Aug 2018 11:37:57 +0200 Subject: [openstack-dev] [neutron] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?) In-Reply-To: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> References: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> Message-ID: Forgot [neutron] tag. On 08/20/2018 11:36 AM, Tobias Urdin wrote: > Hello, > > Note: before reading, this router was a regular router but was then > disable, changed ha=true so it's now a L3 HA router, then it was > enabled again. > CC openstack-dev for help or feedback if it's a possible bug.
> > I've been testing around with IPv6 and overall the experience has been > positive but I've met some weird issue that I cannot put my head around. > So this is a neutron L3 router with an outside interface with a ipv4 > and ipv6 from the provider network and one inside interface for ipv4 > and one inside interface for ipv6. > > The instances for some reason get's there default gateway as the ipv6 > link-local (in fe80::/10) from the router with SLAAC and radvd. > > (1111.2222 is provider network, 1111.4444 is inside network, they are > masked so don't pay attention to the number per se) > > *interfaces inside router:* > 15: ha-9bde1bb1-bd: mtu 1450 qdisc > noqueue state UNKNOWN group default qlen 1000 >     link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff >     inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-9bde1bb1-bd >        valid_lft forever preferred_lft forever >     inet 169.254.0.1/24 scope global ha-9bde1bb1-bd >        valid_lft forever preferred_lft forever >     inet6 fe80::f816:3eff:fe05:8032/64 scope link >        valid_lft forever preferred_lft forever > 19: qg-86e465f6-33: mtu 1500 qdisc > noqueue state UNKNOWN group default qlen 1000 >     link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff >     inet 1.2.3.4/22 scope global qg-86e465f6-33 >        valid_lft forever preferred_lft forever >     inet6 1111:2222::f/64 scope global nodad >        valid_lft forever preferred_lft forever >     inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad >        valid_lft forever preferred_lft forever > 1168: qr-5be04815-68: mtu 1450 qdisc > noqueue state UNKNOWN group default qlen 1000 >     link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff >     inet 192.168.99.1/24 scope global qr-5be04815-68 >        valid_lft forever preferred_lft forever >     inet6 fe80::f816:3eff:fec3:85bd/64 scope link >        valid_lft forever preferred_lft forever > 1169: qr-7fad6b1b-c9: mtu 1450 qdisc > noqueue state UNKNOWN group default qlen 1000 >     link/ether 
fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff >     inet6 1111:4444:0:1::1/64 scope global nodad >        valid_lft forever preferred_lft forever >     inet6 fe80::f816:3eff:fe66:dea8/64 scope link >        valid_lft forever preferred_lft forever > > I get this error messages in dmesg on the network node: > [581085.858869] IPv6: qr-5be04815-68: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fec3:85bd detected! > [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fe66:dea8 detected! > [581142.869939] IPv6: qr-5be04815-68: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fec3:85bd detected! > [581143.182371] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fe66:dea8 detected! > > *radvd:* > interface qr-7fad6b1b-c9 > { >    AdvSendAdvert on; >    MinRtrAdvInterval 30; >    MaxRtrAdvInterval 100; > >    AdvLinkMTU 1450; > >    RDNSS  2001:4860:4860::8888  {}; > >    prefix 1111:4444:0:1::/64 >    { >         AdvOnLink on; >         AdvAutonomous on; >    }; > }; > > *inside instance:* > ipv4 = 192.168.199.7 > ipv6 = 1111:4444:0:1:f816:3eff:fe29:723d/64 (from radvd SLAAC) > > I can ping ipv4 gateway 192.168.199.1 and internet over ipv4. > I can ping ipv6 gateway 1111:4444:0:1::1 but I can't ping the internet > > checking the ipv6 routing table on my instance I either get no default > gateway at all or I get a default gateway to a fe80::/10 link-local > address. > IIRC this worked before I changed the router to a L3 HA router. > > Appreciate any feedback! > > Best regards > Tobias -------------- next part -------------- An HTML attachment was scrubbed... URL: From nakamura.tetsuro at lab.ntt.co.jp Mon Aug 20 09:39:23 2018 From: nakamura.tetsuro at lab.ntt.co.jp (TETSURO NAKAMURA) Date: Mon, 20 Aug 2018 18:39:23 +0900 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: <9a8cf047-2b40-6a39-876c-4b78ce761e97@lab.ntt.co.jp> >> So my hope is that (in no particular order) Jay Pipes, Eric Fried, >> Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, >> Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to >> placement whom I'm forgetting [1] would express their preference on >> what they'd like to see happen. +1 on extract. 1) Since we are open source, we should keep thinking about getting new developers. Keeping functions in one big project is not a good strategy to get new participants. 2) Letting projects get smaller sounds like a good strategy to get more core reviewers. Being a core is a strong reason for one to spend more time on OpenStack in a company... at least in the company I work for. -- Tetsuro Nakamura NTT Network Service Systems Laboratories From thierry at openstack.org Mon Aug 20 09:44:20 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 20 Aug 2018 11:44:20 +0200 Subject: [openstack-dev] [ptg] Register now ! Price increases Wednesday Message-ID: Hi everyone, If you haven't registered for the PTG in Denver yet, I'd recommend you do it today or tomorrow as the price will switch to last-minute pricing at the end of day on August 22! https://www.openstack.org/ptg Protip: There might still be a couple of rooms available in the PTG hotel, but our hotel block closes TODAY. So book now if you want to be at the center of the activity! -- Thierry Carrez (ttx) From tobias.urdin at binero.se Mon Aug 20 09:50:44 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 20 Aug 2018 11:50:44 +0200 Subject: [openstack-dev] [neutron] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?)
In-Reply-To: References: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> Message-ID: <4e1f27a4-cd70-4ad4-5249-20b18e1dab76@binero.se> Ok, so the issue here seems to be that I have an L3 HA router with SLAAC: both the active and standby routers will configure the SLAAC-obtained address, causing a conflict since both sides share the same MAC address. Is there any workaround for this? Should SLAAC even be enabled for interfaces on the standby router? Best regards Tobias On 08/20/2018 11:37 AM, Tobias Urdin wrote: > Forgot [neutron] tag. > > On 08/20/2018 11:36 AM, Tobias Urdin wrote: >> Hello, >> >> Note: before reading, this router was a regular router but was then >> disable, changed ha=true so it's now a L3 HA router, then it was >> enabled again. >> CC openstack-dev for help or feedback if it's a possible bug. >> >> I've been testing around with IPv6 and overall the experience has >> been positive but I've met some weird issue that I cannot put my head >> around. >> So this is a neutron L3 router with an outside interface with a ipv4 >> and ipv6 from the provider network and one inside interface for ipv4 >> and one inside interface for ipv6. >> >> The instances for some reason get's there default gateway as the ipv6 >> link-local (in fe80::/10) from the router with SLAAC and radvd.
>> >> (1111.2222 is provider network, 1111.4444 is inside network, they are >> masked so don't pay attention to the number per se) >> >> *interfaces inside router:* >> 15: ha-9bde1bb1-bd: mtu 1450 qdisc >> noqueue state UNKNOWN group default qlen 1000 >>     link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff >>     inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-9bde1bb1-bd >>        valid_lft forever preferred_lft forever >>     inet 169.254.0.1/24 scope global ha-9bde1bb1-bd >>        valid_lft forever preferred_lft forever >>     inet6 fe80::f816:3eff:fe05:8032/64 scope link >>        valid_lft forever preferred_lft forever >> 19: qg-86e465f6-33: mtu 1500 qdisc >> noqueue state UNKNOWN group default qlen 1000 >>     link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff >>     inet 1.2.3.4/22 scope global qg-86e465f6-33 >>        valid_lft forever preferred_lft forever >>     inet6 1111:2222::f/64 scope global nodad >>        valid_lft forever preferred_lft forever >>     inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad >>        valid_lft forever preferred_lft forever >> 1168: qr-5be04815-68: mtu 1450 >> qdisc noqueue state UNKNOWN group default qlen 1000 >>     link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff >>     inet 192.168.99.1/24 scope global qr-5be04815-68 >>        valid_lft forever preferred_lft forever >>     inet6 fe80::f816:3eff:fec3:85bd/64 scope link >>        valid_lft forever preferred_lft forever >> 1169: qr-7fad6b1b-c9: mtu 1450 >> qdisc noqueue state UNKNOWN group default qlen 1000 >>     link/ether fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff >>     inet6 1111:4444:0:1::1/64 scope global nodad >>        valid_lft forever preferred_lft forever >>     inet6 fe80::f816:3eff:fe66:dea8/64 scope link >>        valid_lft forever preferred_lft forever >> >> I get this error messages in dmesg on the network node: >> [581085.858869] IPv6: qr-5be04815-68: IPv6 duplicate address >> 1111:4444:0:1:f816:3eff:fec3:85bd detected! 
>> [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address >> 1111:4444:0:1:f816:3eff:fe66:dea8 detected! >> [581142.869939] IPv6: qr-5be04815-68: IPv6 duplicate address >> 1111:4444:0:1:f816:3eff:fec3:85bd detected! >> [581143.182371] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address >> 1111:4444:0:1:f816:3eff:fe66:dea8 detected! >> >> *radvd:* >> interface qr-7fad6b1b-c9 >> { >>    AdvSendAdvert on; >>    MinRtrAdvInterval 30; >>    MaxRtrAdvInterval 100; >> >>    AdvLinkMTU 1450; >> >>    RDNSS  2001:4860:4860::8888  {}; >> >>    prefix 1111:4444:0:1::/64 >>    { >>         AdvOnLink on; >>         AdvAutonomous on; >>    }; >> }; >> >> *inside instance:* >> ipv4 = 192.168.199.7 >> ipv6 = 1111:4444:0:1:f816:3eff:fe29:723d/64 (from radvd SLAAC) >> >> I can ping ipv4 gateway 192.168.199.1 and internet over ipv4. >> I can ping ipv6 gateway 1111:4444:0:1::1 but I can't ping the internet >> >> checking the ipv6 routing table on my instance I either get no >> default gateway at all or I get a default gateway to a fe80::/10 >> link-local address. >> IIRC this worked before I changed the router to a L3 HA router. >> >> Appreciate any feedback! >> >> Best regards >> Tobias > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Mon Aug 20 09:58:07 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 20 Aug 2018 11:58:07 +0200 Subject: [openstack-dev] [neutron] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?) In-Reply-To: <4e1f27a4-cd70-4ad4-5249-20b18e1dab76@binero.se> References: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> <4e1f27a4-cd70-4ad4-5249-20b18e1dab76@binero.se> Message-ID: Continuing forward, these patches should've fixed that https://review.openstack.org/#/q/topic:bug/1667756+(status:open+OR+status:merged) I'm on Queens. 
The two inside interfaces on the backup router: [root at controller2 ~]# ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat /proc/sys/net/ipv6/conf/qr-7fad6b1b-c9/accept_ra 1 [root at controller2 ~]# ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat /proc/sys/net/ipv6/conf/qr-5be04815-68/accept_ra 1 Perhaps the accept_ra patches do not apply when enabling/disabling routers or when changing a normal router to an L3 HA router? Best regards On 08/20/2018 11:50 AM, Tobias Urdin wrote: > Ok, so the issue here seems to be that I have a L3 HA router with > SLAAC, both the active and standby router will > configure the SLAAC obtained address causing a conflict since both > side share the same MAC address. > > Is there any workaround for this? Should SLAAC even be enabled for > interfaces on the standby router? > > Best regards > Tobias > > On 08/20/2018 11:37 AM, Tobias Urdin wrote: >> Forgot [neutron] tag. >> >> On 08/20/2018 11:36 AM, Tobias Urdin wrote: >>> Hello, >>> >>> Note: before reading, this router was a regular router but was then >>> disable, changed ha=true so it's now a L3 HA router, then it was >>> enabled again. >>> CC openstack-dev for help or feedback if it's a possible bug. >>> >>> I've been testing around with IPv6 and overall the experience has >>> been positive but I've met some weird issue that I cannot put my >>> head around. >>> So this is a neutron L3 router with an outside interface with a >>> ipv4 and ipv6 from the provider network and one inside interface >>> for ipv4 and one inside interface for ipv6. >>> >>> The instances for some reason get's there default gateway as the >>> ipv6 link-local (in fe80::/10) from the router with SLAAC and radvd.
>>> >>> (1111.2222 is provider network, 1111.4444 is inside network, they >>> are masked so don't pay attention to the number per se) >>> >>> *interfaces inside router:* >>> 15: ha-9bde1bb1-bd: mtu 1450 qdisc >>> noqueue state UNKNOWN group default qlen 1000 >>>     link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff >>>     inet 169.254.192.7/18 brd 169.254.255.255 scope global >>> ha-9bde1bb1-bd >>>        valid_lft forever preferred_lft forever >>>     inet 169.254.0.1/24 scope global ha-9bde1bb1-bd >>>        valid_lft forever preferred_lft forever >>>     inet6 fe80::f816:3eff:fe05:8032/64 scope link >>>        valid_lft forever preferred_lft forever >>> 19: qg-86e465f6-33: mtu 1500 qdisc >>> noqueue state UNKNOWN group default qlen 1000 >>>     link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff >>>     inet 1.2.3.4/22 scope global qg-86e465f6-33 >>>        valid_lft forever preferred_lft forever >>>     inet6 1111:2222::f/64 scope global nodad >>>        valid_lft forever preferred_lft forever >>>     inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad >>>        valid_lft forever preferred_lft forever >>> 1168: qr-5be04815-68: mtu 1450 >>> qdisc noqueue state UNKNOWN group default qlen 1000 >>>     link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff >>>     inet 192.168.99.1/24 scope global qr-5be04815-68 >>>        valid_lft forever preferred_lft forever >>>     inet6 fe80::f816:3eff:fec3:85bd/64 scope link >>>        valid_lft forever preferred_lft forever >>> 1169: qr-7fad6b1b-c9: mtu 1450 >>> qdisc noqueue state UNKNOWN group default qlen 1000 >>>     link/ether fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff >>>     inet6 1111:4444:0:1::1/64 scope global nodad >>>        valid_lft forever preferred_lft forever >>>     inet6 fe80::f816:3eff:fe66:dea8/64 scope link >>>        valid_lft forever preferred_lft forever >>> >>> I get this error messages in dmesg on the network node: >>> [581085.858869] IPv6: qr-5be04815-68: IPv6 duplicate address >>> 
1111:4444:0:1:f816:3eff:fec3:85bd detected! >>> [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address >>> 1111:4444:0:1:f816:3eff:fe66:dea8 detected! >>> [581142.869939] IPv6: qr-5be04815-68: IPv6 duplicate address >>> 1111:4444:0:1:f816:3eff:fec3:85bd detected! >>> [581143.182371] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address >>> 1111:4444:0:1:f816:3eff:fe66:dea8 detected! >>> >>> *radvd:* >>> interface qr-7fad6b1b-c9 >>> { >>>    AdvSendAdvert on; >>>    MinRtrAdvInterval 30; >>>    MaxRtrAdvInterval 100; >>> >>>    AdvLinkMTU 1450; >>> >>>    RDNSS  2001:4860:4860::8888  {}; >>> >>>    prefix 1111:4444:0:1::/64 >>>    { >>>         AdvOnLink on; >>>         AdvAutonomous on; >>>    }; >>> }; >>> >>> *inside instance:* >>> ipv4 = 192.168.199.7 >>> ipv6 = 1111:4444:0:1:f816:3eff:fe29:723d/64 (from radvd SLAAC) >>> >>> I can ping ipv4 gateway 192.168.199.1 and internet over ipv4. >>> I can ping ipv6 gateway 1111:4444:0:1::1 but I can't ping the internet >>> >>> checking the ipv6 routing table on my instance I either get no >>> default gateway at all or I get a default gateway to a fe80::/10 >>> link-local address. >>> IIRC this worked before I changed the router to a L3 HA router. >>> >>> Appreciate any feedback! >>> >>> Best regards >>> Tobias >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail2hemanth.n at gmail.com Mon Aug 20 09:58:16 2018 From: mail2hemanth.n at gmail.com (Hemanth N) Date: Mon, 20 Aug 2018 15:28:16 +0530 Subject: [openstack-dev] [neutron][neutron-classifier] info on common classifier Message-ID: Hi I am looking for information on Neutron Common Classifier. What is the implementation done today (Is API extension implemented?) and further plans for the same. 
I could not get much information except for the specification and github https://specs.openstack.org/openstack/neutron-specs/specs/pike/common-classification-framework.html https://github.com/openstack/neutron-classifier Also, could you provide any references to early adopters of neutron-common-classifier, if any? Regards, Hemanth From tobias.urdin at binero.se Mon Aug 20 10:06:26 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 20 Aug 2018 12:06:26 +0200 Subject: [openstack-dev] [neutron] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?) In-Reply-To: References: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> <4e1f27a4-cd70-4ad4-5249-20b18e1dab76@binero.se> Message-ID: <02ac47b0-e96a-7916-3275-665b10d76d1d@binero.se> When I removed those ips and set accept_ra to 0 on the backup router: ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w net.ipv6.conf.qr-7fad6b1b-c9.accept_ra=0 ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w net.ipv6.conf.qr-5be04815-68.accept_ra=0 ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip a l ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip addr del 1111:4444:0:1:f816:3eff:fe66:dea8/64 dev qr-7fad6b1b-c9 ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip addr del 1111:4444:0:1:f816:3eff:fec3:85bd/64 dev qr-5be04815-68 And enabled ipv6 forwarding on the active router: ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w net.ipv6.conf.all.forwarding=1 It started working again. I think this is an issue that occurs when a router is disabled, changed to L3 HA and enabled again, so possibly a bug? Best regards Tobias On 08/20/2018 11:58 AM, Tobias Urdin wrote: > Continuing forward, these patches should've fixed that > https://review.openstack.org/#/q/topic:bug/1667756+(status:open+OR+status:merged) > I'm on Queens.
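The conflict behind the addresses deleted above is mechanical: in an L3 HA router the master and backup namespaces share the qr- interface MAC, and SLAAC derives the interface identifier deterministically from that MAC (modified EUI-64, RFC 4291 Appendix A), so both routers compute the identical global address and one side's duplicate address detection must fail. A minimal illustrative sketch (the helper name is mine, not from the thread) reproduces the identifiers involved:

```python
def eui64_interface_id(mac: str) -> str:
    """Modified EUI-64 interface identifier (RFC 4291, Appendix A).

    SLAAC builds the host part of the address by inserting ff:fe between
    the two halves of the MAC and flipping the universal/local bit, so
    two interfaces sharing a MAC always derive the same address.
    """
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return ":".join("%x" % (hi << 8 | lo) for hi, lo in zip(eui[::2], eui[1::2]))

# The qr- interface MACs from the dumps quoted in this thread; prefixed
# with 1111:4444:0:1: these are exactly the two addresses deleted above
# and the ones failing DAD in dmesg.
print(eui64_interface_id("fa:16:3e:66:de:a8"))  # -> f816:3eff:fe66:dea8
print(eui64_interface_id("fa:16:3e:c3:85:bd"))  # -> f816:3eff:fec3:85bd
```

The same derivation applied to the instance MAC yields the 1111:4444:0:1:f816:3eff:fe29:723d address reported from inside the guest.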
> > The two inside interfaces on the backup router: > [root at controller2 ~]# ip netns exec > qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat > /proc/sys/net/ipv6/conf/qr-7fad6b1b-c9/accept_ra > 1 > [root at controller2 ~]# ip netns exec > qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat > /proc/sys/net/ipv6/conf/qr-5be04815-68/accept_ra > 1 > > Perhaps the accept_ra patches do not apply for enable/disable or > routers changing from a normal router to a L3 HA router? > Best regards > > On 08/20/2018 11:50 AM, Tobias Urdin wrote: >> Ok, so the issue here seems to be that I have a L3 HA router with >> SLAAC, both the active and standby router will >> configure the SLAAC-obtained address causing a conflict since both >> sides share the same MAC address. >> >> Is there any workaround for this? Should SLAAC even be enabled for >> interfaces on the standby router? >> >> Best regards >> Tobias >> >> On 08/20/2018 11:37 AM, Tobias Urdin wrote: >>> Forgot [neutron] tag. >>> >>> On 08/20/2018 11:36 AM, Tobias Urdin wrote: >>>> Hello, >>>> >>>> Note: before reading, this router was a regular router but was then >>>> disabled, changed ha=true so it's now a L3 HA router, then it was >>>> enabled again. >>>> CC openstack-dev for help or feedback if it's a possible bug. >>>> >>>> I've been testing around with IPv6 and overall the experience has >>>> been positive but I've hit a weird issue that I cannot get my >>>> head around. >>>> So this is a neutron L3 router with an outside interface with an >>>> ipv4 and ipv6 from the provider network and one inside interface >>>> for ipv4 and one inside interface for ipv6. >>>> >>>> The instances for some reason get their default gateway as the >>>> ipv6 link-local (in fe80::/10) from the router with SLAAC and radvd.
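Incidentally, the `ip a` listings quoted in this thread show the two address classes directly: addresses Neutron configures itself carry the `nodad` flag, while SLAAC-learned ones (the ones failing DAD in dmesg) do not. A rough, hypothetical filter over such output — not part of Neutron, and assuming that flag convention holds — could pull out the global addresses that will still perform duplicate address detection:

```python
import re

def dad_candidates(ip_output: str) -> list:
    """Global IPv6 addresses in `ip -6 addr` output not flagged nodad.

    Assumption (from the dumps in this thread): Neutron installs its own
    addresses with nodad, so any global address *without* the flag was
    learned another way (here: SLAAC) and will still perform DAD.
    """
    pattern = re.compile(r"inet6 ([0-9a-f:]+)/\d+ scope global(?!.* nodad)")
    return [m.group(1) for m in pattern.finditer(ip_output)]

sample = """\
    inet6 1111:4444:0:1::1/64 scope global nodad
    inet6 1111:4444:0:1:f816:3eff:fe66:dea8/64 scope global
    inet6 fe80::f816:3eff:fe66:dea8/64 scope link
"""
print(dad_candidates(sample))  # -> ['1111:4444:0:1:f816:3eff:fe66:dea8']
```

Run inside both the master and backup qrouter namespaces, an empty result on the backup would indicate the SLAAC addresses causing the conflict are gone.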
>>>> >>>> (1111.2222 is provider network, 1111.4444 is inside network, they >>>> are masked so don't pay attention to the number per se) >>>> >>>> *interfaces inside router:* >>>> 15: ha-9bde1bb1-bd: mtu 1450 >>>> qdisc noqueue state UNKNOWN group default qlen 1000 >>>>     link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff >>>>     inet 169.254.192.7/18 brd 169.254.255.255 scope global >>>> ha-9bde1bb1-bd >>>>        valid_lft forever preferred_lft forever >>>>     inet 169.254.0.1/24 scope global ha-9bde1bb1-bd >>>>        valid_lft forever preferred_lft forever >>>>     inet6 fe80::f816:3eff:fe05:8032/64 scope link >>>>        valid_lft forever preferred_lft forever >>>> 19: qg-86e465f6-33: mtu 1500 >>>> qdisc noqueue state UNKNOWN group default qlen 1000 >>>>     link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff >>>>     inet 1.2.3.4/22 scope global qg-86e465f6-33 >>>>        valid_lft forever preferred_lft forever >>>>     inet6 1111:2222::f/64 scope global nodad >>>>        valid_lft forever preferred_lft forever >>>>     inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad >>>>        valid_lft forever preferred_lft forever >>>> 1168: qr-5be04815-68: mtu 1450 >>>> qdisc noqueue state UNKNOWN group default qlen 1000 >>>>     link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff >>>>     inet 192.168.99.1/24 scope global qr-5be04815-68 >>>>        valid_lft forever preferred_lft forever >>>>     inet6 fe80::f816:3eff:fec3:85bd/64 scope link >>>>        valid_lft forever preferred_lft forever >>>> 1169: qr-7fad6b1b-c9: mtu 1450 >>>> qdisc noqueue state UNKNOWN group default qlen 1000 >>>>     link/ether fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff >>>>     inet6 1111:4444:0:1::1/64 scope global nodad >>>>        valid_lft forever preferred_lft forever >>>>     inet6 fe80::f816:3eff:fe66:dea8/64 scope link >>>>        valid_lft forever preferred_lft forever >>>> >>>> I get this error messages in dmesg on the network node: >>>> [581085.858869] IPv6: qr-5be04815-68: IPv6 
duplicate address >>>> 1111:4444:0:1:f816:3eff:fec3:85bd detected! >>>> [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address >>>> 1111:4444:0:1:f816:3eff:fe66:dea8 detected! >>>> [581142.869939] IPv6: qr-5be04815-68: IPv6 duplicate address >>>> 1111:4444:0:1:f816:3eff:fec3:85bd detected! >>>> [581143.182371] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address >>>> 1111:4444:0:1:f816:3eff:fe66:dea8 detected! >>>> >>>> *radvd:* >>>> interface qr-7fad6b1b-c9 >>>> { >>>>    AdvSendAdvert on; >>>>    MinRtrAdvInterval 30; >>>>    MaxRtrAdvInterval 100; >>>> >>>>    AdvLinkMTU 1450; >>>> >>>>    RDNSS  2001:4860:4860::8888  {}; >>>> >>>>    prefix 1111:4444:0:1::/64 >>>>    { >>>>         AdvOnLink on; >>>>         AdvAutonomous on; >>>>    }; >>>> }; >>>> >>>> *inside instance:* >>>> ipv4 = 192.168.199.7 >>>> ipv6 = 1111:4444:0:1:f816:3eff:fe29:723d/64 (from radvd SLAAC) >>>> >>>> I can ping ipv4 gateway 192.168.199.1 and internet over ipv4. >>>> I can ping ipv6 gateway 1111:4444:0:1::1 but I can't ping the internet >>>> >>>> checking the ipv6 routing table on my instance I either get no >>>> default gateway at all or I get a default gateway to a fe80::/10 >>>> link-local address. >>>> IIRC this worked before I changed the router to a L3 HA router. >>>> >>>> Appreciate any feedback! >>>> >>>> Best regards >>>> Tobias >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.shaughnessy at intel.com Mon Aug 20 10:37:32 2018 From: david.shaughnessy at intel.com (Shaughnessy, David) Date: Mon, 20 Aug 2018 10:37:32 +0000 Subject: [openstack-dev] [neutron][neutron-classifier] info on common classifier In-Reply-To: References: Message-ID: <70F4AA2FC5B65149B129716D2C9441D45EE67B20@IRSMSX102.ger.corp.intel.com> Hi Hemanth The API is just waiting for its last review and will hopefully merge after that. 
https://review.openstack.org/#/c/487182/25 We wanted to ensure that the project was thoroughly tested and setting up the project to run these tests took some time. The patch to follow this one is an extension to the OpenStack client, which will provide the CLI to interact with this plugin. Regards. David. -----Original Message----- From: Hemanth N [mailto:mail2hemanth.n at gmail.com] Sent: Monday, August 20, 2018 10:58 AM To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [neutron][neutron-classifier] info on common classifier Hi I am looking for information on Neutron Common Classifier. What is the implementation done today (Is API extension implemented?) and further plans for the same. I could not get much information except for the specification and github https://specs.openstack.org/openstack/neutron-specs/specs/pike/common-classification-framework.html https://github.com/openstack/neutron-classifier Also could you provide any references to early adopters of neutron-common-classifier, if any. Regards, Hemanth __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ivolinengong at gmail.com Mon Aug 20 11:07:28 2018 From: ivolinengong at gmail.com (Ivoline Ngong) Date: Mon, 20 Aug 2018 14:07:28 +0300 Subject: [openstack-dev] New Contributor In-Reply-To: <49b15368-a67b-dfcb-0501-6b527a42c71c@openstack.org> References: <49b15368-a67b-dfcb-0501-6b527a42c71c@openstack.org> Message-ID: Thanks so much for the help, Josh and Thierry. I'll check out the links and hopefully find a way forward from there. Will get back here in case I have any questions. Cheers, Ivoline On Mon, Aug 20, 2018, 12:01 Thierry Carrez wrote: > Ivoline Ngong wrote: > > I am Ivoline Ngong. I am a Cameroonian who lives in Turkey.
I will love > > to contribute to Open source through OpenStack. I code in Java and > > Python and I think OpenStack is a good fit for me. > > I'll appreciate it if you can point me to the right direction on how I > > can get started. > > Hi Ivoline, > > Welcome to the OpenStack community ! > > The OpenStack Technical Committee maintains a list of areas in most need > of help: > > https://governance.openstack.org/tc/reference/help-most-needed.html > > Depending on your interest, you could pick one of those projects and > reach out to the mentioned contact points. > > For more general information on how to contribute, you can check out our > contribution portal: > > https://www.openstack.org/community/ > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From assaf at redhat.com Mon Aug 20 11:49:36 2018 From: assaf at redhat.com (Assaf Muller) Date: Mon, 20 Aug 2018 07:49:36 -0400 Subject: [openstack-dev] [neutron] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?) 
In-Reply-To: <02ac47b0-e96a-7916-3275-665b10d76d1d@binero.se> References: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> <4e1f27a4-cd70-4ad4-5249-20b18e1dab76@binero.se> <02ac47b0-e96a-7916-3275-665b10d76d1d@binero.se> Message-ID: On Mon, Aug 20, 2018 at 6:06 AM, Tobias Urdin wrote: > When I removed those ips and set accept_ra to 0 on the backup router: > > ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w > net.ipv6.conf.qr-7fad6b1b-c9.accept_ra=0 > ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w > net.ipv6.conf.qr-5be04815-68.accept_ra=0 > ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip a l > ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip addr del > 1111:4444:0:1:f816:3eff:fe66:dea8/64 dev qr-7fad6b1b-c9 > ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip addr del > 1111:4444:0:1:f816:3eff:fec3:85bd/64 dev qr-5be04815-68 > > And enabled ipv6 forwarding on the active router: > ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w > net.ipv6.conf.all.forwarding=1 > > It started working again, I think this is an issue when disabling a router, > change it to L3 HA and enable it again, so a bug? Quite possibly. Are you able to find a minimal reproducer? > > Best regards > Tobias > > > On 08/20/2018 11:58 AM, Tobias Urdin wrote: > > Continuing forward, these patches should've fixed that > https://review.openstack.org/#/q/topic:bug/1667756+(status:open+OR+status:merged) > I'm on Queens. > > The two inside interfaces on the backup router: > [root at controller2 ~]# ip netns exec > qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat > /proc/sys/net/ipv6/conf/qr-7fad6b1b-c9/accept_ra > 1 > [root at controller2 ~]# ip netns exec > qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat > /proc/sys/net/ipv6/conf/qr-5be04815-68/accept_ra > 1 > > Perhaps the accept_ra patches does not apply for enable/disable or routers > changing from a normal router to a L3 HA router? 
> Best regards > > On 08/20/2018 11:50 AM, Tobias Urdin wrote: > > Ok, so the issue here seems to be that I have a L3 HA router with SLAAC, > both the active and standby router will > configure the SLAAC-obtained address causing a conflict since both sides > share the same MAC address. > > Is there any workaround for this? Should SLAAC even be enabled for > interfaces on the standby router? > > Best regards > Tobias > > On 08/20/2018 11:37 AM, Tobias Urdin wrote: > > Forgot [neutron] tag. > > On 08/20/2018 11:36 AM, Tobias Urdin wrote: > > Hello, > > Note: before reading, this router was a regular router but was then disabled, > changed ha=true so it's now a L3 HA router, then it was enabled again. > CC openstack-dev for help or feedback if it's a possible bug. > > I've been testing around with IPv6 and overall the experience has been > positive but I've hit a weird issue that I cannot get my head around. > So this is a neutron L3 router with an outside interface with an ipv4 and > ipv6 from the provider network and one inside interface for ipv4 and one > inside interface for ipv6. > > The instances for some reason get their default gateway as the ipv6 > link-local (in fe80::/10) from the router with SLAAC and radvd.
> > (1111.2222 is provider network, 1111.4444 is inside network, they are masked > so don't pay attention to the number per se) > > interfaces inside router: > 15: ha-9bde1bb1-bd: mtu 1450 qdisc noqueue > state UNKNOWN group default qlen 1000 > link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff > inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-9bde1bb1-bd > valid_lft forever preferred_lft forever > inet 169.254.0.1/24 scope global ha-9bde1bb1-bd > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe05:8032/64 scope link > valid_lft forever preferred_lft forever > 19: qg-86e465f6-33: mtu 1500 qdisc noqueue > state UNKNOWN group default qlen 1000 > link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff > inet 1.2.3.4/22 scope global qg-86e465f6-33 > valid_lft forever preferred_lft forever > inet6 1111:2222::f/64 scope global nodad > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad > valid_lft forever preferred_lft forever > 1168: qr-5be04815-68: mtu 1450 qdisc > noqueue state UNKNOWN group default qlen 1000 > link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff > inet 192.168.99.1/24 scope global qr-5be04815-68 > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fec3:85bd/64 scope link > valid_lft forever preferred_lft forever > 1169: qr-7fad6b1b-c9: mtu 1450 qdisc > noqueue state UNKNOWN group default qlen 1000 > link/ether fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff > inet6 1111:4444:0:1::1/64 scope global nodad > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe66:dea8/64 scope link > valid_lft forever preferred_lft forever > > I get this error messages in dmesg on the network node: > [581085.858869] IPv6: qr-5be04815-68: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fec3:85bd detected! > [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fe66:dea8 detected! 
> [581142.869939] IPv6: qr-5be04815-68: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fec3:85bd detected! > [581143.182371] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fe66:dea8 detected! > > radvd: > interface qr-7fad6b1b-c9 > { > AdvSendAdvert on; > MinRtrAdvInterval 30; > MaxRtrAdvInterval 100; > > AdvLinkMTU 1450; > > RDNSS 2001:4860:4860::8888 {}; > > prefix 1111:4444:0:1::/64 > { > AdvOnLink on; > AdvAutonomous on; > }; > }; > > inside instance: > ipv4 = 192.168.199.7 > ipv6 = 1111:4444:0:1:f816:3eff:fe29:723d/64 (from radvd SLAAC) > > I can ping ipv4 gateway 192.168.199.1 and internet over ipv4. > I can ping ipv6 gateway 1111:4444:0:1::1 but I can't ping the internet > > checking the ipv6 routing table on my instance I either get no default > gateway at all or I get a default gateway to a fe80::/10 link-local address. > IIRC this worked before I changed the router to a L3 HA router. > > Appreciate any feedback! > > Best regards > Tobias > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jaypipes at gmail.com Mon Aug 20 11:52:46 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 20 Aug 2018 07:52:46 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: <098e105d-e96b-79c5-e1b9-2d7b27feb194@gmail.com> On 08/18/2018 08:25 AM, Chris Dent wrote: > So my hope is that (in no particular order) Jay Pipes, Eric Fried, > Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, > Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to > placement whom I'm forgetting [1] would express their preference on > what they'd like to see happen. > > At the same time, if people from neutron, cinder, blazar, zun, > mogan, ironic, and cyborg could express their preferences, we can get > through this by acclaim and get on with getting things done. I am not opposed to extracting the placement service into its own repo. I also do not view it as a priority that should take precedence over the completion of other items, including the reshaper effort and the integration of placement calls into Nova (nested providers, sharing providers, etc). The remaining items are Nova-centric. We need Nova-focused contributors to make placement more useful to Nova, and I fail to see how extracting the placement service will meet that goal. In fact, one might argue, as Melanie implies, that extracting placement outside of the Compute project would increase the velocity of the placement project *at the expense of* getting things done in the Nova project. We've shown we can get many things done in placement. We've shown we can evolve the API fairly quickly. The velocity of the placement project isn't the problem. The problem is the lag between features being written into placement (sometimes too hastily IMHO) and actually *using* those features in Nova. As for the argument about other projects being able (or being more willing to) use placement, I think that's not actually true. 
The projects that might want to ditch their own custom resource tracking and management code (Cyborg, Neutron, Cinder, Ironic) have either already done so or would require minimal changes to do that. There are no projects other than Ironic that I'm aware of that are interested in using the allocation candidates functionality (and the allocation claim process that entails) for the rough scheduling functionality that provides. I'm not sure placement being extracted would change that. Would extracting placement out into its own repo result in a couple more people being added to the new placement core contributor team? Possibly. Will that result in Nova getting the integration pieces written that make use of placement? No, I don't believe so. So, I'm on the fence. I understand the desire for separation, and I'm fully aware of my bias as a current Nova core contributor. I even support the process of extracting placement. But do I think it will do much other than provide some minor measure of independence? No, not really. Consider me +0. Best, -jay From sfinucan at redhat.com Mon Aug 20 13:12:54 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Mon, 20 Aug 2018 14:12:54 +0100 Subject: [openstack-dev] =?iso-8859-1?q?=5Boslo=5D_proposing_Mois=E9s_Guim?= =?iso-8859-1?q?ar=E3es_for_oslo=2Econfig_core?= In-Reply-To: <54361874-8077-a0a6-188f-21001b806740@nemebean.com> References: <1533129742-sup-2007@lrrr.local> <1533733971-sup-7865@lrrr.local> <54361874-8077-a0a6-188f-21001b806740@nemebean.com> Message-ID: On Mon, 2018-08-13 at 17:39 -0500, Ben Nemec wrote: > > On 08/08/2018 08:18 AM, Doug Hellmann wrote: > > Excerpts from Doug Hellmann's message of 2018-08-01 09:27:09 -0400: > > > Moisés Guimarães (moguimar) did quite a bit of work on oslo.config > > > during the Rocky cycle to add driver support. 
Based on that work, > > > and a discussion we have had since then about general cleanup needed > > > in oslo.config, I think he would make a good addition to the > > > oslo.config review team. > > > > > > Please indicate your approval or concerns with +1/-1. > > > > > > Doug > > > > Normally I would have added moguimar to the oslo-config-core team > > today, after a week's wait. Funny story, though. There is no > > oslo-config-core team. > > > > oslo.config is one of a few of our libraries that we never set up with a > > separate review team. It is managed by oslo-core. We could set up a new > > review team for that library, but after giving it some thought I > > realized that *most* of the libraries are fairly stable, our team is > > pretty small, and Moisés is a good guy so maybe we don't need to worry > > about that. > > > > I spoke with Moisés, and he agreed to be part of the larger core team. > > He pointed out that the next phase of the driver work is going to happen > > in castellan, so it would be useful to have another reviewer there. And > > I'm sure we can trust him to be careful with reviews in other repos > > until he learns his way around. > > > > So, I would like to amend my original proposal and suggest that we add > > Moisés to the oslo-core team. > > > > Please indicate support with +1 or present any concerns you have. I > > apologize for the confusion on my part. > > I'm good with this reasoning, so +1 from me. As above. +1 From dabarren at gmail.com Mon Aug 20 13:35:10 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Mon, 20 Aug 2018 15:35:10 +0200 Subject: [openstack-dev] [kolla][project navigator] kolla missing in project navigator Message-ID: Hi, while checking around the project navigator, I don't see kolla in the deployment tools section. How could we get kolla appear in the navigator along other deployment tools? Regards -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark at mcclain.xyz Mon Aug 20 13:52:59 2018 From: mark at mcclain.xyz (Mark McClain) Date: Mon, 20 Aug 2018 09:52:59 -0400 Subject: [openstack-dev] [astara] Retirement of astara repos? In-Reply-To: References: <572FF9CF-9AB5-4CBA-A4C8-26E7A012309E@gmx.com> <0DE3CB09-5CA1-4557-9158-C40F0FC37E6E@mcclain.xyz> Message-ID: <4D511F22-5D6F-43EF-BDCF-A2322103F12D@mcclain.xyz> Yeah. I’ll post the retirement commits this week. mark > On Aug 18, 2018, at 13:39, Andreas Jaeger wrote: > > Mark, shall I start the retirement of astara now? I would appreciate a "go ahead" - unless you want to do it yourself... > > Andreas > >> On 2018-02-23 14:34, Andreas Jaeger wrote: >>> On 2018-01-11 22:55, Mark McClain wrote: >>> Sean, Andreas- >>> >>> Sorry I missed Andres’ message earlier in December about retiring astara. Everyone is correct that development stopped a good while ago. We attempted in Barcelona to find others in the community to take over the day-to-day management of the project. Unfortunately, nothing sustained resulted from that session. >>> >>> I’ve intentionally delayed archiving the repos because of background conversations around restarting active development for some pieces bubble up from time-to-time. I’ll contact those I know were interested and try for a resolution to propose before the PTG. >> Mark, any update here? > > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > From jimmy at openstack.org Mon Aug 20 13:57:00 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 20 Aug 2018 08:57:00 -0500 Subject: [openstack-dev] [kolla][project navigator] kolla missing in project navigator In-Reply-To: References: Message-ID: <5B7AC8AC.7000106@openstack.org> Eduardo, Thanks for the heads up. 
We're in the process of updating the Project Navigator to match the OpenStack Map [1]. It looks like Kolla got lost in the shuffle. I've added it back to the Project [2Navigator under the Deployment Tools section [2]. Please let me know if I can assist further. Thanks, Jimmy [1] https://www.openstack.org/assets/software/projectmap/openstack-map.pdf [2] https://www.openstack.org/software/project-navigator/deployment-tools Eduardo Gonzalez wrote: > Hi, > > while checking around the project navigator, I don't see kolla in the > deployment tools section. > > How could we get kolla appear in the navigator along other deployment > tools? > > Regards > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dabarren at gmail.com Mon Aug 20 14:08:24 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Mon, 20 Aug 2018 16:08:24 +0200 Subject: [openstack-dev] [kolla][project navigator] kolla missing in project navigator In-Reply-To: <5B7AC8AC.7000106@openstack.org> References: <5B7AC8AC.7000106@openstack.org> Message-ID: Hi Jimmy, thanks for the the quick update. I see it in the navigator now. Thanks On Mon, Aug 20, 2018, 3:57 PM Jimmy McArthur wrote: > Eduardo, > > Thanks for the heads up. We're in the process of updating the Project > Navigator to match the OpenStack Map [1]. It looks like Kolla got lost > in the shuffle. I've added it back to the Project [2Navigator under the > Deployment Tools section [2]. > > Please let me know if I can assist further. 
> > Thanks, > Jimmy > > > [1] https://www.openstack.org/assets/software/projectmap/openstack-map.pdf > [2] https://www.openstack.org/software/project-navigator/deployment-tools > > Eduardo Gonzalez wrote: > > Hi, > > > > while checking around the project navigator, I don't see kolla in the > > deployment tools section. > > > > How could we get kolla appear in the navigator along other deployment > > tools? > > > > Regards > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Mon Aug 20 14:08:55 2018 From: dms at danplanet.com (Dan Smith) Date: Mon, 20 Aug 2018 07:08:55 -0700 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <098e105d-e96b-79c5-e1b9-2d7b27feb194@gmail.com> Message-ID: >> So my hope is that (in no particular order) Jay Pipes, Eric Fried, >> Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, >> Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to >> placement whom I'm forgetting [1] would express their preference on >> what they'd like to see happen. I apparently don't qualify for a vote, so I'll just reply to Jay's comments here. > I am not opposed to extracting the placement service into its own > repo. 
I also do not view it as a priority that should take precedence > over the completion of other items, including the reshaper effort and > the integration of placement calls into Nova (nested providers, > sharing providers, etc). > > The remaining items are Nova-centric. We need Nova-focused > contributors to make placement more useful to Nova, and I fail to see > how extracting the placement service will meet that goal. In fact, one > might argue, as Melanie implies, that extracting placement outside of > the Compute project would increase the velocity of the placement > project *at the expense of* getting things done in the Nova project. Yep, this. I know it's a Nova-centric view, but unlike any other project, we have taken the risk of putting placement in our critical path. That has yielded several fire drills right before releases, as well as complicated backports to fix things that we have broken in the process, etc. We've got a list of things that are half-finished or promised-but-not-started, and those are my priority over most everything else. > We've shown we can get many things done in placement. We've shown we > can evolve the API fairly quickly. The velocity of the placement > project isn't the problem. The problem is the lag between features > being written into placement (sometimes too hastily IMHO) and actually > *using* those features in Nova. Right, and the reshaper effort is a really good example of what I'm concerned about. Nova has been getting ready for NRPs for several cycles now, and just before crunch time in Rocky, we realize there's a huge missing piece of the puzzle on the placement side. That's not the first time that has happened and I'm sure it won't be the last. > As for the argument about other projects being able (or being more > willing to) use placement, I think that's not actually true. 
The > projects that might want to ditch their own custom resource tracking > and management code (Cyborg, Neutron, Cinder, Ironic) have either > already done so or would require minimal changes to do that. There are > no projects other than Ironic that I'm aware of that are interested in > using the allocation candidates functionality (and the allocation > claim process that entails) for the rough scheduling functionality > that provides. I'm not sure placement being extracted would change > that. My point about this is that "reporting" and "consuming" placement are different things. Neutron reports, we'd like Cinder to report. Ironic reports, but indirectly. Cyborg would report. Those reporting activities are to help projects that "consume" placement make better decisions, but I think it's entirely likely that Nova will be the only one that ever does that. --Dan From doug at doughellmann.com Mon Aug 20 14:29:39 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 20 Aug 2018 10:29:39 -0400 Subject: [openstack-dev] [all] [tc] Who is responsible for the mission and scope of OpenStack? In-Reply-To: References: Message-ID: <1534774773-sup-5944@lrrr.local> Excerpts from Thierry Carrez's message of 2018-08-20 11:33:30 +0200: > Chris Dent wrote: > > In the discussion on a draft technical vision for OpenStack at > > > >     https://review.openstack.org/#/c/592205/ > > > > there is a question about whether the TC is "responsible for the > > mission and scope of OpenStack". > > > > As the discussion there indicates, there is plenty of nuance, but > > underlying it is a pretty fundamental question that seems important > > to answer as we're going into yet another TC election period. > > > > I've always assumed it was the case: the TC is an elected > > representative body of the so-called active technical contributors > > to OpenStack. 
So while the TC is not responsible for creating the > > mission from whole cloth, they are responsible for representing the > > goals of the people who elected them and thus for refining, > > documenting and caring for the mission and scope while working with > > all the other people invested in the community. > > > > Does anyone disagree? If so, who is responsible if not the TC? > > A few indications from the bylaws: > > The TC manages "technical matters relating to the OpenStack Project". > The "OpenStack project" is defined as "the released projects to enable > cloud computing and the associated library projects, gating projects, > and supporting projects". > > The TC cooperates with the board to apply the OpenStack trademark: it > approves components for inclusion in trademark programs, and the board > decides whether to apply the trademark to those or not. In that sense, the TC is in > charge of the "scope" of "OpenStack", since it decides which components > may be added to things that bear the "OpenStack" name. But ultimately > the board is in charge of the trademark, and may decide not to apply it even to things > we deem in scope. > > As far as the mission goes, the bylaws just indicate that the OpenStack > project is an "open source cloud computing project". Beyond that, we > have an official OpenStack mission statement. In the past, we ruled that > this statement was co-owned by the Board and the TC, and that changes to > it would need to pass *both* bodies. > > So I think the answer is... The Board and the TC co-own the mission and > the scope of OpenStack. The TC is in charge of the technical side, > especially when it comes to implementation. The Board is in charge of > the trademark side (it ultimately owns what can be called "OpenStack").

Thierry's description matches my understanding. The most recent update to the mission statement (to add "users") was a joint effort between the TC and Board.
As far as scope for new projects goes, we have said they "should help further the OpenStack mission, by providing a cloud infrastructure service, or directly building on an existing OpenStack infrastructure service." [1] I don't think there's any problem with writing down more detail about what we mean by "should help further the OpenStack mission". If anything, it gives us an opportunity to ensure that our interpretation is aligned with the Board's. Doug [1] https://governance.openstack.org/tc/reference/new-projects-requirements.html

From thierry at openstack.org Mon Aug 20 14:31:30 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 20 Aug 2018 16:31:30 +0200 Subject: [openstack-dev] [kolla][project navigator] kolla missing in project navigator In-Reply-To: References: <5B7AC8AC.7000106@openstack.org> Message-ID: <2ec52efe-78f8-23eb-6e83-be3955b75015@openstack.org>

Eduardo, "Kolla" was originally left out of the map (and therefore the new OpenStack components page) because the map only shows deliverables that are directly usable by deployers. That is why "Kolla-Ansible" is listed there and not "Kolla". Are you making the case that Kolla should be used directly by deployers (rather than running it through Ansible with Kolla-Ansible), and therefore should appear as a deployment option on the map as well? -- Thierry Carrez (ttx)

From dabarren at gmail.com Mon Aug 20 14:41:42 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Mon, 20 Aug 2018 16:41:42 +0200 Subject: [openstack-dev] [kolla][project navigator] kolla missing in project navigator In-Reply-To: <2ec52efe-78f8-23eb-6e83-be3955b75015@openstack.org> References: <5B7AC8AC.7000106@openstack.org> <2ec52efe-78f8-23eb-6e83-be3955b75015@openstack.org> Message-ID:

Hi, indeed kolla-ansible is the deployment tool for the container images, and kolla is the image artifact builder used by other deployment projects as a consumable.
The project navigator had neither kolla nor kolla-ansible; my bad for not specifying kolla-ansible as the deployment deliverable. Regards

On Mon, Aug 20, 2018, 4:31 PM Thierry Carrez wrote: > [...] > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jean-philippe at evrard.me Mon Aug 20 14:44:51 2018 From: jean-philippe at evrard.me (jean-philippe at evrard.me) Date: Mon, 20 Aug 2018 16:44:51 +0200 Subject: [openstack-dev] [kolla][project navigator] kolla missing in project navigator In-Reply-To: <5B7AC8AC.7000106@openstack.org> Message-ID: <31e-5b7ad400-5-5388870@147913795>

On Monday, August 20, 2018 15:57 CEST, Jimmy McArthur wrote: > Eduardo, > > Thanks for the heads up. We're in the process of updating the Project > Navigator to match the OpenStack Map [1]. It looks like Kolla got lost > in the shuffle. I've added it back to the Project Navigator under the > Deployment Tools section [2]. > > Please let me know if I can assist further.
> > Thanks, > Jimmy > > > [1] https://www.openstack.org/assets/software/projectmap/openstack-map.pdf > [2] https://www.openstack.org/software/project-navigator/deployment-tools

That second link is very confusing in the openstack-ansible case. Yes, we ship recipes (roles, playbooks) for deployment using Ansible (any other Ansible project could use them; maybe we need to adapt there if that's not the case...), but we also deal with the installation and upgrade of the OpenStack cloud. Therefore, shouldn't OSA also be listed in the "Deployment / Lifecycle Tools"? Best regards, Jean-Philippe Evrard (evrardjp)

From thierry at openstack.org Mon Aug 20 14:55:36 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 20 Aug 2018 16:55:36 +0200 Subject: [openstack-dev] [kolla][project navigator] kolla missing in project navigator In-Reply-To: <31e-5b7ad400-5-5388870@147913795> References: <31e-5b7ad400-5-5388870@147913795> Message-ID:

jean-philippe at evrard.me wrote: > [...] Therefore, shouldn't OSA also be listed in the "Deployment / Lifecycle Tools"?

Yes, the barrier between the two is rather porous. We used to only have TripleO on the other side, but more and more tools (like Charms, Helm) have maintained that their tooling is more than just a pile of recipes. Once we have all that display driven through YAML files stored in a Git repo under Gerrit, we'll be able to fine-tune that content, create more subcategories, etc.
A bit of patience :) -- Thierry Carrez (ttx)

From thierry at openstack.org Mon Aug 20 14:59:19 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 20 Aug 2018 16:59:19 +0200 Subject: [openstack-dev] [kolla][project navigator] kolla missing in project navigator In-Reply-To: References: <5B7AC8AC.7000106@openstack.org> <2ec52efe-78f8-23eb-6e83-be3955b75015@openstack.org> Message-ID: <314432e1-430e-ebbd-8a62-eacb18154eee@openstack.org>

Eduardo Gonzalez wrote: > [...]

OK, so how about we make sure *Kolla-Ansible* is present both in the map and the components list? -- Thierry Carrez (ttx)

From dabarren at gmail.com Mon Aug 20 15:01:30 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Mon, 20 Aug 2018 17:01:30 +0200 Subject: [openstack-dev] [kolla][project navigator] kolla missing in project navigator In-Reply-To: <314432e1-430e-ebbd-8a62-eacb18154eee@openstack.org> References: <5B7AC8AC.7000106@openstack.org> <2ec52efe-78f8-23eb-6e83-be3955b75015@openstack.org> <314432e1-430e-ebbd-8a62-eacb18154eee@openstack.org> Message-ID:

Yes, that was my intention. Didn't mean to say kolla as in the images; I meant kolla-ansible.

On Mon, Aug 20, 2018, 4:59 PM Thierry Carrez wrote: > [...] > OK, so how about we make sure *Kolla-Ansible* is present both in the map > and the components list?
> > -- > Thierry Carrez (ttx)

From dangtrinhnt at gmail.com Mon Aug 20 15:10:35 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 21 Aug 2018 00:10:35 +0900 Subject: [openstack-dev] [Searchlight] Reaching out to the Searchlight core members for Stein - Call for team meeting In-Reply-To: References: Message-ID:

Hi Zhenyu Zheng, Thanks for your response. What is your IRC handle? I would like to have a team meeting this week to decide what we're going to do with Searchlight in Stein. Searchlight IRC channel: #openstack-searchlight My IRC: dangtrinhnt Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz *

On Sat, Aug 18, 2018 at 1:16 AM Trinh Nguyen wrote: > Dear Searchlight team, > > As you may know, the Searchlight project has missed several milestones, > especially in the Rocky cycle. The TC already has a plan to remove > Searchlight from governance [1], but I volunteered to take it over [2]. > Due to the unresponsiveness on IRC and Launchpad, I am sending this email to reach > out to all the Searchlight core members to discuss our plan for Stein as > well as to re-organize the team. Hopefully, this effort will work well and may > bring Searchlight back to life. > > If anyone on the core team sees this email, please reply. > > My IRC is dangtrinhnt. > > [1] https://review.openstack.org/#/c/588644/ > [2] https://review.openstack.org/#/c/590601/ > > Best regards, > > *Trinh Nguyen *| Founder & Chief Architect > > > > *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz *
URL: From doug at doughellmann.com Mon Aug 20 15:27:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 20 Aug 2018 11:27:09 -0400 Subject: [openstack-dev] [goal][python3] week 2 update Message-ID: <1534778701-sup-1930@lrrr.local> This is week 2 of the roll-out of the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). == What we learned last week == As we worked on approving the changes to add the zuul settings to a few Oslo repositories, we had trouble with some of the older branches because they were running newer versions of the jobs, as configured in project-config. To work around this problem, we removed those job templates in project-config by submitting separate patches (rather than waiting for the full clean-up patch). We used the Oslo team repos when we were testing some of the new jobs, so it is possible this won't come up for any other teams, but I thought I would mention the problem and solution, just in case. We had at least one question about the order in which the patches need to land across the branches. We need the ones with the subject "import zuul job settings from project-config" to land before the others, but it doesn't make any difference which branches go first. Those patches should be basically no-ops, neither adding nor changing any of the existing testing. The other follow-up patches change or add tests, and are submitted separately specifically so the changes they contain can be managed and issues fixed to allow them to land. Nguyen found a couple of cases where older branches did not work with the existing documentation job. The fix may require backporting changes to remove tox_install.sh, or other changes that have been made in newer stable branches but not backported all the way. Because the new documentation job runs through tox we may be able to use that in the older branches, as an alternative. 
We discovered last night that the version of git on CentOS does not support the -C option, so we will need to change our scripts to be compatible with the older platform. == Completed work == Congratulations to the Documentation team for approving all of the patches to import their zuul job configuration! == Ongoing work == The Oslo team is working on migrating their zuul settings. The Ironic, Vitrage, Cyborg, Solum, Tacker, Masakari, Congress, Designate, Mistral, Watcher, Glance, and Requirements teams have started migrating their zuul settings. The Ironic team has started working on adding functional tests that run under Python 3. Thanks to dtantsur for adding a variant of the python 3.6 jobs that installs neutron from source, needed by several networking-related projects that integrate tightly with neutron. https://review.openstack.org/#/c/593643/ == Next Steps == If your team is ready to have your zuul settings migrated, please let us know by following up to this email. We will start with the volunteers, and then work our way through the other teams. After the Rocky cycle-trailing projects are released, I will propose the change to project-config to change all of the packaging jobs to use the new publish-to-pypi-python3 template. We should be able to have that change in place before the first milestone for Stein so that we have an opportunity to test it. == How can you help? == 1. Choose a patch that has failing tests and help fix it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) 2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects. 3. Work on adding functional test jobs that run under Python 3. == How can you ask for help? == If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. 
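(As a concrete sketch of the `git -C` portability issue noted under "What we learned last week": `git -C <dir>` requires git >= 1.8.5, and CentOS ships an older git. When the tooling drives git from Python, the usual workaround is to pass an explicit working directory to the subprocess instead. This is illustrative only, not the actual script change; `run_in_repo` and the final command are invented stand-ins.)

```python
# Sketch of a portable alternative to `git -C <dir> <cmd>`: set the
# working directory on the subprocess call instead of relying on -C.
# `run_in_repo` is an invented helper name for illustration.
import os
import subprocess
import sys
import tempfile


def run_in_repo(repo_dir, *cmd):
    """Run cmd as if we had first cd'd into repo_dir."""
    return subprocess.run(cmd, cwd=repo_dir,
                          capture_output=True, text=True, check=True)


with tempfile.TemporaryDirectory() as repo:
    # Stand-in for something like run_in_repo(repo, "git", "log", "--oneline");
    # here we just prove the child process runs inside repo.
    result = run_in_repo(repo, sys.executable, "-c",
                         "import os; print(os.path.basename(os.getcwd()))")
    print(result.stdout.strip() == os.path.basename(repo))  # True
```

In shell scripts the same effect comes from running the git command in a subshell after `cd`-ing into the repository, which works on any git version.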
Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list.

== Reference Material ==

Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open Storyboard: https://storyboard.openstack.org/#!/board/104 Zuul migration notes: https://etherpad.openstack.org/p/python3-first Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3

From doug at doughellmann.com Mon Aug 20 15:29:38 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 20 Aug 2018 11:29:38 -0400 Subject: [openstack-dev] [oslo] proposing Moisés Guimarães for oslo.config core In-Reply-To: <1533733971-sup-7865@lrrr.local> References: <1533129742-sup-2007@lrrr.local> <1533733971-sup-7865@lrrr.local> Message-ID: <1534778937-sup-6479@lrrr.local>

Excerpts from Doug Hellmann's message of 2018-08-08 09:18:44 -0400: > Excerpts from Doug Hellmann's message of 2018-08-01 09:27:09 -0400: > > Moisés Guimarães (moguimar) did quite a bit of work on oslo.config > > during the Rocky cycle to add driver support. Based on that work, > > and a discussion we have had since then about general cleanup needed > > in oslo.config, I think he would make a good addition to the > > oslo.config review team. > > > > Please indicate your approval or concerns with +1/-1. > > > > Doug > > Normally I would have added moguimar to the oslo-config-core team > today, after a week's wait. Funny story, though. There is no > oslo-config-core team. > > oslo.config is one of a few of our libraries that we never set up with a > separate review team. It is managed by oslo-core.
We could set up a new > review team for that library, but after giving it some thought I > realized that *most* of the libraries are fairly stable, our team is > pretty small, and Moisés is a good guy so maybe we don't need to worry > about that. > > I spoke with Moisés, and he agreed to be part of the larger core team. > He pointed out that the next phase of the driver work is going to happen > in castellan, so it would be useful to have another reviewer there. And > I'm sure we can trust him to be careful with reviews in other repos > until he learns his way around. > > So, I would like to amend my original proposal and suggest that we add > Moisés to the oslo-core team. > > Please indicate support with +1 or present any concerns you have. I > apologize for the confusion on my part. > > Doug

There being no objections, I have added Moisés Guimarães (moguimar) to the oslo-core team today. Welcome to the team! Doug

From mbooth at redhat.com Mon Aug 20 15:29:52 2018 From: mbooth at redhat.com (Matthew Booth) Date: Mon, 20 Aug 2018 16:29:52 +0100 Subject: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) Message-ID:

For those who aren't familiar with it, nova's volume-update (also called swap volume by nova devs) is the nova part of the implementation of cinder's live migration (also called retype). Volume-update is essentially an internal cinder<->nova api, but as that's not a thing it's also unfortunately exposed to users. Some users have found it and are using it, but because it's essentially an internal cinder<->nova api it breaks pretty easily if you don't treat it like a special snowflake. It looks like we've finally found a way it's broken for non-cinder callers that we can't fix, even with a dirty hack. volume-update essentially does a live copy of the data from the old volume to the new volume, then seamlessly swaps the attachment from the old volume to the new one.
The guest OS on the instance will not notice anything at all as the hypervisor swaps the storage backing an attached volume underneath it. When called by cinder, as intended, cinder does some post-operation cleanup such that the old volume is deleted and the new volume inherits the old volume's volume_id; that is, the new volume effectively becomes the old one. When called any other way, however, this cleanup doesn't happen, which breaks a bunch of assumptions. One of these is that a disk's serial number is the same as the attached volume_id. Disk serial number, in KVM at least, is immutable, so it can't be updated during volume-update. This is fine if we were called via cinder, because the cinder cleanup means the volume_id stays the same. If called any other way, however, they no longer match, at least until a hard reboot, when the serial will be reset to the new volume_id. It turns out this breaks live migration, but probably other things too. We can't think of a workaround.

I wondered why users would want to do this anyway. It turns out that sometimes cinder won't let you migrate a volume, but nova volume-update doesn't do those checks (as they're specific to cinder internals, none of nova's business, and duplicating them would be fragile, so we're not adding them!). Specifically we know that cinder won't let you migrate a volume with snapshots. There may be other reasons. If cinder won't let you migrate your volume, you can still move your data by using nova's volume-update, even though you'll end up with a new volume on the destination, and a slightly broken instance. Apparently the former is a trade-off worth making, but the latter has been reported as a bug.

I'd like to make it very clear that nova's volume-update isn't expected to work correctly except when called by cinder. Specifically, there was a proposal that we disable volume-update from non-cinder callers in some way, possibly by asserting volume state that can only be set by cinder.
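(To make the serial/volume_id mismatch concrete, here is a toy model of the bookkeeping described above. This is not Nova code; all names are invented for illustration.)

```python
# Toy model of the swap-volume bookkeeping -- not Nova code; names invented.
from dataclasses import dataclass


@dataclass
class AttachedDisk:
    serial: str     # immutable once attached (KVM)
    volume_id: str  # what the block-device mapping records


def swap_volume(disk, new_volume_id, cinder_initiated):
    # The hypervisor swap changes the backing volume, but cannot touch
    # the disk's serial number.
    disk.volume_id = new_volume_id
    if cinder_initiated:
        # cinder's post-operation cleanup makes the new volume inherit
        # the old volume_id, so the immutable serial still matches.
        disk.volume_id = disk.serial


disk = AttachedDisk(serial="vol-a", volume_id="vol-a")
swap_volume(disk, "vol-b", cinder_initiated=False)
# Direct (non-cinder) call: serial and volume_id now disagree, which is
# exactly the assumption that live migration trips over.
print(disk.serial == disk.volume_id)  # False
```

The cinder-initiated path keeps the two identifiers in sync, which is why the operation is only safe as cinder's internal half of a volume migration.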
However, I'm also very aware that users are calling volume-update because it fills a need, and we don't want to trap data that wasn't previously trapped. Firstly, is anybody aware of any other reasons to use nova's volume-update directly? Secondly, is there any reason why we shouldn't just document that you have to delete snapshots before doing a volume migration? Hopefully some cinder folks or operators can chime in to let me know how to back them up or somehow make them independent before doing this, at which point the volume itself should be migratable. If we can establish that there's an acceptable alternative to calling volume-update directly for all use-cases we're aware of, I'm going to propose heading off this class of bug by disabling it for non-cinder callers. Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK)

From nguyentrihai93 at gmail.com Mon Aug 20 16:12:32 2018 From: nguyentrihai93 at gmail.com (Nguyễn Trí Hải) Date: Tue, 21 Aug 2018 01:12:32 +0900 Subject: [openstack-dev] [goal][python3] week 2 update In-Reply-To: <1534778701-sup-1930@lrrr.local> References: <1534778701-sup-1930@lrrr.local> Message-ID:

Hi, the Vitrage team is going to finish the zuul job migration soon. As far as I can see, only a few patches on the old branches still need to be merged. For the other projects, some patches have problems with different errors. Please help to fix them. Thanks for your cooperation. Nguyen Hai

On Tue, Aug 21, 2018, 12:27 AM Doug Hellmann wrote: > [...]
-- *Nguyen Tri Hai */ Ph.D. Student ANDA Lab., Soongsil Univ., Seoul, South Korea

From moguimar at redhat.com Mon Aug 20 16:34:59 2018 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Mon, 20 Aug 2018 18:34:59 +0200 Subject: [openstack-dev] [oslo] proposing Moisés Guimarães for oslo.config core In-Reply-To: <1534778937-sup-6479@lrrr.local> References: <1533129742-sup-2007@lrrr.local> <1533733971-sup-7865@lrrr.local> <1534778937-sup-6479@lrrr.local> Message-ID:

o/

On Mon, Aug 20, 2018 at 17:30, Doug Hellmann wrote: > [...]
-- MOISÉS GUIMARÃES SOFTWARE ENGINEER Red Hat

From opensrloo at gmail.com Mon Aug 20 16:44:30 2018 From: opensrloo at gmail.com (Ruby Loo) Date: Mon, 20 Aug 2018 12:44:30 -0400 Subject: [openstack-dev] [ironic] ironic-staging-drivers: what to do?
In-Reply-To: References: Message-ID:

Hi, On Mon, Aug 13, 2018 at 2:40 PM, Julia Kreger wrote: > Greetings fellow ironicans! > > As many of you might know, an openstack/ironic-staging-drivers[1] > repository exists. What most might not know is that it was > intentionally created outside of ironic's governance[2]. > ... > > This topic has come up in passing at PTGs and most recently on IRC[9], > and I think we ought to discuss it during our next weekly meeting[10]. > I've gone ahead and added an item to the agenda, but we can also > discuss it via email. >

We had a short discussion on this in our meeting today [1]. To summarize, we did not discuss whether to put it under ironic governance. We did agree that it would be OK to have the ironic cores be cores on ironic-staging-drivers. There would be no guarantee (of course) that the ironic cores would actually review any of these. Julia will get in touch with the cores on ironic-staging-drivers to see if they would like to add ironic-cores as cores. --ruby

[1] http://eavesdrop.openstack.org/meetings/ironic/2018/ironic.2018-08-20-15.00.log.html#l-118

From pkovar at redhat.com Mon Aug 20 17:02:03 2018 From: pkovar at redhat.com (Petr Kovar) Date: Mon, 20 Aug 2018 19:02:03 +0200 Subject: [openstack-dev] [docs][all] Testing installation guides for Rocky Message-ID: <20180820190203.bb4ce900955d889d1b6c7f4e@redhat.com>

Hi all, With the Rocky release quickly approaching, I wanted to draw your attention to installation guides testing.
With each project team now maintaining their own installation guide, and with the old coordinated effort to test installation docs now retired, the Docs team recently updated the guidelines for installation guide testing; you can find the instructions here: https://docs.openstack.org/doc-contrib-guide/release/taskdetail.html#installation-guides-testing As a reminder, all installation guides should be based on the common installation guide found at https://docs.openstack.org/install-guide/ and should be tested together with installation instructions from related project teams' docs: https://docs.openstack.org/rocky/install/ We set up an Installation Guides Review Inbox that tracks all open patches touching files under doc/source/install/ in project team repositories: https://gerrit-dash-creator.readthedocs.io/en/latest/dashboards/dashboard_doc-install-guides.html This will hopefully make it easier for project teams and individual docs contributors to monitor and follow up on changes happening across OpenStack projects. Thanks, pk From aschultz at redhat.com Mon Aug 20 17:11:45 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 20 Aug 2018 11:11:45 -0600 Subject: [openstack-dev] [tripleo] fedora28 python3 test environment In-Reply-To: References: Message-ID: On Fri, Aug 17, 2018 at 5:18 PM, Alex Schultz wrote: > Ahoy folks, > > In order to get to a spot where we can start evaluating the current status > of TripleO under python3 I've thrown together a set of ansible > playbooks[0] to launch a fedora28 node and build the required > python-tripleoclient (and dependencies). These playbooks will spawn a > VM on an OpenStack cloud, run through the steps from the RDO > etherpad[1] for using the fedora stabilized repo and build all the > currently outstanding python3 package builds[2] for > python-tripleoclient & company. Once the playbook has completed it > should be at a spot to 'dnf install python3-tripleoclient'. 
> > I believe from here we can focus on getting the undercloud[3] and > standalone[4] processes working correctly under python3. I think > initially we should use the existing CentOS7 containers we build under > the existing processes to see if we can't get the services deployed as > we work on building out all the required python3 packaging. > To follow up, I've started an etherpad[0] to track the various issues related to the python3 version of tripleoclient. [0] https://etherpad.openstack.org/p/tripleo-python3-tripleoclient-issues > Thanks, > -Alex > > [0] https://github.com/mwhahaha/tripleo-f28-testbed > [1] https://review.rdoproject.org/etherpad/p/use-fedora-stabilized > [2] https://review.rdoproject.org/r/#/q/status:open+owner:%22Alex+Schultz+%253Caschultz%2540next-development.com%253E%22+topic:python3 > [3] https://docs.openstack.org/tripleo-docs/latest/install/installation/installation.html > [4] https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/standalone.html From kennelson11 at gmail.com Mon Aug 20 17:23:07 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 20 Aug 2018 10:23:07 -0700 Subject: [openstack-dev] [os-upstream-institute] Restarting meetings on August 20 In-Reply-To: <8D827CFA-946D-4C11-BBC1-4B8408FFCD0B@gmail.com> References: <8D827CFA-946D-4C11-BBC1-4B8408FFCD0B@gmail.com> Message-ID: Hello Everyone, To avoid meeting conflicts with the Women of OpenStack, we will actually be doing meetings weekly on Mondays at 20:00 UTC on odd weeks. Long story short, our kickoff meeting after this luxurious summer break will be a week from today on August 27th. Thanks everyone! 
See you next week :) -Kendall Nelson (diablo_rojo) On Sun, 12 Aug 2018, 12:44 am Ildiko Vancsa, wrote: > Hi, > > As the Summer vacation season is getting to its end and we also need to > start to prepare for the training just before the Berlin Summit we plan to > resurrect the OUI meetings on every second Monday at 2000 UTC starting on > August 20. > > We will post the agenda on the regular etherpad: > https://etherpad.openstack.org/p/openstack-upstream-institute-meetings > > > Further useful links: > > You can see the current state of the website: > https://docs.openstack.org/upstream-training/index.html > The current training content can be found here: > https://docs.openstack.org/upstream-training/upstream-training-content.html > To check the latest stage of the Contributor Guide: > https://docs.openstack.org/contributors/index.html > > Open training-guide reviews: > https://review.openstack.org/#/q/project:openstack/training-guides+status:open > Open Contributor Guide reviews: > https://review.openstack.org/#/q/project:openstack/contributor-guide+status:open > > Contributor Guide StoryBoard open Stories/Tasks: > https://storyboard.openstack.org/#!/project/913 > > > Please let me know if you have any questions. > > Thanks and Best Regards, > Ildikó > (IRC: ildikov) > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kendall at openstack.org Mon Aug 20 17:32:55 2018 From: kendall at openstack.org (Kendall Waters) Date: Mon, 20 Aug 2018 12:32:55 -0500 Subject: [openstack-dev] Early Bird Registration Deadline Extended to 8/28 - Berlin Summit Message-ID: <9F966B33-E6CE-4257-8ADB-50BF582230D0@openstack.org> Hi everyone, The OpenStack Summit Berlin schedule is now live and in order to give people some extra time to book tickets, we have decided to extend early bird registration. The NEW early bird registration deadline is August 28 at 11:59pm PT (August 29, 6:59 UTC). Register now before the price increases! Don’t miss out on sessions and workshops from organizations such as Oerlikon ManMade Fibers, Workday, CERN, Volkswagen, BMW and more. If you have any questions, please email summit at openstack.org . Cheers, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Aug 20 17:36:11 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 20 Aug 2018 10:36:11 -0700 Subject: [openstack-dev] New Contributor In-Reply-To: References: <49b15368-a67b-dfcb-0501-6b527a42c71c@openstack.org> Message-ID: Hello Ivoline, While I'm a little late to the party, I still wanted to say welcome and offer my help :) If you have any questions about the links you've been sent, I'm happy to answer them! I can also help you find/get started with a team and introduce you to community members whenever you're ready. -Kendall Nelson (diablo_rojo) On Mon, 20 Aug 2018, 4:08 am Ivoline Ngong, wrote: > Thanks so much for help Josh and Thierry. I'll check out the links and > hopefully find a way forward from there. Will get back here in case I have > any questions. > > Cheers, > Ivoline > > On Mon, Aug 20, 2018, 12:01 Thierry Carrez wrote: > >> Ivoline Ngong wrote: >> > I am Ivoline Ngong. I am a Cameroonian who lives in Turkey. 
I will love >> > to contribute to Open source through OpenStack. I code in Java and >> > Python and I think OpenStack is a good fit for me. >> > I'll appreciate it if you can point me to the right direction on how I >> > can get started. >> >> Hi Ivoline, >> >> Welcome to the OpenStack community ! >> >> The OpenStack Technical Committee maintains a list of areas in most need >> of help: >> >> https://governance.openstack.org/tc/reference/help-most-needed.html >> >> Depending on your interest, you could pick one of those projects and >> reach out to the mentioned contact points. >> >> For more general information on how to contribute, you can check out our >> contribution portal: >> >> https://www.openstack.org/community/ >> >> -- >> Thierry Carrez (ttx) >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tenobreg at redhat.com Mon Aug 20 17:42:40 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Mon, 20 Aug 2018 14:42:40 -0300 Subject: [openstack-dev] New Contributor In-Reply-To: References: <49b15368-a67b-dfcb-0501-6b527a42c71c@openstack.org> Message-ID: Hi Ivoline, Also a little late but wanted to say welcome aboard, hopefully you will find a very welcoming community here and of course a lot of work to do. I work with Sahara, the big data processing project of OpenStack, we need help for sure. 
If this area interests you in any way, feel free to join us at #openstack-sahara on IRC or email me and we can send some work at your direction. On Mon, Aug 20, 2018 at 2:37 PM Kendall Nelson wrote: > Hello Ivoline, > > While I'm a little late to the party, I still wanted to say welcome and > offer my help :) > > If you have any questions based about the links you've been sent, I'm > happy to answer them! I can also help you find/get started with a team and > introduce you to community members whenever you're ready. > > -Kendall Nelson (diablo_rojo) > > > On Mon, 20 Aug 2018, 4:08 am Ivoline Ngong, > wrote: > >> Thanks so much for help Josh and Thierry. I'll check out the links and >> hopefully find a way forward from there. Will get back here in case I have >> any questions. >> >> Cheers, >> Ivoline >> >> On Mon, Aug 20, 2018, 12:01 Thierry Carrez wrote: >> >>> Ivoline Ngong wrote: >>> > I am Ivoline Ngong. I am a Cameroonian who lives in Turkey. I will >>> love >>> > to contribute to Open source through OpenStack. I code in Java and >>> > Python and I think OpenStack is a good fit for me. >>> > I'll appreciate it if you can point me to the right direction on how I >>> > can get started. >>> >>> Hi Ivoline, >>> >>> Welcome to the OpenStack community ! >>> >>> The OpenStack Technical Committee maintains a list of areas in most need >>> of help: >>> >>> https://governance.openstack.org/tc/reference/help-most-needed.html >>> >>> Depending on your interest, you could pick one of those projects and >>> reach out to the mentioned contact points. 
>>> >>> For more general information on how to contribute, you can check out our >>> contribution portal: >>> >>> https://www.openstack.org/community/ >>> >>> -- >>> Thierry Carrez (ttx) >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Aug 20 17:44:07 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 20 Aug 2018 13:44:07 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: Message-ID: <2404d066-1048-3198-a537-311f98ceaf1e@redhat.com> On 17/08/18 11:51, Chris Dent wrote: > One of the questions that has come up on the etherpad is about how > placement should be positioned, as a project, after the extraction. 
> The options are: > > * A repo within the compute project > * Its own project, either: >   * working towards being official and governed >   * official and governed from the start So since this is under heavy discussion in #openstack-tc, and Ed asked for folks who are not invested in either side, allow me to offer this suggestion: It just doesn't matter. The really important thing here, and it sounds like one that everybody agrees on, is that placement gets split out into its own repo. That will enable things to move forward both technically (helping other projects to more easily consume it) and socially (allowing it to use a separate Gerrit ACL so it can add additional core reviewers with +2 rights only on that repo). So let's focus on getting that done. It seems unlikely to me that having the placement repo technically under the governance of the Nova project will present anywhere near the level of obstacle to other projects using as having it in the same repo as Nova currently does, if they are even aware of it at all. Conversely, I consider it equally unlikely that placement living outside of the Nova umbrella altogether would result in significant divergence between its interests and those of Nova. If you want my personal opinion then I'm a big believer in incremental change. So, despite recognising that it is born of long experience of which I have been blissfully mostly unaware, I have to disagree with Chris's position that if anybody lets you change something then you should try to change as much as possible in case they don't let you try again. (In fact I'd go so far as to suggest that those kinds of speculative changes are a contributing factor in making people reluctant to allow anything to happen at all.) So I'd suggest splitting the repo, trying things out for a while within Nova's governance, and then re-evaluating. 
If there are that point specific problems that separate governance would appear to address, then it's only a trivial governance patch and a PTL election away. It should also be much easier to get consensus at that point than it is at this distance where we're only speculating what things will be like after the extraction. I'd like to point out for the record that Mel already said this and said it better and is AFAICT pretty much never wrong :) cheers, Zane. From chris.friesen at windriver.com Mon Aug 20 18:02:20 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 20 Aug 2018 12:02:20 -0600 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <2404d066-1048-3198-a537-311f98ceaf1e@redhat.com> References: <2404d066-1048-3198-a537-311f98ceaf1e@redhat.com> Message-ID: <5B7B022C.7000202@windriver.com> On 08/20/2018 11:44 AM, Zane Bitter wrote: > If you want my personal opinion then I'm a big believer in incremental change. > So, despite recognising that it is born of long experience of which I have been > blissfully mostly unaware, I have to disagree with Chris's position that if > anybody lets you change something then you should try to change as much as > possible in case they don't let you try again. (In fact I'd go so far as to > suggest that those kinds of speculative changes are a contributing factor in > making people reluctant to allow anything to happen at all.) So I'd suggest > splitting the repo, trying things out for a while within Nova's governance, and > then re-evaluating. If there are that point specific problems that separate > governance would appear to address, then it's only a trivial governance patch > and a PTL election away. It should also be much easier to get consensus at that > point than it is at this distance where we're only speculating what things will > be like after the extraction. 
> > I'd like to point out for the record that Mel already said this and said it > better and is AFAICT pretty much never wrong :) In order to address the "velocity of change in placement" issues, how about making the main placement folks members of nova-core with the understanding that those powers would only be used in the new placement repo? Chris From tenobreg at redhat.com Mon Aug 20 18:07:29 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Mon, 20 Aug 2018 15:07:29 -0300 Subject: [openstack-dev] [goal][python3] week 2 update In-Reply-To: References: <1534778701-sup-1930@lrrr.local> Message-ID: Hi Doug, I believe Sahara is ready to have those patches worked on. Do we have to do anything specific to get the env ready? Thanks, On Mon, Aug 20, 2018 at 1:13 PM Nguyễn Trí Hải wrote: > Hi, > > Vitrage team is going to finish the zuul job soon. As I see, only few > patches in the old branches need to be merged. > > For the other projects, some patches have problems with different errors. > Please help to fix them. > > Thanks for your cooperation. > > Nguyen Hai > > > On Tue, Aug 21, 2018, 12:27 AM Doug Hellmann > wrote: > >> This is week 2 of the roll-out of the "Run under Python 3 by default" >> goal (https://governance.openstack.org/tc/goals/stein/python3-first.html >> ). >> >> == What we learned last week == >> >> As we worked on approving the changes to add the zuul settings to >> a few Oslo repositories, we had trouble with some of the older >> branches because they were running newer versions of the jobs, as >> configured in project-config. To work around this problem, we removed >> those job templates in project-config by submitting separate patches >> (rather than waiting for the full clean-up patch). We used the Oslo >> team repos when we were testing some of the new jobs, so it is >> possible this won't come up for any other teams, but I thought I >> would mention the problem and solution, just in case. 
>> >> We had at least one question about the order in which the patches >> need to land across the branches. We need the ones with the subject >> "import zuul job settings from project-config" to land before the >> others, but it doesn't make any difference which branches go first. >> Those patches should be basically no-ops, neither adding nor changing >> any of the existing testing. The other follow-up patches change or >> add tests, and are submitted separately specifically so the changes >> they contain can be managed and issues fixed to allow them to land. >> >> Nguyen found a couple of cases where older branches did not work >> with the existing documentation job. The fix may require backporting >> changes to remove tox_install.sh, or other changes that have been >> made in newer stable branches but not backported all the way. Because >> the new documentation job runs through tox we may be able to use >> that in the older branches, as an alternative. >> >> We discovered last night that the version of git on CentOS does not >> support the -C option, so we will need to change our scripts to be >> compatible with the older platform. >> >> == Completed work == >> >> Congratulations to the Documentation team for approving all of the >> patches to import their zuul job configuration! >> >> == Ongoing work == >> >> The Oslo team is working on migrating their zuul settings. >> >> The Ironic, Vitrage, Cyborg, Solum, Tacker, Masakari, Congress, >> Designate, Mistral, Watcher, Glance, and Requirements teams have >> started migrating their zuul settings. >> >> The Ironic team has started working on adding functional tests that >> run under Python 3. >> >> Thanks to dtantsur for adding a variant of the python 3.6 jobs that >> installs neutron from source, needed by several networking-related >> projects that integrate tightly with neutron. 
>> https://review.openstack.org/#/c/593643/ >> >> == Next Steps == >> >> If your team is ready to have your zuul settings migrated, please >> let us know by following up to this email. We will start with the >> volunteers, and then work our way through the other teams. >> >> After the Rocky cycle-trailing projects are released, I will propose >> the change to project-config to change all of the packaging jobs >> to use the new publish-to-pypi-python3 template. We should be able >> to have that change in place before the first milestone for Stein >> so that we have an opportunity to test it. >> >> == How can you help? == >> >> 1. Choose a patch that has failing tests and help fix it. >> >> https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) >> 2. Review the patches for the zuul changes. Keep in mind that some of >> those patches will be on the stable branches for projects. >> 3. Work on adding functional test jobs that run under Python 3. >> >> == How can you ask for help? == >> >> If you have any questions, please post them here to the openstack-dev >> list with the topic tag [python3] in the subject line. Posting >> questions to the mailing list will give the widest audience the >> chance to see the answers. >> >> We are using the #openstack-dev IRC channel for discussion as well, >> but I'm not sure how good our timezone coverage is so it's probably >> better to use the mailing list. 
>> >> == Reference Material == >> >> Goal description: >> https://governance.openstack.org/tc/goals/stein/python3-first.html >> Open patches needing reviews: >> https://review.openstack.org/#/q/topic:python3-first+is:open >> Storyboard: https://storyboard.openstack.org/#!/board/104 >> Zuul migration notes: https://etherpad.openstack.org/p/python3-first >> Zuul migration tracking: >> https://storyboard.openstack.org/#!/story/2002586 >> Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- > > *Nguyen Tri Hai */ Ph.D. Student > > ANDA Lab., Soongsil Univ., Seoul, South Korea > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Aug 20 18:25:06 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 20 Aug 2018 14:25:06 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: <5B7B022C.7000202@windriver.com> References: <2404d066-1048-3198-a537-311f98ceaf1e@redhat.com> <5B7B022C.7000202@windriver.com> Message-ID: On 20/08/18 14:02, Chris Friesen wrote: > In order to address the "velocity of change in placement" issues, how > about making the main placement folks members of nova-core with the > understanding that those powers would only be used in the new placement > repo? That kind of 'understanding' is only needed (because of limitations in Gerrit) when working in the same repo. Once it's in a separate repo you just create a new 'placement-core' group and make nova-core a member of it. From hongbin034 at gmail.com Mon Aug 20 18:32:56 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Mon, 20 Aug 2018 14:32:56 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: On Sat, Aug 18, 2018 at 8:25 AM Chris Dent wrote: > > 5. In OpenStack we have a tradition of the contributors having a > strong degree of self-determination. If that tradition is to be > upheld, then it would make sense that the people who designed and > wrote the code that is being extracted would get to choose what > happens with it. As much as Mel's and Dan's (only picking on them > here because they are the dissenting voices that have showed up so > far) input has been extremely important and helpful in the evolution > of placement, they are not those people. > > So my hope is that (in no particular order) Jay Pipes, Eric Fried, > Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, > Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to > placement whom I'm forgetting [1] would express their preference on > what they'd like to see happen. 
> > At the same time, if people from neutron, cinder, blazar, zun, > mogan, ironic, and cyborg could express their preferences, we can get > through this by acclaim and get on with getting things done. > > Thank you. > I express the Zun's point of view. Zun has a scheduler to schedule containers to nodes based on the demanded and available compute resources (i.e. cpu, memory). Right now, Zun's scheduler is independent of Nova so VMs and containers have to be separated into two set of resource pools. One of the most demanding features from our users (e.g. requested from Chinese UnionPay via OpenStack Financial WG) is to have VMs and containers share the same set of resource pool to maximize utilization. To satisfy this requirement, Zun needs to know the current resource allocation that are made by external services (i.e. Nova) so that we can take those information into account when scheduling the containers. Adopting placement is a straightforward and feasible approach to address that. As a summary, below are high-level requirements from Zun's perspective: * Have VMs and containers multiplex into a pool of compute nodes. * Make optimal scheduling decisions for containers based on information (i.e. VM allocations) query from placement. * Report container allocations to placement and hope external schedulers can make optimal decisions. We haven't figured out the technical details yet. However, to look forward, if Zun team decides to adopt placement, I would have the following concerns: * Is placement stable enough so that it won't break us often? * If there is a breaking change in placement and we contribute a fix, how fast the fix will be merged? * If there is a feature request from our side and we contribute patches to placement, will the patches be accepted? Regardless of whether placement is extracted or not, above are the concerns that I mostly care about. Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... 
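The placement interaction Hongbin describes above — an external scheduler such as Zun discovering viable hosts and then claiming resources for a container — maps onto two real placement API routes, GET /allocation_candidates and PUT /allocations/{consumer_uuid}. The minimal sketch below only builds the request URL and body; the service endpoint is a placeholder, keystone auth and microversion headers are omitted for brevity, and none of this is actual Zun code:

```python
# Sketch (not Zun code) of an external scheduler preparing placement
# API requests.  The endpoint paths are placement's real routes; the
# service URL and the omission of auth are simplifying assumptions.
from urllib.parse import urlencode

PLACEMENT_URL = "http://placement.example.com"  # hypothetical endpoint


def allocation_candidates_url(resources):
    """Build the URL for GET /allocation_candidates from a dict such as
    {"VCPU": 1, "MEMORY_MB": 512} (rendered as MEMORY_MB:512,VCPU:1)."""
    query = urlencode({
        "resources": ",".join(
            "%s:%d" % (rc, amount)
            for rc, amount in sorted(resources.items())),
    })
    return "%s/allocation_candidates?%s" % (PLACEMENT_URL, query)


def claim_request(allocations, project_id, user_id):
    """Build the body for PUT /allocations/{consumer_uuid}, using the
    dict-keyed-by-resource-provider format of microversion >= 1.12."""
    return {
        "allocations": allocations,
        "project_id": project_id,
        "user_id": user_id,
    }
```

A real client would send the claim with a keystone token and an `OpenStack-API-Version: placement` header, and retry on a 409 response if another scheduler claimed the same candidate first — the concurrency situation this thread is discussing.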
URL: From fungi at yuggoth.org Mon Aug 20 18:37:12 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 20 Aug 2018 18:37:12 +0000 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <2404d066-1048-3198-a537-311f98ceaf1e@redhat.com> <5B7B022C.7000202@windriver.com> Message-ID: <20180820183712.twz2urapp2ld7s2g@yuggoth.org> On 2018-08-20 14:25:06 -0400 (-0400), Zane Bitter wrote: > On 20/08/18 14:02, Chris Friesen wrote: > > In order to address the "velocity of change in placement" > > issues, how about making the main placement folks members of > > nova-core with the understanding that those powers would only be > > used in the new placement repo? > > That kind of 'understanding' is only needed (because of > limitations in Gerrit) when working in the same repo. Once it's in > a separate repo you just create a new 'placement-core' group and > make nova-core a member of it. More correctly, the effort you'd go through to correctly characterize subsets of a repository under control of different groups of people is within the same order of magnitude as just putting them in separate Git repositories (especially when you take in to consideration the knock-on effects of duplicating things like review dashboards for the various prolog rules defined for those different subsets of the repository). If you're going to attempt to delegate review on portions of a Git repository, in most cases you may as well go ahead and make it a separate repository anyway. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cdent+os at anticdent.org Mon Aug 20 19:03:14 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 20 Aug 2018 20:03:14 +0100 (BST) Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: <2404d066-1048-3198-a537-311f98ceaf1e@redhat.com> References: <2404d066-1048-3198-a537-311f98ceaf1e@redhat.com> Message-ID: On Mon, 20 Aug 2018, Zane Bitter wrote: > If you want my personal opinion then I'm a big believer in incremental > change. So, despite recognising that it is born of long experience of which I > have been blissfully mostly unaware, I have to disagree with Chris's position > that if anybody lets you change something then you should try to change as > much as possible in case they don't let you try again. Because you called me out specifically, I feel obliged to say, this is neither what I said nor what I meant. It wasn't "in case they don't let you try again". It was "we've been trying to do some of this for two years and if we do it incrementally, the end game is further away, because it seems to take us forever to do anything." Perhaps not a huge difference. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From openstack at fried.cc Mon Aug 20 19:15:31 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 20 Aug 2018 14:15:31 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: This is great information, thanks Hongbin. If I'm understanding correctly, it sounds like Zun ultimately wants to be a peer of nova in terms of placement consumption. Using the resource information reported by nova, neutron, etc., you wish to be able to discover viable targets for a container deployment (GET /allocation_candidates) and claim resources to schedule to them (PUT /allocations/{uuid}). And you want to do it while Nova is doing the same for VMs, in the same cloud. Do I have that right? > * Is placement stable enough so that it won't break us often? Yes. 
> * If there is a breaking change in placement and we contribute a fix, > how fast the fix will be merged? > * If there is a feature request from our side and we contribute patches > to placement, will the patches be accepted? I believe this to be one of the main issues in the decision about independent governance. If placement remains under nova, it is more likely that fixes and features impacting the nova team would receive higher priority than those impacting zun. -efried > I express the Zun's point of view. > > Zun has a scheduler to schedule containers to nodes based on the > demanded and available compute resources (i.e. cpu, memory). Right now, > Zun's scheduler is independent of Nova so VMs and containers have to be > separated into two set of resource pools. One of the most demanding > features from our users (e.g. requested from Chinese UnionPay via > OpenStack Financial WG) is to have VMs and containers share the same set > of resource pool to maximize utilization. To satisfy this requirement, > Zun needs to know the current resource allocation that are made by > external services (i.e. Nova) so that we can take those information into > account when scheduling the containers. Adopting placement is a > straightforward and feasible approach to address that. > > As a summary, below are high-level requirements from Zun's perspective: > * Have VMs and containers multiplex into a pool of compute nodes. > * Make optimal scheduling decisions for containers based on information > (i.e. VM allocations) query from placement. > * Report container allocations to placement and hope external schedulers > can make optimal decisions. > > We haven't figured out the technical details yet. However, to look > forward, if Zun team decides to adopt placement, I would have the > following concerns: > * Is placement stable enough so that it won't break us often? > * If there is a breaking change in placement and we contribute a fix, > how fast the fix will be merged? 
> * If there is a feature request from our side and we contribute patches > to placement, will the patches be accepted? > > Regardless of whether placement is extracted or not, above are the > concerns that I mostly care about. > > Best regards, > Hongbin > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From hongbin034 at gmail.com Mon Aug 20 19:26:39 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Mon, 20 Aug 2018 15:26:39 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: On Mon, Aug 20, 2018 at 3:15 PM Eric Fried wrote: > This is great information, thanks Hongbin. > > If I'm understanding correctly, it sounds like Zun ultimately wants to > be a peer of nova in terms of placement consumption. Using the resource > information reported by nova, neutron, etc., you wish to be able to > discover viable targets for a container deployment (GET > /allocation_candidates) and claim resources to schedule to them (PUT > /allocations/{uuid}). And you want to do it while Nova is doing the same > for VMs, in the same cloud. Do I have that right? > Yes, your interpretation is right. > > > * Is placement stable enough so that it won't break us often? > > Yes. > > > * If there is a breaking change in placement and we contribute a fix, > > how fast the fix will be merged? > > * If there is a feature request from our side and we contribute patches > > to placement, will the patches be accepted? > > I believe this to be one of the main issues in the decision about > independent governance. 
If placement remains under nova, it is more > likely that fixes and features impacting the nova team would receive > higher priority than those impacting zun. > > -efried > > > I express the Zun's point of view. > > > > Zun has a scheduler to schedule containers to nodes based on the > > demanded and available compute resources (i.e. cpu, memory). Right now, > > Zun's scheduler is independent of Nova so VMs and containers have to be > > separated into two set of resource pools. One of the most demanding > > features from our users (e.g. requested from Chinese UnionPay via > > OpenStack Financial WG) is to have VMs and containers share the same set > > of resource pool to maximize utilization. To satisfy this requirement, > > Zun needs to know the current resource allocation that are made by > > external services (i.e. Nova) so that we can take those information into > > account when scheduling the containers. Adopting placement is a > > straightforward and feasible approach to address that. > > > > As a summary, below are high-level requirements from Zun's perspective: > > * Have VMs and containers multiplex into a pool of compute nodes. > > * Make optimal scheduling decisions for containers based on information > > (i.e. VM allocations) query from placement. > > * Report container allocations to placement and hope external schedulers > > can make optimal decisions. > > > > We haven't figured out the technical details yet. However, to look > > forward, if Zun team decides to adopt placement, I would have the > > following concerns: > > * Is placement stable enough so that it won't break us often? > > * If there is a breaking change in placement and we contribute a fix, > > how fast the fix will be merged? > > * If there is a feature request from our side and we contribute patches > > to placement, will the patches be accepted? > > > > Regardless of whether placement is extracted or not, above are the > > concerns that I mostly care about. 
> > > > Best regards, > > Hongbin > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From work at seanmooney.info Mon Aug 20 20:18:59 2018 From: work at seanmooney.info (Sean Mooney) Date: Mon, 20 Aug 2018 21:18:59 +0100 Subject: [openstack-dev] [all] [nova] [neutron] live migration with multiple port bindings. Message-ID: Hi everyone, last week I spent some time testing the live migration capabilities now that Nova has started to use Neutron multiple port bindings. All testing, unless otherwise specified, was done with Rocky RC1 or later commits on CentOS 7.5 using devstack. test summary ~~~~~~~~~~ I have tested the following scenarios with different levels of success. linux bridge to linux bridge: Worked ovs iptables to ovs iptables: Worked ovs conntrack to ovs conntrack: Worked ovs iptables to ovs conntrack: Worked ovs conntrack to ovs iptables: Worked linux bridge to ovs: migration succeeded, network connectivity broken; see bug 1788009 ovs to linux bridge: failed, libvirt error due to lack of destination bridge name; see bug 1788012 ovs to ovs-dpdk: failed, qemu bug encountered on migrate; nova XML generation appears correct. ovs-dpdk to ovs: failed, another qemu bug encountered on migrate; nova XML generation appears correct. centos->ubuntu: failed, emulator not found. 
see bug: 1788028 Note that since iptables-to-conntrack migration now works, operators will be able to change this value once they have upgraded to Rocky, via a rolling update using live migration. host config ~~~~~~~~ Note that not all nodes were running the exact same commits, as I added additional nodes later in my testing. All nodes were at least at this level: nova sha: afe4512bf66c89a061b1a7ccd3e7ac8e3b1b284d neutron sha: 1dda2bca862b1268c0f5ae39b7508f1b1cab6f15 Nova was configured with [compute] live_migration_wait_for_vif_plug = True, and the nova commit above contains the revert of the slow migration change. test details ~~~~~~~~ In both ovs-dpdk tests, when the migration failed the VM continued to run on the source node; however, it had no network connectivity. On hard reboot of the VM, it went to the error state because the VIF binding was set to none: vif:binding-details:host_id was set to none, so the vif_type was also set to none. I have opened a nova bug to track the fact that the VM is left in an invalid state even though its status is active. see bug 1788014 When I was testing live migration between OVS with the iptables and connection-tracking firewalls, I also did minimal testing to ensure the firewall worked. I did this by booting 3 VMs: two (VM A and VM B) in the same security group and one (VM C) in a separate security group. VM A and VM B were initially on different OVS compute nodes, with VM A's node using the iptables security group driver and VM B's node using the conntrack security group driver. VM C was on the conntrack node. Before the migration, VM C was set up to ping VM B, which is blocked by security groups; VM A was also configured to ping VM B, which is allowed by security groups. VM B was then live migrated from the conntrack node to the iptables node and back while observing the ping output of VM A and VM C. During this process it was observed that VM A continued to ping VM B successfully, and at no point was VM C able to ping VM B. 
While this is by no means a complete test, it indicates that security groups appear to be configured before network connectivity is restored on live migration, as expected. I also noticed that the interval during which network connectivity was lost was longer when going from the iptables node to the conntrack node than the reverse. I did not investigate why, but I suspect this is related to some flow timeouts in the conntrack module. other testing ~~~~~~~~~ About two weeks ago I also tested the NUMA-aware vswitch spec. During that testing I confirmed that new instances were NUMA-affined correctly. I also confirmed that while live migration succeeded, the NUMA pinning was not updated. As this was expected, I have not opened a bug for it, since it will be addressed in Stein by the NUMA-aware live migration spec. future testing ~~~~~~~~~ OVS-DPDK to OVS-DPDK =================== If I have time I will try to test live migration between two ovs-dpdk hosts. This has worked since before nova supported vhost-user. I did not test this case yet, but it is possible the qemu bug I hit in my ovs to ovs-dpdk testing could also break ovs-dpdk to ovs-dpdk migration. ovs to ovn ======== If I have time I may also test ovs to ovn migration. This should just work, but I suspect that the same bug I hit with mixed ovs and linux bridge clouds may exist and the vxlan tunnel mesh may not be created. BUGS _____ nova ~~~~ When live migration fails due to an internal error, rollback is not handled correctly. :- https://bugs.launchpad.net/nova/+bug/1788014 libvirt: nova assumes the destination emulator path is the same as the source's and fails to migrate if this is not true. 
:- https://bugs.launchpad.net/nova/+bug/1788028 neutron ~~~~~ Neutron bridge name is not always set for ml2/ovs: - https://bugs.launchpad.net/neutron/+bug/1788009 bridge name not set in vif:binding-details by ml2/linux-bridge: - https://bugs.launchpad.net/neutron/+bug/1788012 Neutron does not form a mesh tunnel overlay between different ml2 drivers: - https://bugs.launchpad.net/neutron/+bug/1788023 regards sean From doug at doughellmann.com Mon Aug 20 20:34:22 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 20 Aug 2018 16:34:22 -0400 Subject: [openstack-dev] [goal][python3] week 2 update In-Reply-To: References: <1534778701-sup-1930@lrrr.local> Message-ID: <1534796891-sup-969@lrrr.local> Excerpts from Telles Nobrega's message of 2018-08-20 15:07:29 -0300: > Hi Doug, > > I believe Sahara is ready to have those patches worked on. > > Do we have to do anything specific to get the env ready? Just be ready to do the reviews. I am generating the patches now and will propose them in a little while when the script finishes. Doug > > Thanks, > > On Mon, Aug 20, 2018 at 1:13 PM Nguyễn Trí Hải > wrote: > > > Hi, > > > > Vitrage team is going to finish the zuul job soon. As I see, only few > > patches in the old branches need to be merged. > > > > For the other projects, some patches have problems with different errors. > > Please help to fix them. > > > > Thanks for your cooperation. > > > > Nguyen Hai > > > > > > On Tue, Aug 21, 2018, 12:27 AM Doug Hellmann > > wrote: > > > >> This is week 2 of the roll-out of the "Run under Python 3 by default" > >> goal (https://governance.openstack.org/tc/goals/stein/python3-first.html > >> ). > >> > >> == What we learned last week == > >> > >> As we worked on approving the changes to add the zuul settings to > >> a few Oslo repositories, we had trouble with some of the older > >> branches because they were running newer versions of the jobs, as > >> configured in project-config. 
To work around this problem, we removed > >> those job templates in project-config by submitting separate patches > >> (rather than waiting for the full clean-up patch). We used the Oslo > >> team repos when we were testing some of the new jobs, so it is > >> possible this won't come up for any other teams, but I thought I > >> would mention the problem and solution, just in case. > >> > >> We had at least one question about the order in which the patches > >> need to land across the branches. We need the ones with the subject > >> "import zuul job settings from project-config" to land before the > >> others, but it doesn't make any difference which branches go first. > >> Those patches should be basically no-ops, neither adding nor changing > >> any of the existing testing. The other follow-up patches change or > >> add tests, and are submitted separately specifically so the changes > >> they contain can be managed and issues fixed to allow them to land. > >> > >> Nguyen found a couple of cases where older branches did not work > >> with the existing documentation job. The fix may require backporting > >> changes to remove tox_install.sh, or other changes that have been > >> made in newer stable branches but not backported all the way. Because > >> the new documentation job runs through tox we may be able to use > >> that in the older branches, as an alternative. > >> > >> We discovered last night that the version of git on CentOS does not > >> support the -C option, so we will need to change our scripts to be > >> compatible with the older platform. > >> > >> == Completed work == > >> > >> Congratulations to the Documentation team for approving all of the > >> patches to import their zuul job configuration! > >> > >> == Ongoing work == > >> > >> The Oslo team is working on migrating their zuul settings. 
> >> > >> The Ironic, Vitrage, Cyborg, Solum, Tacker, Masakari, Congress, > >> Designate, Mistral, Watcher, Glance, and Requirements teams have > >> started migrating their zuul settings. > >> > >> The Ironic team has started working on adding functional tests that > >> run under Python 3. > >> > >> Thanks to dtantsur for adding a variant of the python 3.6 jobs that > >> installs neutron from source, needed by several networking-related > >> projects that integrate tightly with neutron. > >> https://review.openstack.org/#/c/593643/ > >> > >> == Next Steps == > >> > >> If your team is ready to have your zuul settings migrated, please > >> let us know by following up to this email. We will start with the > >> volunteers, and then work our way through the other teams. > >> > >> After the Rocky cycle-trailing projects are released, I will propose > >> the change to project-config to change all of the packaging jobs > >> to use the new publish-to-pypi-python3 template. We should be able > >> to have that change in place before the first milestone for Stein > >> so that we have an opportunity to test it. > >> > >> == How can you help? == > >> > >> 1. Choose a patch that has failing tests and help fix it. > >> > >> https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) > >> 2. Review the patches for the zuul changes. Keep in mind that some of > >> those patches will be on the stable branches for projects. > >> 3. Work on adding functional test jobs that run under Python 3. > >> > >> == How can you ask for help? == > >> > >> If you have any questions, please post them here to the openstack-dev > >> list with the topic tag [python3] in the subject line. Posting > >> questions to the mailing list will give the widest audience the > >> chance to see the answers. 
> >> > >> We are using the #openstack-dev IRC channel for discussion as well, > >> but I'm not sure how good our timezone coverage is so it's probably > >> better to use the mailing list. > >> > >> == Reference Material == > >> > >> Goal description: > >> https://governance.openstack.org/tc/goals/stein/python3-first.html > >> Open patches needing reviews: > >> https://review.openstack.org/#/q/topic:python3-first+is:open > >> Storyboard: https://storyboard.openstack.org/#!/board/104 > >> Zuul migration notes: https://etherpad.openstack.org/p/python3-first > >> Zuul migration tracking: > >> https://storyboard.openstack.org/#!/story/2002586 > >> Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 > >> > >> > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > -- > > > > *Nguyen Tri Hai */ Ph.D. Student > > > > ANDA Lab., Soongsil Univ., Seoul, South Korea > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- > > TELLES NOBREGA > > SOFTWARE ENGINEER > > Red Hat Brasil > > Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo > > tenobreg at redhat.com > > TRIED. TESTED. TRUSTED. > Red Hat is recognized among the best companies to work for in Brazil > by Great Place to Work. 
From doug at doughellmann.com Mon Aug 20 20:42:53 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 20 Aug 2018 16:42:53 -0400 Subject: [openstack-dev] [goal][python3] week 2 update In-Reply-To: <1534796891-sup-969@lrrr.local> References: <1534778701-sup-1930@lrrr.local> <1534796891-sup-969@lrrr.local> Message-ID: <1534797743-sup-4122@lrrr.local> Excerpts from Doug Hellmann's message of 2018-08-20 16:34:22 -0400: > Excerpts from Telles Nobrega's message of 2018-08-20 15:07:29 -0300: > > Hi Doug, > > > > I believe Sahara is ready to have those patches worked on. > > > > Do we have to do anything specific to get the env ready? > > Just be ready to do the reviews. I am generating the patches now and > will propose them in a little while when the script finishes. And here they are: +----------------------------------------------+---------------------------------+-------------------------------------+ | Subject | Repo | URL | +----------------------------------------------+---------------------------------+-------------------------------------+ | import zuul job settings from project-config | openstack/python-saharaclient | https://review.openstack.org/593904 | | switch documentation job to new PTI | openstack/python-saharaclient | https://review.openstack.org/593905 | | add python 3.6 unit test job | openstack/python-saharaclient | https://review.openstack.org/593906 | | import zuul job settings from project-config | openstack/python-saharaclient | https://review.openstack.org/593918 | | import zuul job settings from project-config | openstack/python-saharaclient | https://review.openstack.org/593923 | | import zuul job settings from project-config | openstack/python-saharaclient | https://review.openstack.org/593928 | | import zuul job settings from project-config | openstack/python-saharaclient | https://review.openstack.org/593933 | | import zuul job settings from project-config | openstack/sahara | https://review.openstack.org/593907 | | switch 
documentation job to new PTI | openstack/sahara | https://review.openstack.org/593908 | | add python 3.6 unit test job | openstack/sahara | https://review.openstack.org/593909 | | import zuul job settings from project-config | openstack/sahara | https://review.openstack.org/593919 | | import zuul job settings from project-config | openstack/sahara | https://review.openstack.org/593924 | | import zuul job settings from project-config | openstack/sahara | https://review.openstack.org/593929 | | import zuul job settings from project-config | openstack/sahara | https://review.openstack.org/593934 | | import zuul job settings from project-config | openstack/sahara-dashboard | https://review.openstack.org/593910 | | switch documentation job to new PTI | openstack/sahara-dashboard | https://review.openstack.org/593911 | | import zuul job settings from project-config | openstack/sahara-dashboard | https://review.openstack.org/593920 | | import zuul job settings from project-config | openstack/sahara-dashboard | https://review.openstack.org/593925 | | import zuul job settings from project-config | openstack/sahara-dashboard | https://review.openstack.org/593930 | | import zuul job settings from project-config | openstack/sahara-dashboard | https://review.openstack.org/593935 | | import zuul job settings from project-config | openstack/sahara-extra | https://review.openstack.org/593912 | | import zuul job settings from project-config | openstack/sahara-extra | https://review.openstack.org/593921 | | import zuul job settings from project-config | openstack/sahara-extra | https://review.openstack.org/593926 | | import zuul job settings from project-config | openstack/sahara-extra | https://review.openstack.org/593931 | | import zuul job settings from project-config | openstack/sahara-extra | https://review.openstack.org/593936 | | import zuul job settings from project-config | openstack/sahara-image-elements | https://review.openstack.org/593913 | | import zuul job settings 
from project-config | openstack/sahara-image-elements | https://review.openstack.org/593922 | | import zuul job settings from project-config | openstack/sahara-image-elements | https://review.openstack.org/593927 | | import zuul job settings from project-config | openstack/sahara-image-elements | https://review.openstack.org/593932 | | import zuul job settings from project-config | openstack/sahara-image-elements | https://review.openstack.org/593937 | | import zuul job settings from project-config | openstack/sahara-specs | https://review.openstack.org/593914 | | import zuul job settings from project-config | openstack/sahara-tests | https://review.openstack.org/593915 | | switch documentation job to new PTI | openstack/sahara-tests | https://review.openstack.org/593916 | | add python 3.6 unit test job | openstack/sahara-tests | https://review.openstack.org/593917 | +----------------------------------------------+---------------------------------+-------------------------------------+ From zbitter at redhat.com Mon Aug 20 20:46:34 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 20 Aug 2018 16:46:34 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: <606d2033-a7ab-4ec1-5172-d65ce4938e77@redhat.com> On 18/08/18 18:22, Eric Fried wrote: > A year ago we might have developed a feature where one patch would > straddle placement and nova. Six months ago we were developing features > where those patches were separate but in the same series. Today that's > becoming less and less the case: nrp, sharing providers, consumer > generations, and other things mentioned have had their placement side > completed and their nova side - if started at all - done completely > independently. The reshaper series is an exception - but looking back on > its development, Depends-On would have worked just as well. 
So you've given a list here of things that you think wouldn't gain any particular benefit from being under the same governance. (Or possibly this is just an argument for being in a separate repo, which everybody already agrees with?) Mel gave a list of things she thinks _would_ benefit from shared governance. Was there anything on her list that you'd disagree with? Is there anything on your list that Mel or Dan or anybody else would disagree with? Why? (Note: I personally don't even think it matters, but this is how you reach consensus.) > Agree the nova project is overloaded and would benefit from having > broader core reviewer coverage over placement code. The list Chris > gives above includes more than one non-nova core who should be made > placement cores as soon as that's a thing. I agree with this, but separate governance is not a prerequisite for it. Having a different/larger core team for a repo in Gerrit is technically very easy, and our governance rules leave it completely up to the project team (represented by the PTL) to decide. Mel indicated what I'd describe as non-opposition to that on IRC, provided that the nova-core team retained core review rights on the placement repo.[1] How does the Nova team as a whole feel about that? Would anybody object? Would that be sufficient to resolve the placement team's concerns about core reviewer coverage? cheers, Zane. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-20.log.html#t2018-08-20T17:36:58 From james.slagle at gmail.com Mon Aug 20 20:47:45 2018 From: james.slagle at gmail.com (James Slagle) Date: Mon, 20 Aug 2018 16:47:45 -0400 Subject: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad) Message-ID: As we start looking at how TripleO will address next generation deployment needs such as Edge, multi-site, and multi-cloud, I'd like to kick off a discussion around how TripleO can evolve and adapt to meet these new challenges. 
What are these challenges? I think the OpenStack Edge Whitepaper does a good job summarizing some of them: https://www.openstack.org/assets/edge/OpenStack-EdgeWhitepaper-v3-online.pdf They include: - management of distributed infrastructure - massive scale (thousands instead of hundreds) - limited network connectivity - isolation of distributed sites - orchestration of federated services across multiple sites We already have a lot of ongoing work that directly or indirectly starts to address some of these challenges. That work includes things like config-download, split-controlplane, metalsmith integration, validations, all-in-one, and standalone. I laid out some initial ideas in a previous message: http://lists.openstack.org/pipermail/openstack-dev/2018-July/132398.html I'll be reviewing some of that here and going into a bit more detail. These are some of the high level ideas I'd like to see TripleO start to address: - More separation between planning and deploying (likely to be further defined in spec discussion). We've had these concepts for a while, but we need to do a better job of surfacing them to users as deployments grow in size and complexity. With config-download, we can more easily separate the phases of rendering, downloading, validating, and applying the configuration. As we increase in scale to managing many deployments, we should take advantage of what each of those phases offer. The separation also makes the deployment more portable, as we should eliminate any restrictions that force the undercloud to be the control node applying the configuration. - Management of multiple deployments from a single undercloud. This is of course already possible today, but we need better docs and polish and more testing to flush out any bugs. - Plan and template management in git. This could be an iterative step towards eliminating Swift in the undercloud. Swift seemed like a natural choice at the time because it was an existing OpenStack service. 
However, I think git would do a better job at tracking history and comparing changes and is much more lightweight than Swift. We've been managing the config-download directory as a git repo, and I like this direction. For now, we are just putting the whole git repo in Swift, but I wonder if it makes sense to consider eliminating Swift entirely. We need to consider the scale of managing thousands of plans for separate edge deployments. I also think this would be a step towards undercloud simplification. - Orchestration between plans. I think there's general agreement around scaling up the undercloud to be more effective at managing and deploying multiple plans. The plans could be different OpenStack deployments potentially sharing some resources. Or, they could be deployments of different software stacks (Kubernetes/OpenShift, Ceph, etc). We'll need to develop some common interfaces for some basic orchestration between plans. It could include dependencies, ordering, and sharing parameter data (such as passwords or connection info). There is already some ongoing discussion about some of this work: http://lists.openstack.org/pipermail/openstack-dev/2018-August/133247.html I would suspect this would start out as collecting specific use cases, and then figuring out the right generic interfaces. - Multiple deployments of a single plan. This could be useful for doing many deployments that are all the same. Of course some info might be different, such as network IPs, hostnames, and node-specific details. We could have some generic input interfaces for those sorts of things without having to create new Heat stacks, which would allow re-using the same plan/stack for multiple deployments. When scaling to hundreds/thousands of edge deployments this could be really effective at side-stepping managing hundreds/thousands of Heat stacks. We may also need further separation between a plan and its deployment state to have this modularity. 
- Distributed management/application of configuration. Even though the configuration is portable (config-download), we may still want some automation around applying the deployment when not using the undercloud as a control node. I think things like ansible-runner or Ansible AWX could help here, or perhaps mistral-executor agents, or "mistral as a library". This would also make our workflows more portable. - New documentation highlighting some or all of the above features and how to take advantage of it for new use cases (thousands of edge deployments, etc). I see this as a sort of "TripleO Edge Deployment Guide" that would highlight how to take advantage of TripleO for Edge/multi-site use cases. Obviously all the ideas are a lot of work, and not something I think we'll complete in a single cycle. I'd like to pull a squad together focused on Edge/multi-site/multi-cloud and TripleO. On that note, this squad could also work together with other deployment projects that are looking at similar use cases and look to collaborate. If you're interested in working on this squad, I'd see our first tasks as being: - Brainstorming additional ideas to the above - Breaking down ideas into actionable specs/blueprints for stein (and possibly future releases). - Coming up with a consistent message around direction and vision for solving these deployment challenges. - Bringing together ongoing work that relates to these use cases together so that we're all collaborating with shared vision and purpose and we can help prioritize reviews/ci/etc. - Identifying any discussion items we need to work through in person at the upcoming Denver PTG. I'm happy to help facilitate the squad. If you have any feedback on these ideas or would like to join the squad, reply to the thread or sign up in the etherpad: https://etherpad.openstack.org/p/tripleo-edge-squad-status I'm just referring to the squad as "Edge" for now, but we can also pick a cooler owl themed name :). 
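The "orchestration between plans" idea above mentions dependencies and ordering between per-site plans. One minimal sketch of such an interface is a plain topological sort over a plan dependency graph; the plan names and the graph below are invented for illustration, and `graphlib` requires Python 3.9+.

```python
# Hypothetical sketch: order deployments of multiple plans by dependency.
# Plan names and the dependency graph are invented for illustration.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Map each plan to the set of plans it depends on.
plan_deps = {
    "central-control": {"ceph"},
    "edge-site-1": {"central-control"},
    "edge-site-2": {"central-control"},
}

# static_order() yields each plan only after all of its dependencies.
order = list(TopologicalSorter(plan_deps).static_order())
print(order)  # e.g. ['ceph', 'central-control', 'edge-site-1', 'edge-site-2']
```

A real implementation would also need cycle reporting (TopologicalSorter raises CycleError) and a way to pass shared parameter data, such as passwords or connection info, from a finished plan to its dependents.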
-- -- James Slagle -- From mriedemos at gmail.com Mon Aug 20 22:31:44 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 20 Aug 2018 17:31:44 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> References: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> Message-ID: On 8/17/2018 10:59 AM, Ed Leafe wrote: > I would like to hear from the Cinder and Neutron teams, especially those who were around when those compute sub-projects were split off into their own projects. Did you feel that being independent of compute helped or hindered you? And to those who are in those projects now, is there any sense that things would be better if you were still part of compute? Neutron wasn't split out of nova. -- Thanks, Matt From mriedemos at gmail.com Mon Aug 20 22:35:29 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 20 Aug 2018 17:35:29 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <20180817192117.vrc3t4la3ypf77nb@barron.net> References: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> <9729c622-a39e-d2b5-2b0b-f355153d9444@gmail.com> <20180817171307.s7hvfs6avi4mvu2d@barron.net> <20180817183426.GA30053@sm-workstation> <522145da-0a5b-6a7e-47ca-cdb3ef5d263c@gmail.com> <20180817192117.vrc3t4la3ypf77nb@barron.net> Message-ID: <5c28ba81-7ae2-afad-80ed-7615bfc03f46@gmail.com> On 8/17/2018 2:21 PM, Tom Barron wrote: > I think that even standalone if I'm running a scheduler (i.e., not doing > emberlib version of standalone) then I'm likely to want to run them > active-active on multiple nodes and will need a solution for the current > races.  So even standalone we face the question of do we use placement > to solve that issue or do we introduce some coordination among the > schedulers themselves to solve it. Why *wouldn't* you use placement in that case? 
It's extremely lightweight (in its current form), it's just DB and API. It was meant to solve scheduler races (like we have had in nova since the beginning). -- Thanks, Matt From ed at leafe.com Mon Aug 20 22:40:13 2018 From: ed at leafe.com (Ed Leafe) Date: Mon, 20 Aug 2018 17:40:13 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> Message-ID: <658B0437-50D5-4D40-B307-A0FFE490FDB5@leafe.com> On Aug 20, 2018, at 5:31 PM, Matt Riedemann wrote: > >> I would like to hear from the Cinder and Neutron teams, especially those who were around when those compute sub-projects were split off into their own projects. Did you feel that being independent of compute helped or hindered you? And to those who are in those projects now, is there any sense that things would be better if you were still part of compute? > > Neutron wasn't split out of nova. Yes, that’s correct, and the continued existence of nova-network testifies to that. But what is also correct is that the networking effort was separated from Nova. Since the existing nova-network code wasn’t designed to handle the sort of networking that was envisioned to be needed, a separate Quantum project was started, by many of the people who contributed to nova-network in the past. That detail aside, the question is still valid: did the split from working within the Nova project to working as an independent project have positive or negative effects?
In-Reply-To: <20180817160937.GB24275@sm-workstation> References: <20180817160937.GB24275@sm-workstation> Message-ID: <7660c4b7-a22f-37a6-00db-b03cffb7633c@gmail.com> On 8/17/2018 11:09 AM, Sean McGinnis wrote: > This reluctance on having it part of Nova may be real or just perceived, but > with it within Nova it will likely be an uphill battle for some time convincing > other projects that it is a nicely separated common service that they can use. Cyborg, Ironic and Neutron are all already involved in interfacing with placement to get things done in nova. I assume the majority of people that have a perception that it's part of nova don't know enough about it, or don't realize that placement is a separate service type entry in the service catalog. When you're talking to placement, you're not talking to nova. The code is just in the nova repo and the core team is the nova core team. The code was written as separate as possible from the start so it could be extracted to its own repo (no RPC usage for example with the nova services). The core team issue is a community problem at this point, which is the main source of conflict on whether or not placement remains within the compute program, at least for some interim, or if it's directly extracted into its own program in governance. -- Thanks, Matt From mriedemos at gmail.com Mon Aug 20 22:44:36 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 20 Aug 2018 17:44:36 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <658B0437-50D5-4D40-B307-A0FFE490FDB5@leafe.com> References: <9ACFA0BC-B345-479F-A050-F8EFFD6D27FD@leafe.com> <658B0437-50D5-4D40-B307-A0FFE490FDB5@leafe.com> Message-ID: <1cc28450-c358-164d-6218-7609849d4bab@gmail.com> On 8/20/2018 5:40 PM, Ed Leafe wrote: > That detail aside, the question is still valid: did the split from working within the Nova project to working as an independent project have positive or negative effects?
Or both? I'm sure the answer has got to be "both", right? Neutron integration with nova took several years. Just stabilizing neutron and getting it to the point of being able to run in production took a long time (I'm not an operator but I'm sure there are operators that can attest to this - hell it was even a performance/race problem in our gate for a long time). Where we're at now, and have been for the last several cycles, neutron is great* and I'm glad it's separate. But everyone working in both projects knew it took a long time to get there. *I still have to read the manual every time I want to create a port from scratch, but hey... -- Thanks, Matt From mriedemos at gmail.com Mon Aug 20 22:57:44 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 20 Aug 2018 17:57:44 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <08136ba6-9acc-1bb8-d73b-e5f51c82c62a@gmail.com> References: <08136ba6-9acc-1bb8-d73b-e5f51c82c62a@gmail.com> Message-ID: On 8/17/2018 11:56 AM, melanie witt wrote: > We've seen exciting progress in finally solving a lot of these issues as > we've been developing placement. But, there is still a significant > amount of important work to do in Nova that depends on placement. For > example, we need to integrate nested resource providers into the virt > drivers in Nova to leverage it for vGPUs and NUMA modeling. We need > affinity modeling in placement to properly handle affinity with multiple > cells. We need shared storage accounting to properly handle disk usage > for deployments on shared storage. As was mentioned in the epic #openstack-tc channel discussion today, most of this is either already done in placement and nova, as a client, is lagging (N-R-P and shared storage) or we don't have concrete plans for the rest (affinity modeling). Right? 
> > As we've worked to develop placement and use it in Nova, we've found in > most cases that we've had to develop the Nova side and the placement > side together, at the same time, to make things work. This isn't really > surprising, as with any brand new functionality, it's difficult to > fulfill a use case completely without integrating things together and > iterating until everything works. Given that, I'd rather see placement > stay under compute so we can iterate quickly, as we still need to > develop new features in placement and exercise them for the first time, > in Nova. Once the major aforementioned efforts have been figured out and > landed with close coordination, I think it would make more sense to look > at placement being outside of the compute project. It's definitely true that major changes done across two separate APIs and teams will be more complicated and take longer, case in point is volume multi-attach which took at least 3 microversions in cinder (3.27, 3.44, 3.48) before nova, as a client, was fully working properly with it. I can't say we're really iterating quickly as it stands today. And unless we have concrete plans on what we need out of placement *today* for these big things that nova needs (affinity modeling is probably the hardest) it's hard to justify not making it its own project in governance - otherwise we could delay that move for a very long time, like how many cycles did we push off fixing [1] because we said placement would solve this so just sit tight? Once we split, it will take leadership for major efforts from someone like ildiko did for volume multi-attach to bring both teams together to get things done, although I expect any split out placement would at least have nova-core as an initial subset of the placement-core team. 
I personally don't care much either way if the placement repo is under the compute program for some interim amount of time, but I don't think we can keep it from being a separately governed project for an undefined amount of time while nova figures out what major things we need first. [1] https://bugs.launchpad.net/nova/+bug/1469179 -- Thanks, Matt From mriedemos at gmail.com Mon Aug 20 23:09:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 20 Aug 2018 18:09:03 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> Message-ID: <69da09e3-106e-d8c1-a922-36bc4e61e482@gmail.com> On 8/17/2018 12:30 PM, Dan Smith wrote: > I know politics will be involved in this, but this is a really terrible > reason to do a thing, IMHO. After the most recent meeting we had with > the Cinder people on placement adoption, I'm about as convinced as ever > that Cinder won't (and won't need to)_consume_ placement any time > soon. I hope it will_report_ to placement so Nova can make better > decisions, just like Neutron does now, but I think that's the extent > we're likely to see if we're honest. [1] is a concrete example of where cinder would benefit from using placement to avoid scheduling conflicts, which was one of the primary reasons it was developed for nova as well. [1] https://review.openstack.org/#/c/559718/ -- Thanks, Matt From mriedemos at gmail.com Mon Aug 20 23:11:13 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 20 Aug 2018 18:11:13 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> Message-ID: On 8/17/2018 12:47 PM, Ed Leafe wrote: > I’d like this to be a technical discussion, with as little political overtones as possible. Everyone agrees that technically placement should be in its own repo. 
The entire debate is political and regards people and who will be making decisions in the placement repo once it's split out. It's just hard to say that because it's confrontational and awkward. -- Thanks, Matt From mriedemos at gmail.com Mon Aug 20 23:27:55 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 20 Aug 2018 18:27:55 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> On 8/18/2018 7:25 AM, Chris Dent wrote: > 5. In OpenStack we have a tradition of the contributors having a > strong degree of self-determination. If that tradition is to be > upheld, then it would make sense that the people who designed and > wrote the code that is being extracted would get to choose what > happens with it. As much as Mel's and Dan's (only picking on them > here because they are the dissenting voices that have showed up so > far) input has been extremely important and helpful in the evolution > of placement, they are not those people. To be fair, lots of changes *in* placement *for* nova have been influenced by Dan even if Dan wasn't writing the placement side changes, because we definitely have a placement sub-team that works on the placement side of things and nova people that work on the client side nova things. For example, the atomic POST /allocations stuff Dan needed for fixing doubled-up allocations during move operations in nova. So my point is, a lot of the stuff done has been a team effort. > > So my hope is that (in no particular order) Jay Pipes, Eric Fried, > Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, > Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to > placement whom I'm forgetting [1] would express their preference on > what they'd like to see happen. I'll try to summarize my position: 1. 
Placement should eventually be its own project under OpenStack governance, not under compute, because it's not just nova; I don't really care if it's under compute in some interim while it's technically extracted to a new repo. As Zane pointed out, that might be the best compromise for now to iterate and make progress on what is the hardest *technical* part of this extraction. 2. I don't think we can forever block the extraction on big changes that nova needs, especially if we don't already have concrete plans for what is needed to get those things done now. 3. The biggest fear is on the people involved in what placement on its own might be, because the current placement team is made of, for the most part, highly opinionated people that spend a lot of time arguing because they have, at times, conflicting design principles which can impede getting anything done. Concessions are made after (1) people weigh in from the "outside" or (2) exhaustion sets in. Related to the extraction question, I think if we want to make progress, keeping a new placement repo under compute in governance is an incremental step so we can add a new core team with nova-core being a subset of the initial placement-core team, and then we can add people that wouldn't have otherwise been made nova-core because of a sole focus on placement (cdent is an obvious candidate here). But I realize keeping it under compute means risking #2 could keep it under compute for a long time. I don't really know how you fix #3 except people being honest about it and actually talking through things to reach consensus, and doing what we've said to do in retrospectives many times before - reach out for external input earlier and have face-to-face conversations (hangouts) earlier *before* conflicts start to damage relationships. 
-- Thanks, Matt From ed at leafe.com Mon Aug 20 23:42:24 2018 From: ed at leafe.com (Ed Leafe) Date: Mon, 20 Aug 2018 18:42:24 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> Message-ID: On Aug 20, 2018, at 6:27 PM, Matt Riedemann wrote: > > 3. The biggest fear is on the people involved in what placement on its own might be, because the current placement team is made of, for the most part, highly opinionated people that spend a lot of time arguing because they have, at times, conflicting design principles which can impede getting anything done. Concessions are made after (1) people weigh in from the "outside" or (2) exhaustion sets in. While this is certainly true, the experience with Nova is not unusual in that regard. There have always been highly opinionated people with conflicting ideas. Eventually a choice is made; occasionally it is by persuasion, but the exhaustion bit is there too. What we've seen in Nova over the years is that generally those who have different opinions eventually fall by the wayside, leaving behind those who share the opinion of the choice. It becomes self-selecting. There isn't any reason that a similar process won't happen among those highly-opinionated placement people. It was said in the #openstack-tc discussions, but for those on the mailing list, the biggest concern among the Nova core developers is that the consensus among Placement cores will certainly not align with the needs of Nova. I personally think that's ridiculous, and, as one of the very opinionated people involved, a bit insulting. No one wants to see either Nova or Placement fail. -- Ed Leafe -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From tony at bakeyournoodle.com Mon Aug 20 23:53:35 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 21 Aug 2018 09:53:35 +1000 Subject: [openstack-dev] [os-upstream-institute] Restarting meetings on August 20 In-Reply-To: References: <8D827CFA-946D-4C11-BBC1-4B8408FFCD0B@gmail.com> Message-ID: <20180820235335.GE26778@thor.bakeyournoodle.com> On Mon, Aug 20, 2018 at 10:23:07AM -0700, Kendall Nelson wrote: > Hello Everyone, > > To avoid meeting conflicts with the Women of OpenStack, we will actually be > doing meetings weekly on Mondays at 20:00 UTC on odd weeks. > > Long story short, our kickoff meeting after this luxurious summer break > will be a week from today on August 27th. > > Thanks everyone! > > See you next week :) I have a conflicting meeting on that schedule. I'll do my best to follow along from the meeting logs. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mriedemos at gmail.com Tue Aug 21 00:46:32 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 20 Aug 2018 19:46:32 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: <6ceaca24-64c0-9d8e-f858-ced4ddbc34b7@gmail.com> On 8/20/2018 1:32 PM, Hongbin Lu wrote: > * Is placement stable enough so that it won't break us often? Yes, we use microversions for this reason. > * If there is a breaking change in placement and we contribute a fix, > how fast the fix will be merged? Eric hedged on this, but I think the answer is yes - if there is a thing that breaks you and you let us know it breaks you, we'll give attention to the fix, especially regressions. 
We've done this with Ironic when it comes up, and we've done it with other projects that consume not only placement but nova in general (trove, triple-o, etc). > * If there is a feature request from our side and we contribute patches > to placement, will the patches be accepted? As anything it depends on the feature request. API changes require deeper review because it's a long-term commitment to supporting that API, so they aren't taken lightly. But chances are if you need something from placement, someone else likely needs the same thing. -- Thanks, Matt From mriedemos at gmail.com Tue Aug 21 01:08:08 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 20 Aug 2018 20:08:08 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> Message-ID: <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> On 8/20/2018 6:42 PM, Ed Leafe wrote: > It was said in the #openstack-tc discussions, but for those on the mailing list, the biggest concern among the Nova core developers is that the consensus among Placement cores will certainly not align with the needs of Nova. I personally think that's ridiculous, and, as one of the very opinionated people involved, a bit insulting. No one wants to see either Nova or Placement to fail. I believe you're paraphrasing what I said, and I never said I was speaking for all nova core developers. I don't think anyone working on placement would intentionally block things nova needs or try to see nova fail. -- Thanks, Matt From mriedemos at gmail.com Tue Aug 21 01:23:49 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 20 Aug 2018 20:23:49 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> Message-ID: <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> On 8/20/2018 8:08 PM, Matt Riedemann wrote: > On 8/20/2018 6:42 PM, Ed Leafe wrote: >> It was said in the #openstack-tc discussions, but for those on the >> mailing list, the biggest concern among the Nova core developers is >> that the consensus among Placement cores will certainly not align with >> the needs of Nova. I personally think that's ridiculous, and, as one >> of the very opinionated people involved, a bit insulting. No one wants >> to see either Nova or Placement to fail. > > I believe you're paraphrasing what I said, and I never said I was > speaking for all nova core developers. I don't think anyone working on > placement would intentionally block things nova needs or try to see nova > fail. Here is an example of the concern. In Sydney we talked about adding types to the consumers resource in placement so that nova could use placement for counting quotas [1]. Chris considered it a weird hack but it's pretty straight-forward from a nova consumption point of view. So if placement were separately governed with let's say Chris as PTL, would something like that become a holy war type issue because it's "weird" and convolutes the desire for a minimalist API? I think Chris' stance on this particular item has softened over time as more of a "meh" but it's a worry about extracting with a separate team that is against changes because they are not ideal for Placement yet are needed for a consumer of Placement. 
I understand this is likely selfish on the part of the nova people that want this (including myself) and maybe close-minded to alternative solutions to the problem (I'm not sure if it's all been thought out end-to-end yet, Mel would likely know the latest on this item). Anyway, I like to have examples when I'm stating something to gain understanding, so that's what I'm trying to do here - explain, with an example, what I said in the tc channel discussion today. [1] Line 55 https://etherpad.openstack.org/p/SYD-forum-nova-placement-update -- Thanks, Matt From coolsvap at gmail.com Tue Aug 21 04:24:35 2018 From: coolsvap at gmail.com (Swapnil Kulkarni) Date: Tue, 21 Aug 2018 09:54:35 +0530 Subject: [openstack-dev] [all] PyCharm Licences Message-ID: I have renewed the PyCharm licenses for the community until Aug 13, 2019. Everyone who is using it should have it updated automatically. Please do not request renewal again. At the same time, I would request not to request multiple licenses with multiple email addresses. -- Best Regards, Swapnil Kulkarni irc : coolsvap From duc.openstack at gmail.com Tue Aug 21 04:34:46 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Mon, 20 Aug 2018 21:34:46 -0700 Subject: [openstack-dev] [senlin] Senlin Weekly Meeting Time Change Message-ID: Hi, As we are starting the Stein cycle, I would like to start having weekly meetings again for Senlin. I'm proposing to move the weekly meeting to the following time: Friday 5:30 UTC to 6:30 UTC in #senlin channel Please reply if this works for you or reply with an alternative time slot. 
Thanks, Duc From liu.xuefeng1 at zte.com.cn Tue Aug 21 05:03:53 2018 From: liu.xuefeng1 at zte.com.cn (liu.xuefeng1 at zte.com.cn) Date: Tue, 21 Aug 2018 13:03:53 +0800 (CST) Subject: [openstack-dev] Re: [senlin] Senlin Weekly Meeting Time Change In-Reply-To: References: CAN81NT5jSv-xWN=LuySXmLsovWgYPrWF=YO=52JCELGvvFgMDA@mail.gmail.com Message-ID: <201808211303537097563@zte.com.cn> ok Original Mail From: DucTruong To: openstack-dev at lists.openstack.org Date: 2018-08-21 12:36 Subject: [openstack-dev] [senlin] Senlin Weekly Meeting Time Change Hi, As we are starting the Stein cycle, I would like to start having weekly meetings again for Senlin. I'm proposing to move the weekly meeting to the following time: Friday 5:30 UTC to 6:30 UTC in #senlin channel Please reply if this works for you or reply with an alternative time slot. Thanks, Duc __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From xuanlangjian at gmail.com Tue Aug 21 06:22:00 2018 From: xuanlangjian at gmail.com (x Lyn) Date: Tue, 21 Aug 2018 14:22:00 +0800 Subject: [openstack-dev] [senlin] Senlin Weekly Meeting Time Change In-Reply-To: References: Message-ID: <7E441F59-75EB-4C52-8626-49EDFF1BB177@gmail.com> +1, works for me. > On Aug 21, 2018, at 12:34 PM, Duc Truong wrote: > > Hi, > > As we are starting the Stein cycle, I would like to start having weekly > meetings again for Senlin. I'm proposing to move the weekly meeting > to the following time: > > Friday 5:30 UTC to 6:30 UTC in #senlin channel > > Please reply if this works for you or reply with an alternative time > slot. 
> > Thanks, > > Duc > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gergely.csatari at nokia.com Tue Aug 21 06:40:19 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Tue, 21 Aug 2018 06:40:19 +0000 Subject: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad) In-Reply-To: References: Message-ID: Hi, There was a two-day workshop on edge requirements back in Dublin. The notes are stored here: https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG I think there are some areas there that can be interesting for the squad. The Edge Computing Group plans to have a day-long discussion in Denver. Maybe we could have a short discussion there about these requirements. Br, Gerg0 -----Original Message----- From: James Slagle Sent: Monday, August 20, 2018 10:48 PM To: OpenStack Development Mailing List Subject: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad) As we start looking at how TripleO will address next generation deployment needs such as Edge, multi-site, and multi-cloud, I'd like to kick off a discussion around how TripleO can evolve and adapt to meet these new challenges. What are these challenges? I think the OpenStack Edge Whitepaper does a good job summarizing some of them: https://www.openstack.org/assets/edge/OpenStack-EdgeWhitepaper-v3-online.pdf They include: - management of distributed infrastructure - massive scale (thousands instead of hundreds) - limited network connectivity - isolation of distributed sites - orchestration of federated services across multiple sites We already have a lot of ongoing work that directly or indirectly starts to address some of these challenges. 
That work includes things like config-download, split-controlplane, metalsmith integration, validations, all-in-one, and standalone. I laid out some initial ideas in a previous message: http://lists.openstack.org/pipermail/openstack-dev/2018-July/132398.html I'll be reviewing some of that here and going into a bit more detail. These are some of the high level ideas I'd like to see TripleO start to address: - More separation between planning and deploying (likely to be further defined in spec discussion). We've had these concepts for a while, but we need to do a better job of surfacing them to users as deployments grow in size and complexity. With config-download, we can more easily separate the phases of rendering, downloading, validating, and applying the configuration. As we increase in scale to managing many deployments, we should take advantage of what each of those phases offer. The separation also makes the deployment more portable, as we should eliminate any restrictions that force the undercloud to be the control node applying the configuration. - Management of multiple deployments from a single undercloud. This is of course already possible today, but we need better docs and polish and more testing to flush out any bugs. - Plan and template management in git. This could be an iterative step towards eliminating Swift in the undercloud. Swift seemed like a natural choice at the time because it was an existing OpenStack service. However, I think git would do a better job at tracking history and comparing changes and is much more lightweight than Swift. We've been managing the config-download directory as a git repo, and I like this direction. For now, we are just putting the whole git repo in Swift, but I wonder if it makes sense to consider eliminating Swift entirely. We need to consider the scale of managing thousands of plans for separate edge deployments. I also think this would be a step towards undercloud simplification. - Orchestration between plans. 
I think there's general agreement around scaling up the undercloud to be more effective at managing and deploying multiple plans. The plans could be different OpenStack deployments potentially sharing some resources. Or, they could be deployments of different software stacks (Kubernetes/OpenShift, Ceph, etc). We'll need to develop some common interfaces for some basic orchestration between plans. It could include dependencies, ordering, and sharing parameter data (such as passwords or connection info). There is already some ongoing discussion about some of this work: http://lists.openstack.org/pipermail/openstack-dev/2018-August/133247.html I would suspect this would start out as collecting specific use cases, and then figuring out the right generic interfaces. - Multiple deployments of a single plan. This could be useful for doing many deployments that are all the same. Of course some info might be different such as network IPs, hostnames, and node specific details. We could have some generic input interfaces for those sorts of things without having to create new Heat stacks, which would allow re-using the same plan/stack for multiple deployments. When scaling to hundreds/thousands of edge deployments this could be really effective at side-stepping managing hundreds/thousands of Heat stacks. We may also need further separation between a plan and its deployment state to have this modularity. - Distributed management/application of configuration. Even though the configuration is portable (config-download), we may still want some automation around applying the deployment when not using the undercloud as a control node. I think things like ansible-runner or Ansible AWX could help here, or perhaps mistral-executor agents, or "mistral as a library". This would also make our workflows more portable. - New documentation highlighting some or all of the above features and how to take advantage of it for new use cases (thousands of edge deployments, etc). 
I see this as a sort of "TripleO Edge Deployment Guide" that would highlight how to take advantage of TripleO for Edge/multi-site use cases. Obviously all the ideas are a lot of work, and not something I think we'll complete in a single cycle. I'd like to pull a squad together focused on Edge/multi-site/multi-cloud and TripleO. On that note, this squad could also work together with other deployment projects that are looking at similar use cases and look to collaborate. If you're interested in working on this squad, I'd see our first tasks as being: - Brainstorming additional ideas to the above - Breaking down ideas into actionable specs/blueprints for stein (and possibly future releases). - Coming up with a consistent message around direction and vision for solving these deployment challenges. - Bringing together ongoing work that relates to these use cases together so that we're all collaborating with shared vision and purpose and we can help prioritize reviews/ci/etc. - Identifying any discussion items we need to work through in person at the upcoming Denver PTG. I'm happy to help facilitate the squad. If you have any feedback on these ideas or would like to join the squad, reply to the thread or sign up in the etherpad: https://etherpad.openstack.org/p/tripleo-edge-squad-status I'm just referring to the squad as "Edge" for now, but we can also pick a cooler owl themed name :). 
-- -- James Slagle -- __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From balazs.gibizer at ericsson.com Tue Aug 21 07:39:27 2018 From: balazs.gibizer at ericsson.com (Balázs Gibizer) Date: Tue, 21 Aug 2018 09:39:27 +0200 Subject: [openstack-dev] [nova] Notification subteam meeting is cancelled this week Message-ID: <1534837167.4321.0@smtp.office365.com> Hi, There won't be a subteam meeting this week. Cheers, gibi From dangtrinhnt at gmail.com Tue Aug 21 08:07:47 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 21 Aug 2018 17:07:47 +0900 Subject: [openstack-dev] [TC][Searchlight] Setting up milestones for Searchlight on Launchpad Message-ID: Dear TC and Searchlight team, In an effort to get Searchlight back on track, I would like to set up milestones as well as clean up the incomplete bugs, blueprints etc. on Launchpad [1]. I was added to the Searchlight Drivers team but I still cannot touch the milestone configuration. In addition, I would like to move forward with unreviewed patches on Gerrit so I need PTL privileges on the Searchlight project. Do I have to wait for [2] to be merged? [1] https://launchpad.net/searchlight [2] https://review.openstack.org/#/c/590601/ Thanks and regards, Trinh Nguyen | Founder & Chief Architect E: dangtrinhnt at gmail.com | W: www.edlab.xyz -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ivolinengong at gmail.com Tue Aug 21 08:21:58 2018 From: ivolinengong at gmail.com (Ivoline Ngong) Date: Tue, 21 Aug 2018 08:21:58 +0000 Subject: [openstack-dev] New Contributor In-Reply-To: References: <49b15368-a67b-dfcb-0501-6b527a42c71c@openstack.org> Message-ID: <58b5c6ab-fe55-25ae-769b-6cc5f55f9b03@mixmax.com> Hello Kendall and Telles, Thanks so much for the warm welcome. I feel at home already. The links sent were quite helpful and gave me an insight into what OpenStack is all about. After reading lightly about the different projects, the Sahara project caught my attention. Probably because I am interested in data science. I will love to explore the Sahara project some more. Cheers, Ivoline On Mon, Aug 20, 2018 8:42 PM, Telles Nobrega tenobreg at redhat.com wrote: Hi Ivoline, Also a little late but wanted to say welcome aboard, hopefully you will find a very welcoming community here and of course a lot of work to do. I work with Sahara, the big data processing project of OpenStack, we need help for sure. If this area interests you in any way, feel free to join us at #openstack-sahara on IRC or email me and we can send some work in your direction. On Mon, Aug 20, 2018 at 2:37 PM Kendall Nelson wrote: Hello Ivoline, While I'm a little late to the party, I still wanted to say welcome and offer my help :) If you have any questions about the links you've been sent, I'm happy to answer them! I can also help you find/get started with a team and introduce you to community members whenever you're ready. -Kendall Nelson (diablo_rojo) On Mon, 20 Aug 2018, 4:08 am Ivoline Ngong, wrote: Thanks so much for the help, Josh and Thierry. I'll check out the links and hopefully find a way forward from there. Will get back here in case I have any questions. Cheers, Ivoline On Mon, Aug 20, 2018, 12:01 Thierry Carrez wrote: Ivoline Ngong wrote: > I am Ivoline Ngong. I am a Cameroonian who lives in Turkey. I will love > to contribute to Open source through OpenStack. 
I code in Java and > Python and I think OpenStack is a good fit for me. > I'll appreciate it if you can point me to the right direction on how I > can get started. Hi Ivoline, Welcome to the OpenStack community! The OpenStack Technical Committee maintains a list of areas in most need of help: https://governance.openstack.org/tc/reference/help-most-needed.html Depending on your interest, you could pick one of those projects and reach out to the mentioned contact points. For more general information on how to contribute, you can check out our contribution portal: https://www.openstack.org/community/ -- Thierry Carrez (ttx) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dougal at redhat.com Tue Aug 21 08:22:47 2018 From: dougal at redhat.com (Dougal Matthews) Date: Tue, 21 Aug 2018 09:22:47 +0100 Subject: [openstack-dev] [mistral] No Denver PTG Sessions Message-ID: Hi all, Unfortunately due to some personal conflicts and trouble with travel plans, there will be no Mistral cores at the Denver PTG. This means that we have had to cancel the Mistral sessions. I recently asked if anyone was planning to attend and only got one maybe. I am considering trying to arrange a "virtual PTG", so we can do some planning for Stein. However, I'm not sure how/if that could work. Do you think this would be a good idea? Suggestions how to organise one would be very welcome! Thanks, Dougal -------------- next part -------------- An HTML attachment was scrubbed... URL: From tengqim at cn.ibm.com Tue Aug 21 09:12:13 2018 From: tengqim at cn.ibm.com (Qiming Teng) Date: Tue, 21 Aug 2018 09:12:13 +0000 Subject: [openstack-dev] [senlin] Senlin Weekly Meeting Time Change In-Reply-To: References: Message-ID: <20180821091212.GA13959@rcp.sl.cloud9.ibm.com> Works for me. -Qiming On Mon, Aug 20, 2018 at 09:34:46PM -0700, Duc Truong wrote: > Hi, > > As we are starting the Stein cycle, I would like to start having weekly > meetings again for Senlin. I'm proposing to move the weekly meeting > to the following time: > > Friday 5:30 UTC to 6:30 UTC in #senlin channel > > Please reply if this works for you or reply with an alternative time > slot. > > Thanks, > > Duc From thierry at openstack.org Tue Aug 21 09:15:19 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 21 Aug 2018 11:15:19 +0200 Subject: [openstack-dev] [TC][Searchlight] Setting up milestones for Searchlight on Launchpad In-Reply-To: References: Message-ID: <0ff3b148-2e46-02ba-9835-796540e7a6df@openstack.org> Trinh Nguyen wrote: > In an effort to get Searchlight back on track, I would like to set up > milestones as well as clean up the incomplete bugs, blueprints etc. 
on > Launchpad [1] I was added to the Searchlight Drivers team but I still > can not touch the milestone configuration. As a member of the "maintainer" team in Launchpad you should be able to register a series ("stein") and then add milestones to that series. You should see a "Register a series" link under "Series and milestones" at https://launchpad.net/searchlight > In addition, I would like to move forward with unreviewed patched on > Gerrit so I need PTL privileges on Searchlight project. Do I have to > wait for [2] to be merged? For the TC to step in and add you to searchlight-core, yes, we'll have to wait for the merging of that patch. To go faster, you could ask any of the existing members in that group to directly add you: https://review.openstack.org/#/admin/groups/964,members (NB: this group looks like it should be updated :) ) -- Thierry Carrez (ttx) From dangtrinhnt at gmail.com Tue Aug 21 09:21:40 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 21 Aug 2018 18:21:40 +0900 Subject: [openstack-dev] [TC][Searchlight] Setting up milestones for Searchlight on Launchpad In-Reply-To: <0ff3b148-2e46-02ba-9835-796540e7a6df@openstack.org> References: <0ff3b148-2e46-02ba-9835-796540e7a6df@openstack.org> Message-ID: Hi Thierry, I just saw that link. Thanks :) Because I couldn't contact any of the core members I emailed this list. I will update the searchlight-core as planned after I am added. Thanks for your response, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Tue, Aug 21, 2018 at 6:15 PM Thierry Carrez wrote: > Trinh Nguyen wrote: > > In an effort to get Searchlight back on track, I would like to set up > > milestones as well as clean up the incomplete bugs, blueprints etc. on > > Launchpad [1] I was added to the Searchlight Drivers team but I still > > can not touch the milestone configuration. 
> > As a member of the "maintainer" team in Launchpad you should be able to > register a series ("stein") and then add milestones to that series. You > should see a "Register a series" link under "Series and milestones" at > https://launchpad.net/searchlight > > > In addition, I would like to move forward with unreviewed patched on > > Gerrit so I need PTL privileges on Searchlight project. Do I have to > > wait for [2] to be merged? > > For the TC to step in and add you to searchlight-core, yes, we'll have > to wait for the merging of that patch. > > To go faster, you could ask any of the existing members in that group to > directly add you: > > https://review.openstack.org/#/admin/groups/964,members > > (NB: this group looks like it should be updated :) ) > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Aug 21 09:28:26 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 21 Aug 2018 10:28:26 +0100 (BST) Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> Message-ID: On Mon, 20 Aug 2018, Matt Riedemann wrote: > Here is an example of the concern. In Sydney we talked about adding types to > the consumers resource in placement so that nova could use placement for > counting quotas [1]. 
Chris considered it a weird hack but it's pretty > straight-forward from a nova consumption point of view. So if placement were > separately governed with let's say Chris as PTL, would something like that > become a holy war type issue because it's "weird" and convolutes the desire > for a minimalist API? I think Chris' stance on this particular item has > softened over time as more of a "meh" but it's a worry about extracting with > a separate team that is against changes because they are not ideal for > Placement yet are needed for a consumer of Placement. I understand this is > likely selfish on the part of the nova people that want this (including > myself) and maybe close-minded to alternative solutions to the problem (I'm > not sure if it's all been thought out end-to-end yet, Mel would likely know > the latest on this item). Anyway, I like to have examples when I'm stating > something to gain understanding, so that's what I'm trying to do here - > explain, with an example, what I said in the tc channel discussion today. Since we're airing things out (which I think is a good thing, at least in the long run), I'll add to this. I think that's a pretty good example of where I did express some resistance, especially since were it to come up again, I still would express some (see below). But let's place that resistance in some context. In the epic irc discussion you mentioned that one fear is that I might want to change the handling of microversions [2] because I'm somewhat famously ambivalent about them. That's correct, I am. However, I would hope that the fact that placement has one of the easier and more flexible microversions systems around (written by me) and I went to the trouble to extract it to a library [3] and I'm the author of the latest revision on how to microversion [4] is powerful evidence that once consensus is reached I will do my utmost to make things align with our shared plans and goals. 
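[Editor's note: for readers unfamiliar with the mechanics being debated, a microversion request is just a header such as `openstack-api-version: placement 1.26` that the service parses and compares numerically, not lexically. A minimal illustrative sketch — these function names are hypothetical and this is not the API of the microversion_parse library mentioned below:]

```python
# Toy sketch of OpenStack-style microversion handling (illustrative only;
# not the microversion_parse library, whose API may differ).

def parse_microversion(headers, service_type):
    """Extract (major, minor) for service_type from an
    'openstack-api-version' header like 'placement 1.26'."""
    value = headers.get("openstack-api-version", "")
    for part in value.split(","):
        fields = part.strip().split()
        if len(fields) == 2 and fields[0] == service_type:
            major, minor = fields[1].split(".")
            return int(major), int(minor)
    return None  # client did not request a microversion for this service

def supports(requested, minimum):
    """True if the requested microversion is at least `minimum`.
    Tuples compare numerically, so (1, 26) > (1, 9), unlike the
    strings '1.26' < '1.9'."""
    return requested >= minimum

headers = {"openstack-api-version": "placement 1.26"}
version = parse_microversion(headers, "placement")
print(version)                    # (1, 26)
print(supports(version, (1, 9)))  # True
```

The numeric-tuple comparison is the whole point of parsing rather than string-comparing: any behavior change gated behind, say, 1.26 must stay invisible to a client that asked for 1.25 or lower, which is why "does this need a microversion?" is the recurring interoperability question in this thread.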
So, with the notion of allocation or consumer types (both have been discussed): If we start from the position that I've been with placement from very early on and am cognizant of its several goals and at the same time also aware of its limited "human resources" it seems normal and appropriate to me that at least some members of the group responsible for making it must make sure that we work to choose the right things (of several choices) to do, in part by rigorously questioning additional features when existing planned features are not yet done. In this case we might ask: is it right to focus on incompletely thought out consumer type management for the eventual support of quota handling (and other introspection) when we haven't yet satisfied what has been described by some downstream people (eglynn, to be specific) as job 1: getting shared disk working correctly (which we still haven't managed across the combined picture of nova and placement)? From my perspective questioning additional features, so that they are well proven, is simply part of the job and we all should be doing it. If we are never hitting questions and disagreements we are almost certainly running blind and our results will be less good. Once we've hashed things out, I'll help make what we've chosen happen. The evidence of this is everywhere. Consider this: I've known (at least subconsciously) about the big reveal in yesterday's IRC discussion for a long time, but I keep working to make nova, placement and OpenStack better. Day in, day out, in the face of what is perhaps the biggest insult to my professional integrity that I've ever experienced. If this were a different time some portion of "we" would need to do pistols at dawn, but that's dumb. I just want to get on with making stuff. The right stuff. Please don't question my commitment, but do question my designs and plans and help me make them the best they can be. 
Elephant alert, to keep this healthy full exposure rolling: The kind of questioning and "proving" described above happens all the time in Nova with specs and other proposals that are presented. We ask proposers to demonstrate that their ideas are necessary and sound, and if they are not _or_ we don't have time, we say "no" or "later". This is good and correct and part of the job and helps make nova the best it can be given the many constraints it experiences. As far as I can tell the main difference between me asking questions about proposed placement features when they are presented by nova cores and the more general nova-spec situation is who is being subjected to the questions and by whom. > [1] Line 55 https://etherpad.openstack.org/p/SYD-forum-nova-placement-update [2] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-20.log.html#t2018-08-20T20:35:51 [3] https://pypi.org/project/microversion_parse/ [4] http://specs.openstack.org/openstack/api-sig/guidelines/api_interoperability.html -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From dtantsur at redhat.com Tue Aug 21 10:15:15 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 21 Aug 2018 12:15:15 +0200 Subject: [openstack-dev] [mistral] No Denver PTG Sessions In-Reply-To: References: Message-ID: <8de32629-82a1-bf54-f667-f50de88da144@redhat.com> On 08/21/2018 10:22 AM, Dougal Matthews wrote: > Hi all, > > Unfortunately due to some personal conflicts and trouble with travel plans, > there will be no Mistral cores at the Denver PTG. This means that we have had to > cancel the Mistral sessions. I recently asked if anyone was planning to attend > and only got one maybe. > > I am considering trying to arrange a "virtual PTG", so we can do some planning > for Stein. However, I'm not sure how/if that could work. Do you think this would > be a good idea? Suggestions how to organise one would be very welcome! 
We did a few virtual midcycles for ironic, and it ended up quite well. While it did require some people to stay awake at unusual times, it did allow people without travel budget to attend. Initially we used the OpenStack SIP system, but we found Bluejeans to be a bit easier to use. I think it has a limit of 300 participants, which is more than enough. Anyone from Red Hat can host it. We dedicated 1-2 days with 4-5 hours each. I'd recommend against taking up the whole day - it will be too exhausting. The first time we tried splitting the slots into two per day: APAC friendly and EMEA friendly. Relatively few people showed up at the former, so the next time we only had one slot. As with the PTG, having an agenda upfront helps a lot. We did synchronization and notes through an etherpad - exactly the same way as on the PTG. Hope that helps, Dmitry > > Thanks, > Dougal > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lyarwood at redhat.com Tue Aug 21 10:36:28 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 21 Aug 2018 11:36:28 +0100 Subject: [openstack-dev] [Openstack-operators] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: References: Message-ID: <20180821103628.dk3ok76fdruwsaut@lyarwood.usersys.redhat.com> On 20-08-18 16:29:52, Matthew Booth wrote: > For those who aren't familiar with it, nova's volume-update (also > called swap volume by nova devs) is the nova part of the > implementation of cinder's live migration (also called retype). > Volume-update is essentially an internal cinder<->nova api, but as > that's not a thing it's also unfortunately exposed to users. 
Some > users have found it and are using it, but because it's essentially an > internal cinder<->nova api it breaks pretty easily if you don't treat > it like a special snowflake. It looks like we've finally found a way > it's broken for non-cinder callers that we can't fix, even with a > dirty hack. > > volume-update essentially does a live copy of the > data on the old volume to the new volume, then seamlessly swaps the > attachment of the instance from the old volume to the new volume. The guest OS > will not notice anything at all as the hypervisor swaps the storage > backing an attached volume underneath it. > > When called by cinder, as intended, cinder does some post-operation > cleanup such that the old volume is deleted and the new volume inherits the same > volume_id; that is, the new volume effectively becomes the old volume. When called any > other way, however, this cleanup doesn't happen, which breaks a bunch > of assumptions. One of these is that a disk's serial number is the > same as the attached volume_id. Disk serial number, in KVM at least, > is immutable, so can't be updated during volume-update. This is fine > if we were called via cinder, because the cinder cleanup means the > volume_id stays the same. If called any other way, however, they no > longer match, at least until a hard reboot when it will be reset to > the new volume_id. It turns out this breaks live migration, but > probably other things too. We can't think of a workaround. > > I wondered why users would want to do this anyway. It turns out that > sometimes cinder won't let you migrate a volume, but nova > volume-update doesn't do those checks (as they're specific to cinder > internals, none of nova's business, and duplicating them would be > fragile, so we're not adding them!). Specifically we know that cinder > won't let you migrate a volume with snapshots. There may be other > reasons. If cinder won't let you migrate your volume, you can still > move your data by using nova's volume-update, even though you'll end > up with a new volume on the destination, and a slightly broken > instance. 
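[Editor's note: the serial-number mismatch described above can be modelled in a few lines. This is a toy sketch with hypothetical names, not nova code: the disk serial is fixed at attach time, so only cinder's post-operation cleanup keeps it consistent with the attached volume_id.]

```python
# Toy model of the volume-update behaviour described above.
# All names are illustrative; this is not nova's implementation.

class Disk:
    """An attached disk whose serial is fixed at attach time (as in KVM)."""

    def __init__(self, volume_id):
        self.serial = volume_id     # immutable once the guest has attached it
        self.volume_id = volume_id  # which volume currently backs the disk

    def swap_volume(self, new_volume_id, cinder_cleanup):
        # The hypervisor swaps the backing storage; the serial cannot change.
        self.volume_id = new_volume_id
        if cinder_cleanup:
            # Cinder's post-operation cleanup: the new volume inherits the
            # old volume_id, so the serial still matches.
            self.volume_id = self.serial

    def consistent(self):
        return self.serial == self.volume_id

# Called via cinder (retype/migration): cleanup keeps things consistent.
d1 = Disk("vol-old")
d1.swap_volume("vol-new", cinder_cleanup=True)
print(d1.consistent())  # True

# Called directly by a user: serial and volume_id now disagree, which is
# the state that breaks live migration until a hard reboot resets it.
d2 = Disk("vol-old")
d2.swap_volume("vol-new", cinder_cleanup=False)
print(d2.consistent())  # False
```

The model only captures the one invariant at issue — serial == volume_id — but it shows why the breakage is invisible to the guest yet fatal to anything that assumes the invariant holds.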
Apparently the former is a trade-off worth making, but the > latter has been reported as a bug. > > I'd like to make it very clear that nova's volume-update, isn't > expected to work correctly except when called by cinder. Specifically > there was a proposal that we disable volume-update from non-cinder > callers in some way, possibly by asserting volume state that can only > be set by cinder. However, I'm also very aware that users are calling > volume-update because it fills a need, and we don't want to trap data > that wasn't previously trapped. > > Firstly, is anybody aware of any other reasons to use nova's > volume-update directly? > > Secondly, is there any reason why we shouldn't just document then you > have to delete snapshots before doing a volume migration? Hopefully > some cinder folks or operators can chime in to let me know how to back > them up or somehow make them independent before doing this, at which > point the volume itself should be migratable? > > If we can establish that there's an acceptable alternative to calling > volume-update directly for all use-cases we're aware of, I'm going to > propose heading off this class of bug by disabling it for non-cinder > callers. I'm definitely in favor of hiding this from users eventually but wouldn't this require some form of deprecation cycle? Warnings within the API documentation would also be useful and even something we could backport to stable to highlight just how fragile this API is ahead of any policy change. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From tenobreg at redhat.com Tue Aug 21 11:40:05 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Tue, 21 Aug 2018 08:40:05 -0300 Subject: [openstack-dev] [goal][python3] week 2 update In-Reply-To: <1534797743-sup-4122@lrrr.local> References: <1534778701-sup-1930@lrrr.local> <1534796891-sup-969@lrrr.local> <1534797743-sup-4122@lrrr.local> Message-ID: Thanks. We merged most of them, there is only one that failed the tests so I'm rechecking it. On Mon, Aug 20, 2018 at 5:43 PM Doug Hellmann wrote: > Excerpts from Doug Hellmann's message of 2018-08-20 16:34:22 -0400: > > Excerpts from Telles Nobrega's message of 2018-08-20 15:07:29 -0300: > > > Hi Doug, > > > > > > I believe Sahara is ready to have those patches worked on. > > > > > > Do we have to do anything specific to get the env ready? > > > > Just be ready to do the reviews. I am generating the patches now and > > will propose them in a little while when the script finishes. 
> > And here they are: > > > +----------------------------------------------+---------------------------------+-------------------------------------+ > | Subject | Repo > | URL | > > +----------------------------------------------+---------------------------------+-------------------------------------+ > | import zuul job settings from project-config | > openstack/python-saharaclient | https://review.openstack.org/593904 | > | switch documentation job to new PTI | > openstack/python-saharaclient | https://review.openstack.org/593905 | > | add python 3.6 unit test job | > openstack/python-saharaclient | https://review.openstack.org/593906 | > | import zuul job settings from project-config | > openstack/python-saharaclient | https://review.openstack.org/593918 | > | import zuul job settings from project-config | > openstack/python-saharaclient | https://review.openstack.org/593923 | > | import zuul job settings from project-config | > openstack/python-saharaclient | https://review.openstack.org/593928 | > | import zuul job settings from project-config | > openstack/python-saharaclient | https://review.openstack.org/593933 | > | import zuul job settings from project-config | openstack/sahara > | https://review.openstack.org/593907 | > | switch documentation job to new PTI | openstack/sahara > | https://review.openstack.org/593908 | > | add python 3.6 unit test job | openstack/sahara > | https://review.openstack.org/593909 | > | import zuul job settings from project-config | openstack/sahara > | https://review.openstack.org/593919 | > | import zuul job settings from project-config | openstack/sahara > | https://review.openstack.org/593924 | > | import zuul job settings from project-config | openstack/sahara > | https://review.openstack.org/593929 | > | import zuul job settings from project-config | openstack/sahara > | https://review.openstack.org/593934 | > | import zuul job settings from project-config | > openstack/sahara-dashboard | 
https://review.openstack.org/593910 | > | switch documentation job to new PTI | > openstack/sahara-dashboard | https://review.openstack.org/593911 | > | import zuul job settings from project-config | > openstack/sahara-dashboard | https://review.openstack.org/593920 | > | import zuul job settings from project-config | > openstack/sahara-dashboard | https://review.openstack.org/593925 | > | import zuul job settings from project-config | > openstack/sahara-dashboard | https://review.openstack.org/593930 | > | import zuul job settings from project-config | > openstack/sahara-dashboard | https://review.openstack.org/593935 | > | import zuul job settings from project-config | openstack/sahara-extra > | https://review.openstack.org/593912 | > | import zuul job settings from project-config | openstack/sahara-extra > | https://review.openstack.org/593921 | > | import zuul job settings from project-config | openstack/sahara-extra > | https://review.openstack.org/593926 | > | import zuul job settings from project-config | openstack/sahara-extra > | https://review.openstack.org/593931 | > | import zuul job settings from project-config | openstack/sahara-extra > | https://review.openstack.org/593936 | > | import zuul job settings from project-config | > openstack/sahara-image-elements | https://review.openstack.org/593913 | > | import zuul job settings from project-config | > openstack/sahara-image-elements | https://review.openstack.org/593922 | > | import zuul job settings from project-config | > openstack/sahara-image-elements | https://review.openstack.org/593927 | > | import zuul job settings from project-config | > openstack/sahara-image-elements | https://review.openstack.org/593932 | > | import zuul job settings from project-config | > openstack/sahara-image-elements | https://review.openstack.org/593937 | > | import zuul job settings from project-config | openstack/sahara-specs > | https://review.openstack.org/593914 | > | import zuul job settings from project-config 
| openstack/sahara-tests > | https://review.openstack.org/593915 | > | switch documentation job to new PTI | openstack/sahara-tests > | https://review.openstack.org/593916 | > | add python 3.6 unit test job | openstack/sahara-tests > | https://review.openstack.org/593917 | > > +----------------------------------------------+---------------------------------+-------------------------------------+ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tenobreg at redhat.com Tue Aug 21 11:44:13 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Tue, 21 Aug 2018 08:44:13 -0300 Subject: [openstack-dev] New Contributor In-Reply-To: <58b5c6ab-fe55-25ae-769b-6cc5f55f9b03@mixmax.com> References: <49b15368-a67b-dfcb-0501-6b527a42c71c@openstack.org> <58b5c6ab-fe55-25ae-769b-6cc5f55f9b03@mixmax.com> Message-ID: That is great to hear. Please join us at #openstack-sahara so we can discuss a little more of what work you want to do. Welcome aboard. On Tue, Aug 21, 2018 at 5:22 AM Ivoline Ngong wrote: > Hello Kendall and Telles, > > Thanks so much for warm welcome. I feel at home already. > The links sent were quire helpful and gave me an insight into what > OpenStack is all about. > After reading lightly about the different projects, the Sahara project > caught my attention. > > Probably because I am interested in data science. I will love to explore > the Sahara project some more. 
> > Cheers, > Ivoline > > > > On Mon, Aug 20, 2018 8:42 PM, Telles Nobrega tenobreg at redhat.com wrote: > >> Hi Ivoline, >> >> Also a little late but wanted to say welcome aboard, hopefully you will >> find a very welcoming community here and of course a lot of work to do. >> >> I work with Sahara, the big data processing project of OpenStack, we need >> help for sure. >> >> If this area interests you in any way, feel free to join us at >> #openstack-sahara on IRC or email me and we can send some work at your >> direction. >> >> >> On Mon, Aug 20, 2018 at 2:37 PM Kendall Nelson >> wrote: >> >> Hello Ivoline, >> >> While I'm a little late to the party, I still wanted to say welcome and >> offer my help :) >> >> If you have any questions based about the links you've been sent, I'm >> happy to answer them! I can also help you find/get started with a team and >> introduce you to community members whenever you're ready. >> >> -Kendall Nelson (diablo_rojo) >> >> >> On Mon, 20 Aug 2018, 4:08 am Ivoline Ngong, >> wrote: >> >> Thanks so much for help Josh and Thierry. I'll check out the links and >> hopefully find a way forward from there. Will get back here in case I have >> any questions. >> >> Cheers, >> Ivoline >> >> On Mon, Aug 20, 2018, 12:01 Thierry Carrez wrote: >> >> Ivoline Ngong wrote: >> > I am Ivoline Ngong. I am a Cameroonian who lives in Turkey. I will love >> > to contribute to Open source through OpenStack. I code in Java and >> > Python and I think OpenStack is a good fit for me. >> > I'll appreciate it if you can point me to the right direction on how I >> > can get started. >> >> Hi Ivoline, >> >> Welcome to the OpenStack community ! >> >> The OpenStack Technical Committee maintains a list of areas in most need >> of help: >> >> https://governance.openstack.org/tc/reference/help-most-needed.html >> >> Depending on your interest, you could pick one of those projects and >> reach out to the mentioned contact points. 
>> >> For more general information on how to contribute, you can check out our >> contribution portal: >> >> https://www.openstack.org/community/ >> >> -- >> Thierry Carrez (ttx) >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -- >> >> TELLES NOBREGA >> >> SOFTWARE ENGINEER >> >> Red Hat Brasil >> >> Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo >> >> tenobreg at redhat.com >> >> TRIED. TESTED. TRUSTED. >> Red Hat é reconhecida entre as melhores empresas para trabalhar no >> Brasil pelo Great Place to Work. >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Aug 21 11:50:56 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 21 Aug 2018 06:50:56 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> Message-ID: <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> On 8/21/2018 4:28 AM, Chris Dent wrote: > Since we're airing things out (which I think is a good thing, at > least in the long run), I'll add to this. > > I think that's a pretty good example of where I did express some > resistance, especially since were it to come up again, I still would > express some (see below). But let's place that resistance in some > context. > > In the epic irc discussion you mentioned that one fear is that I > might want to change the handling of microversions [2] because I'm > somewhat famously ambivalent about them. That's correct, I am. > However, I would hope that the fact that placement has one of the > easier and more flexible microversions systems around (written by > me) and I went to the trouble to extract it to a library [3] and I'm > the author of the latest revision on how to microversion [4] is > powerful evidence that once consensus is reached I will do my utmost > to make things align with our shared plans and goals. Regarding microversions I was mostly thinking of the various times I've been asked in the placement channel if something warrants a microversion or if we can just bug fix it in, like microversion 1.26. I then generally feel like I need to be defensive when I say, "yes it's a behavior change in the API so it should." 
That makes me question how stringent others would be about upholding interoperability concerns if I weren't around. Maybe I'm admittedly too stringent and opt to be conservative at times, but I do make exceptions, e.g.: https://review.openstack.org/#/c/583907/ Suffice it to say I realize "does this need a microversion?" is not always an easy question to answer, and I appreciate that you, jaypipes and efried at least ask me for my input on the matter. I have obviously failed to appreciate that. > > So, with the notion of allocation or consumer types (both have been > discussed): If we start from the position that I've been with > placement from very early on and am cognizant of its several goals > and at the same time also aware of its limited "human resources" it > seems normal and appropriate to me that at least some members of the > group responsible for making it must make sure that we work to > choose the right things (of several choices) to do, in part by by > rigorously questioning additional features when existing planned > features are not yet done. In this case we might ask: is it right to > focus on incompletely thought out consumer type management for the > eventual support of quota handling (and other introspection) when we > haven't yet satisfied what has been described by some downstream > people (eglynn is example, to be specific) as job 1: getting shared > disk working correctly (which we still haven't managed across the > combined picture of nova and placement)? If the question is, should nova be talking about solving one problem while there are still more unsolved problems? Ideally we should not, but that's not the nature of probably anything in openstack, at least in a project as big as nova. If it were, the compute API would be 100% compatible with volume-backed instances, and shelve wouldn't be such a dumpster fire. 
:) But we don't live in an ideal situation with infinite time and resources nor the luxury of forethought at all times so we must move forward with *something* lest we get nothing done. > > From my perspective questioning additional features, so that they > are well proven, is simply part of the job and we all should be > doing it. If we are never hitting questions and disagreements we are > almost certainly running blind and our results will be less good. I totally agree, and realize there can be an echo chamber within nova which can be less than productive. As I noted earlier, I'm not sure the entire consumer types for counting quotas solution is fully thought out at this point, so questioning it is appropriate until that's happened. > > Once we've hashed things out, I'll help make what we've chosen > happen. The evidence of this is everywhere. Consider this: I've > known (at least subconsciously) about the big reveal in yesterday's > IRC discussion for a long time, but I keep working to make nova, > placement and OpenStack better. Day in, day out, in the face of what > is perhaps the biggest insult to my professional integrity that I've > ever experienced. If this were a different time some portion of "we" > would need to do pistols at dawn, but that's dumb. I just want to > get on with making stuff. The right stuff. Please don't question my > commitment, but do question my designs and plans and help me make > them the best they can be. > > Elephant alert, to keep this healthy full exposure rolling: The kind > of questioning and "proving" described above happens all the time in > Nova with specs and other proposals that are presented. We ask > proposers to demonstrate that their ideas are necessary and sound, > and if they are not _or_ we don't have time, we say "no" or "later". > This is good and correct and part of the job and helps make nova the > best it can be given the many constraints it experiences. 
As far as > I can tell the main differences between me asking questions about > proposed placement features when they are presented by nova cores > and the more general nova-spec situation is who is being subjected > to the questions and by whom. Yup, again I agree with you. I've had more than one reply written in this thread where after re-reading it, realized I was being hypocritical and deleted my reply (I'm amazed I've had the restraint at times to re-read my replies before sending, I'm usually putting my foot in my mouth). For example, nova wants consumer types in placement and there was pushback on that as convoluting an otherwise minimal consumers API. At the same time, nova is actively rejecting people every release that want to pass volume type through the compute API during boot from volume. Our reason being, "you can already achieve this by calling cinder, and our API is already terribly complex, so let's not add fuel to the fire." So I realize it goes both ways and I'm trying to keep that in mind when replying on this thread. At this point, I think we're at: 1. Should placement be extracted into its own git repo in Stein while nova still has known major issues which will have dependencies on placement changes, mainly modeling affinity? 2. If we extract, does it go under compute governance or a new project with a new PTL? As I've said, I personally believe that unless we have concrete plans for the big items in #1, we shouldn't hold up the extraction. We said in Dublin we wouldn't extract to a new git repo in Rocky but we'd work up to that point so we could do it in Stein, so this shouldn't surprise anyone. The actual code extraction and re-packaging and all that is going to be the biggest technical issue with all of this, and will likely take all of Stein to complete it after all the bugs are shaken out. 
For #2, I think for now, in the interim, while we deal with the technical headache of the code extraction itself, it's best to leave the new repo under compute governance so the existing team is intact and we don't conflate the people issue with the technical issue at the same time. Get the hard technical part done first, and then we can move it out of compute governance. Once it's in its own git repo, we can change the core team as needed but I think it should be initialized with existing nova-core. I'm only speaking for myself here. Others on the nova core team have their own thoughts (Dan, Jay and Mel have all mentioned theirs). The rest of the core team probably doesn't even care either way. Except Vek. Vek cares *deeply*. -- Thanks, Matt From no-reply at openstack.org Tue Aug 21 12:23:45 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Tue, 21 Aug 2018 12:23:45 -0000 Subject: [openstack-dev] nova 18.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for nova for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/nova/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/nova/log/?h=stable/rocky Release notes for nova can be found at: https://docs.openstack.org/releasenotes/nova/ From mriedemos at gmail.com Tue Aug 21 12:28:11 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 21 Aug 2018 07:28:11 -0500 Subject: [openstack-dev] [all] [nova] [neutron] live migration with multiple port bindings. 
In-Reply-To: References: Message-ID: On 8/20/2018 3:18 PM, Sean Mooney wrote: > in both the ovs-dpdk tests, when the migration failed and the vm > contiuned to run on the source node however > it had no network connectivity. on hard reboot of the vm, it went to > error state because the vif binding > was set to none as the vif:bidning-details:host_id was set to none so > the vif_type was also set to none. > i have opened a nova bug to track the fact that the vm is left in an > invalid state even though the status is active. > see bug 1788014 I've got a nova patch for this here: https://review.openstack.org/#/c/594139/ However, I'd like Miguel to look at that bug because I assumed that when nova deletes the dest host port binding, the only remaining port binding is the inactive one for the source host, and neutron would automatically activate it, similar to how neutron will automatically deactivate all other bindings for a port when one of the other bindings is activated (like when nova activates the dest host port binding during live migration, the source host port binding is automatically deactivated because only one port binding can be active at any time). If there is a good reason why neutron doesn't do this on port binding delete, then I guess we go with fixing this in nova. -- Thanks, Matt From mriedemos at gmail.com Tue Aug 21 12:29:20 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 21 Aug 2018 07:29:20 -0500 Subject: [openstack-dev] [all] [nova] [neutron] live migration with multiple port bindings. In-Reply-To: References: Message-ID: <0d780cf6-0fd4-30af-5538-e9d67ac33a69@gmail.com> On 8/21/2018 7:28 AM, Matt Riedemann wrote: > On 8/20/2018 3:18 PM, Sean Mooney wrote: >> in both the ovs-dpdk tests, when the migration failed and the vm >> contiuned to run on the source node however >> it had no network connectivity. 
on hard reboot of the vm, it went to >> error state because the vif binding >> was set to none as the vif:bidning-details:host_id  was set to none so >> the vif_type was also set to none. >> i have opened a nova bug to track the fact that the vm is left in an >> invalid state even though the status is active. >> see bug 1788014 > > I've got a nova patch for this here: > > https://review.openstack.org/#/c/594139/ > > However, I'd like Miguel to look at that bug because I assumed that when > nova deletes the dest host port binding, the only remaining port binding > is the inactive one for the source host, and neutron would automatically > activate it, similar to how neutron will automatically deactivate all > other bindings for a port when one of the other bindings is activated > (like when nova activates the dest host port binding during live > migration, the source host port binding is automatically deactivated > because only one port binding can be active at any time). If there is a > good reason why neutron doesn't do this on port binding delete, then I > guess we go with fixing this in nova. > By the way, Sean, thanks a ton for doing all of this testing. It's super helpful and way above anything I could have gotten setup myself for the various neutron backend configurations. -- Thanks, Matt From james.slagle at gmail.com Tue Aug 21 12:41:33 2018 From: james.slagle at gmail.com (James Slagle) Date: Tue, 21 Aug 2018 08:41:33 -0400 Subject: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad) In-Reply-To: References: Message-ID: On Tue, Aug 21, 2018 at 2:40 AM Csatari, Gergely (Nokia - HU/Budapest) wrote: > > Hi, > > There was a two days workshop on edge requirements back in Dublin. The notes are stored here: https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG I think there are some areas there what can be interesting for the squad. > Edge Computing Group plans to have a day long discussion in Denver. 
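The binding semantics described in these messages -- at most one ACTIVE binding per port, activating one binding deactivates the others, and deleting the active binding does not automatically activate the survivor -- can be summed up in a toy model. This is purely illustrative (a hypothetical class, not neutron's implementation):

```python
# Toy model of the multiple-port-binding semantics discussed above.
# Illustrative only -- this is not neutron's code. The last method comment
# marks the gap under discussion: deleting the active binding leaves the
# port with no active binding at all.

class PortBindings:
    def __init__(self):
        self._status = {}  # host -> 'ACTIVE' or 'INACTIVE'

    def create(self, host):
        # New bindings start out inactive.
        self._status[host] = 'INACTIVE'

    def activate(self, host):
        # Activating one binding deactivates all others on the port,
        # because only one binding can be active at any time.
        for h in self._status:
            self._status[h] = 'ACTIVE' if h == host else 'INACTIVE'

    def delete(self, host):
        # Note: no auto-activation of a remaining binding happens here.
        del self._status[host]

    def active_host(self):
        return next(
            (h for h, s in self._status.items() if s == 'ACTIVE'), None)
```

Replaying a failed live migration against this model -- create and activate the source binding, create and activate the dest binding (source goes inactive), then delete the dest binding when the migration fails -- leaves no active binding at all, which matches the state Sean reported in bug 1788014.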
Maybe we could have a short discussion there about these requirements. Thanks! I've added my name to the etherpad for the PTG and will plan on spending Tuesday with the group. https://etherpad.openstack.org/p/EdgeComputingGroupPTG4 -- -- James Slagle -- From thierry at openstack.org Tue Aug 21 12:55:10 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 21 Aug 2018 14:55:10 +0200 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> Message-ID: Matt Riedemann wrote: > [...] > Regarding microversions I was mostly thinking of the various times I've > been asked in the placement channel if something warrants a microversion > or if we can just bug fix it in, like microversion 1.26. I then > generally feel like I need to be defensive when I say, "yes it's a > behavior change in the API so it should." That makes me question how > stringent others would be about upholding interoperability concerns if I > weren't around. [...] The issue with that kind of distrust by default is that it's not sustainable... In a large project you can't have every individual review everything because they trust noone else. That is why in OpenStack we instituted a culture of "trust by default, then escalate to PTL or TC if shit ever hits the fan". And the fact is, the PTL (at team level) or the TC (between teams) rarely had to arbitrate conflicts, because there aren't so many conflicts that are escalated rather than solved by consensus at the lower level. Restoring "trust by default" between placement and the rest of Nova seems to be the root of the problem here. 
In a community, it's generally done by documenting general expectations and shared understandings, so that you create a common culture and trust by default people to apply it. What would you suggest we do to improve that in this specific case? -- Thierry Carrez (ttx) From emilien at redhat.com Tue Aug 21 13:29:50 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 21 Aug 2018 09:29:50 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> Message-ID: If I would be a standalone consummer of OpenStack Placement (e.g. I only run cinder or ironic to manage volume / baremetal), and I had to run something like: $ pip install -U placement I would prefer "placement" to be a project driven by diverse people interested by Infrastructure resources placement and not just nova. In other words, I would be afraid of seeing this project owned by the nova team since the scope of placement seems to go beyond compute. Instead I would be at ease to see a separated PTL and core team, who closely work with OpenStack projects consuming placement service. People writting placement's code would *own* this project, and decide of their future. They would serve projects like nova, cinder, maybe ironic one day, etc. By making this team more independent, I believe they could build trust in our community, which is something we desperately need nowadays and have been encouraging over the last years. I have an high level of confidence that this new team would be smart enough to collaborate when it comes to code design decisions, no matter what happened in the past. Let's reset a little bit and give these people a chance here. 
Let's create this independent team. I believe we could even write down a (short) vision for placement, and a (short) mission statement, then we can set expectations for the near future. On Tue, Aug 21, 2018 at 8:55 AM Thierry Carrez wrote: > Matt Riedemann wrote: > > [...] > > Regarding microversions I was mostly thinking of the various times I've > > been asked in the placement channel if something warrants a microversion > > or if we can just bug fix it in, like microversion 1.26. I then > > generally feel like I need to be defensive when I say, "yes it's a > > behavior change in the API so it should." That makes me question how > > stringent others would be about upholding interoperability concerns if I > > weren't around. [...] > > The issue with that kind of distrust by default is that it's not > sustainable... In a large project you can't have every individual review > everything because they trust noone else. > > That is why in OpenStack we instituted a culture of "trust by default, > then escalate to PTL or TC if shit ever hits the fan". And the fact is, > the PTL (at team level) or the TC (between teams) rarely had to > arbitrate conflicts, because there aren't so many conflicts that are > escalated rather than solved by consensus at the lower level. > > Restoring "trust by default" between placement and the rest of Nova > seems to be the root of the problem here. In a community, it's generally > done by documenting general expectations and shared understandings, so > that you create a common culture and trust by default people to apply it. > > What would you suggest we do to improve that in this specific case? 
> > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Aug 21 13:35:54 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 21 Aug 2018 09:35:54 -0400 Subject: [openstack-dev] [all] [nova] [neutron] live migration with multiple port bindings. In-Reply-To: <0d780cf6-0fd4-30af-5538-e9d67ac33a69@gmail.com> References: <0d780cf6-0fd4-30af-5538-e9d67ac33a69@gmail.com> Message-ID: On Tue, Aug 21, 2018, 8:29 AM Matt Riedemann wrote: > On 8/21/2018 7:28 AM, Matt Riedemann wrote: > > On 8/20/2018 3:18 PM, Sean Mooney wrote: > >> in both the ovs-dpdk tests, when the migration failed and the vm > >> contiuned to run on the source node however > >> it had no network connectivity. on hard reboot of the vm, it went to > >> error state because the vif binding > >> was set to none as the vif:bidning-details:host_id was set to none so > >> the vif_type was also set to none. > >> i have opened a nova bug to track the fact that the vm is left in an > >> invalid state even though the status is active. 
> >> see bug 1788014 > > > > I've got a nova patch for this here: > > > > https://review.openstack.org/#/c/594139/ > > > > However, I'd like Miguel to look at that bug because I assumed that when > > nova deletes the dest host port binding, the only remaining port binding > > is the inactive one for the source host, and neutron would automatically > > activate it, similar to how neutron will automatically deactivate all > > other bindings for a port when one of the other bindings is activated > > (like when nova activates the dest host port binding during live > > migration, the source host port binding is automatically deactivated > > because only one port binding can be active at any time). If there is a > > good reason why neutron doesn't do this on port binding delete, then I > > guess we go with fixing this in nova. > > > > By the way, Sean, thanks a ton for doing all of this testing. It's super > helpful and way above anything I could have gotten setup myself for the > various neutron backend configurations. > Agreed, big +1 and thanks to Sean for doing this. However, I'd like to point out that this highlights the unfortunate situation we're in: only a select couple of contributors are actually able to understand the overly complex, ludicrously inconsistent, and all too often incompatible networking technologies that OpenStack has come to rely on. 😐 This reminds me of a recent conversation I had on Twitter with an old coworker of mine who is now at booking.com who stated the frustrating complexity of networking and SDN setup in OpenStack was the reason he switched to Kubernetes and hasn't looked back since. 
-jay > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Aug 21 13:47:02 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 21 Aug 2018 08:47:02 -0500 Subject: [openstack-dev] [all] PyCharm Licences In-Reply-To: References: Message-ID: On 8/20/2018 11:24 PM, Swapnil Kulkarni wrote: > I have renewed the Pycharm licenses for community till Aug 13, 2019. > Everyone who is using it should have it updated automatically. Please > do not request again for renewal. > > At the same time, I would request not to request multiple licenses > with multiple email addresses. Thanks Swapnil. I use PyCharm daily so appreciate you handling this for the community. -- Thanks, Matt From renat.akhmerov at gmail.com Tue Aug 21 13:48:54 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 21 Aug 2018 20:48:54 +0700 Subject: [openstack-dev] [mistral] No Denver PTG Sessions In-Reply-To: <8de32629-82a1-bf54-f667-f50de88da144@redhat.com> References: <8de32629-82a1-bf54-f667-f50de88da144@redhat.com> Message-ID: <126b5bdd-0825-49c0-a65d-bdd6fda13527@Spark> It’s disappointing that we can’t make it this time. Really bad coincidence of different issues.. I’d love to have a virtual PTG and Dmitry’s notes look very helpful to me. Thanks Renat Akhmerov @Nokia On 21 Aug 2018, 17:15 +0700, Dmitry Tantsur , wrote: > On 08/21/2018 10:22 AM, Dougal Matthews wrote: > > Hi all, > > > > Unfortunately due to some personal conflicts and trouble with travel plans, > > there will be no Mistral cores at the Denver PTG. This means that we have had to > > cancel the Mistral sessions. 
I recently asked if anyone was planning to attend > > and only got one maybe. > > > > I am considering trying to arrange a "virtual PTG", so we can do some planning > > for Stein. However, I'm not sure how/if that could work. Do you think this would > > be a good idea? Suggestions how to organise one would be very welcome! > > We did a few virtual midcycles for ironic, and it ended up quite well. While it > did require some people to stay awake at unusual times, it did allow people > without travel budget to attend. > > Initially we used the OpenStack SIP system, but we found Bluejeans to be a bit > easier to use. I think it has a limit of 300 participants, which is more than > enough. Anyone from Red Hat can host it. > > We dedicated 1-2 days with 4-5 hours each. I'd recommend against taking up the > whole day - will be too exhausting. The first time we tried splitting the slots > into two per day: APAC friendly and EMEA friendly. Relatively few people showed > up at the former, so the next time we only had one slot. > > As with the PTG, having an agenda upfront helps a lot. We did synchronization > and notes through an etherpad - exactly the same was as on the PTG. > > Hope that helps, > Dmitry > > > > > Thanks, > > Dougal > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Tue Aug 21 13:53:51 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 21 Aug 2018 08:53:51 -0500 Subject: [openstack-dev] [Searchlight] Reaching out to the Searchlight core members for Stein - Call for team meeting In-Reply-To: References: Message-ID: <1610b2d5-71cc-a29c-8466-e706c8c344b0@gmail.com> On 8/20/2018 10:10 AM, Trinh Nguyen wrote: > > Thanks for your response. What is your IRC handler? Kevin_Zheng. -- Thanks, Matt From mriedemos at gmail.com Tue Aug 21 14:04:02 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 21 Aug 2018 09:04:02 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> Message-ID: <20c27641-3ace-68be-31a5-8b3bcce69bcf@gmail.com> On 8/21/2018 7:55 AM, Thierry Carrez wrote: > Matt Riedemann wrote: >> [...] >> Regarding microversions I was mostly thinking of the various times >> I've been asked in the placement channel if something warrants a >> microversion or if we can just bug fix it in, like microversion 1.26. >> I then generally feel like I need to be defensive when I say, "yes >> it's a behavior change in the API so it should." That makes me >> question how stringent others would be about upholding >> interoperability concerns if I weren't around. [...] > > The issue with that kind of distrust by default is that it's not > sustainable... In a large project you can't have every individual review > everything because they trust noone else. It's not distrust by default. I said, "thinking of the *various times*". Which means more than once. But I also said I was asked for input, and failed to reflect on that until I actually wrote it down. 
That's my fault. > > That is why in OpenStack we instituted a culture of "trust by default, > then escalate to PTL or TC if shit ever hits the fan". And the fact is, > the PTL (at team level) or the TC (between teams) rarely had to > arbitrate conflicts, because there aren't so many conflicts that are > escalated rather than solved by consensus at the lower level. Sure, but I'm sure there are also times where people don't escalate simply because they want to avoid conflict. There have been many times where I've questioned another nova core's +2/+W on a change and rather than make a big deal out of it, I push that frustration way down but it comes out in other ways, like distrust later. Again, that's my fault, but I suspect I'm not the only person in OpenStack that does this. On a good day I'll ask the person directly in IRC, or failing that on the review, "hey why did you do this? Did you think about X?". > > Restoring "trust by default" between placement and the rest of Nova > seems to be the root of the problem here. In a community, it's generally > done by documenting general expectations and shared understandings, so > that you create a common culture and trust by default people to apply it. > > What would you suggest we do to improve that in this specific case? > Trust falls! I don't know, Thierry. Likely just improved direct communication with the people involved rather than back-channeling and distrust/hurt feelings which lead to "sides" being setup. As I said above, direct communication can be hard because of the confrontation and awkwardness so it's easier at times to take the shitty way out and just complain about so-and-so to someone else that thinks the same way you do rather than try to gain understanding and listen to other viewpoints. We (I) go over this every retrospective but still fail to learn from and practice it. 
-- Thanks, Matt From doug at doughellmann.com Tue Aug 21 14:17:36 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 21 Aug 2018 10:17:36 -0400 Subject: [openstack-dev] [goal][python3] dealing with new jobs failing on old branches Message-ID: <1534860661-sup-5278@lrrr.local> Goal champions, Most of the jobs in the project-templates do not have branch specifiers. That allows us to add a job to a repository and then not realize that it doesn't work on an old branch. We're finding some of those with this zuul migration (for example, https://review.openstack.org/#/c/593012/ and https://review.openstack.org/#/c/593016/). To deal with these, we need to remove that job or template from the repository's settings in the project-config repository, and not include it in the import patches. 1. First we want to wait for the team to land as many of the unaltered import patches as possible, so those jobs stay on the master branch and recent stable branches where they work. 2. Then, propose a patch to project-config to remove just the problem jobs and templates from the repositories where they are a problem. 3. Then, rebase the patch that removes all of a team's project settings on top of the one created in step 2. 4. Finally, modify the import patch(es) on the older stable branches where the jobs fail and remove the jobs or templates that cause problems. Set those patches to depend on the patch created in step 2, since they cannot land without the project-config change. Doug From jungleboyj at gmail.com Tue Aug 21 14:22:09 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 21 Aug 2018 09:22:09 -0500 Subject: [openstack-dev] [cinder][manila] Team Dinner Planning at PTG... Message-ID: <3394aab5-a15f-9583-e57c-03dff9faf4de@gmail.com> All, We talked in the Cinder team meeting about doing a joint Cinder/Manila team dinner at the PTG in Denver. I have created a Doodle Poll to indicate what night would work best for everyone. 
[1]  Also, if you are planning to come please add your name in the Cinder Etherpad [2]. Look forward to seeing you all at the PTG! Jay [1] https://doodle.com/poll/8rm3ahdyhmrtx5gp#table [2] https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018 From mokats at intracom-telecom.com Tue Aug 21 14:22:22 2018 From: mokats at intracom-telecom.com (Katsaounis Molyvas Stamatios) Date: Tue, 21 Aug 2018 17:22:22 +0300 Subject: [openstack-dev] Pending patches of openstack/intel-nfv-ci-tests Project Message-ID: <6d4b41f3e86f7f4397357a734fef00f6@iris> Dear all, I am sending you this email because there exists an effort to integrate Intel NFV CI tests tempest plugin with Opnfv Functional Testing project (https://wiki.opnfv.org/display/functest). In order to integrate them though, there are some pending patches which should be merged in order to make intel test cases functional to multi-node environments. I have already managed to run them successfully on a multi node environment by cherry-picking all the patches together in master branch. Apart from Artom' s patches, me and my colleague Dimitris (cc), have pushed some changes too. The pending patches are the following: https://review.openstack.org/576606 https://review.openstack.org/593604 https://review.openstack.org/576607 https://review.openstack.org/576605 https://review.openstack.org/590303 https://review.openstack.org/576604 https://review.openstack.org/571004 I would like to know when will they be merged in order to proceed to the completion of the integration with the FuncTest project. I am willing to offer my help, if it is needed, in order to get the job done. Thank you all in advance. 
Kind regards, Stamatis Stamatis Katsaounis Software Enginner Software Development Center ______________________________________ Intracom Telecom 19.7 km Markopoulou Ave., Peania, GR 19002 t: +30 2106677689 mokats at intracom-telecom.com http://www.intracom-telecom.com/ JOIN US Mobile World Congress Americas 12-14 September Los Angeles, USA Gitex Technology Week 14-18 October Dubai, UAE FutureCom 15-18 October Sao Paulo, Brazil AfricaCom 13-15 November Cape Town, S. Africa Mobile World Congress 25-28 February 2019 Barcelona, Spain Mobile World Congress Shanghai 26-28 June 2019 Shanghai, China The information in this e-mail message and any attachments are intended only for the individual or entity to whom it is addressed and may be confidential. If you have received this transmission in error, and you are not an intended recipient, be aware that any copying, disclosure, distribution or use of this transmission or its contents is prohibited. Intracom Telecom and the sender accept no liability for any loss, disruption or damage to your data or computer system that may occur while using data contained in, or transmitted with, this email. Views or opinions expressed in this message may be those of the author and may not necessarily represent those of Intracom Telecom. -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack.org at sodarock.com Tue Aug 21 14:38:29 2018 From: openstack.org at sodarock.com (John Villalovos) Date: Tue, 21 Aug 2018 07:38:29 -0700 Subject: [openstack-dev] Stepping down from Ironic core Message-ID: Good morning Ironic, I have come to realize that I don't have the time needed to be able to devote the attention needed to continue as an Ironic core. I'm hopeful that in the future I will work on Ironic or OpenStack again! :) The Ironic (and OpenStack) community has been a great one and I have really enjoyed my time working on it and working with all the people. 
I will still be hanging around on IRC and you may see me submitting a patch here and there too :) Thanks again, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Aug 21 15:00:21 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 21 Aug 2018 09:00:21 -0600 Subject: [openstack-dev] Stepping down from Ironic core In-Reply-To: References: Message-ID: This is sad news to read, but completely understandable! Thank you for all of your excellent work on Ironic, and should time and focus ever make sense later down the road for you to rejoin ironic-core, know you'll be welcomed back. Thanks again! -Julia On Tue, Aug 21, 2018 at 8:38 AM, John Villalovos wrote: > Good morning Ironic, > > I have come to realize that I don't have the time needed to be able to > devote the attention needed to continue as an Ironic core. > > I'm hopeful that in the future I will work on Ironic or OpenStack again! :) > > The Ironic (and OpenStack) community has been a great one and I have really > enjoyed my time working on it and working with all the people. I will still > be hanging around on IRC and you may see me submitting a patch here and > there too :) > > Thanks again, > John > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jim at jimrollenhagen.com Tue Aug 21 15:25:32 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 21 Aug 2018 11:25:32 -0400 Subject: [openstack-dev] Stepping down from Ironic core In-Reply-To: References: Message-ID: On Tue, Aug 21, 2018 at 11:00 AM, Julia Kreger wrote: > This is sad news to read, but completely understandable! 
> > Thank you for all of your excellent work on Ironic, and should time > and focus ever make sense later down the road for you to rejoin > ironic-core, know you'll be welcomed back. > +1000. Thanks for all the great work you've done, even if it means you now have dents in your forehead from banging it against the wall making grenade and friends happy. :) Hope to see you around \o // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Aug 21 15:44:45 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 21 Aug 2018 10:44:45 -0500 Subject: [openstack-dev] [nova][neutron] Proposal to make SR-IOV port attach explicitly fail in nova Message-ID: None of the in-tree nova virt drivers support attaching SR-IOV ports to a running instance; you can only create a server with an SR-IOV port. I have a patch proposed [1] to make this an explicit failure rather than the user getting an obscure 500 KeyError failure. Supporting this would be a feature change [2]. However, the out-of-tree powervm driver apparently supports attaching SR-IOV ports to running servers, so I'm sending this email to make any other out-of-tree virt drivers aware of the change to make it explicitly fail in case they also support this functionality downstream. [1] https://review.openstack.org/#/c/591898/ [2] https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach -- Thanks, Matt From mjturek at linux.vnet.ibm.com Tue Aug 21 15:56:03 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Tue, 21 Aug 2018 11:56:03 -0400 Subject: [openstack-dev] [ironic] Next bug day is Tuesday August 28th! Vote for timeslot! In-Reply-To: References: Message-ID: <8495ac13-5eb8-8d3a-04fd-0cd837c4c7bb@linux.vnet.ibm.com> Hello, With the next bug day coming in a week from today, I wanted to bring up the timeslot poll we have going again.
https://doodle.com/poll/ef4m9zmacm2ey7ce I'd like to finalize a time slot for this on Thursday so if you want to cast your vote, please do it soon! Hope to see you there! Thanks, Mike Turek On 8/2/18 11:24 AM, Michael Turek wrote: > Hey all! > > Bug day was pretty productive today and we decided to schedule another > one for the end of this month, on Tuesday the 28th. For details see > the etherpad for the event [0] > > Also since we're changing things up, we decided to also put up a vote > for the timeslot [1] > > If you have any questions or suggestions on how to improve bug day, I > am all ears! Hope to see you there! > > Thanks, > Mike Turek > > [0] https://etherpad.openstack.org/p/ironic-bug-day-august-28-2018 > [1] https://doodle.com/poll/ef4m9zmacm2ey7ce > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Kevin.Fox at pnnl.gov Tue Aug 21 16:38:41 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 21 Aug 2018 16:38:41 +0000 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com>, <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C18195F@EX10MBOX03.pnnl.gov> So, nova's worried about having to be in the boat many of us have been in where they depend on another project not recognizing their important use cases? heh... ok, so, yeah. that is a legitimate concern. You need someone like the TC to be able to step in, in those cases to help sort that kind of issue out. 
In the past, the TC was not willing to do so. My gut feeling though is that this is finally changing. This is a bigger problem than just Nova, so getting a proper procedure in place to handle this is really important for OpenStack in general. Let's solve that rather than one-offing a solution by keeping placement under Nova's control. How do I say this nicely.... Better to talk about it instead of continuing to ignore the hard issues I guess. Nova has been very self-centered compared to other projects in the tent. This thread feels like more of the same. If OpenStack as a whole is to get healthier, we need to stop having selfish projects and encourage ones that help each other. I think splitting out placement from Nova's control has at least two benefits: 1. Nova has complained a lot about having too much code so they can't take other projects' requests. This reduces Nova's code base so they can focus on their core functionality, and more importantly, their users' use cases. This will make OpenStack as a whole healthier. 2. It reduces Nova's special project status, leveling the playing field a bit. Nova can help influence the TC to solve difficult cross-project problems along with the rest of us rather than going off and doing things on their own. Thanks, Kevin ________________________________________ From: Matt Riedemann [mriedemos at gmail.com] Sent: Monday, August 20, 2018 6:23 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? On 8/20/2018 8:08 PM, Matt Riedemann wrote: > On 8/20/2018 6:42 PM, Ed Leafe wrote: >> It was said in the #openstack-tc discussions, but for those on the >> mailing list, the biggest concern among the Nova core developers is >> that the consensus among Placement cores will certainly not align with >> the needs of Nova.
I personally think that's ridiculous, and, as one >> of the very opinionated people involved, a bit insulting. No one wants >> to see either Nova or Placement to fail. > > I believe you're paraphrasing what I said, and I never said I was > speaking for all nova core developers. I don't think anyone working on > placement would intentionally block things nova needs or try to see nova > fail. Here is an example of the concern. In Sydney we talked about adding types to the consumers resource in placement so that nova could use placement for counting quotas [1]. Chris considered it a weird hack but it's pretty straight-forward from a nova consumption point of view. So if placement were separately governed with let's say Chris as PTL, would something like that become a holy war type issue because it's "weird" and convolutes the desire for a minimalist API? I think Chris' stance on this particular item has softened over time as more of a "meh" but it's a worry about extracting with a separate team that is against changes because they are not ideal for Placement yet are needed for a consumer of Placement. I understand this is likely selfish on the part of the nova people that want this (including myself) and maybe close-minded to alternative solutions to the problem (I'm not sure if it's all been thought out end-to-end yet, Mel would likely know the latest on this item). Anyway, I like to have examples when I'm stating something to gain understanding, so that's what I'm trying to do here - explain, with an example, what I said in the tc channel discussion today. 
[1] Line 55 https://etherpad.openstack.org/p/SYD-forum-nova-placement-update -- Thanks, Matt __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Tue Aug 21 16:53:03 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 21 Aug 2018 16:53:03 +0000 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C18195F@EX10MBOX03.pnnl.gov> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <1A3C52DFCD06494D8528644858247BF01C18195F@EX10MBOX03.pnnl.gov> Message-ID: <20180821165302.pwwah7xwq452apng@yuggoth.org> On 2018-08-21 16:38:41 +0000 (+0000), Fox, Kevin M wrote: [...] > You need someone like the TC to be able to step in, in those cases > to help sort that kind of issue out. In the past, the TC was not > willing to do so. My gut feeling though is that is finally > changing. [...] To be clear, it's not that TC members are unwilling to step into these discussions. Rather, it's that most times when a governing body has to tell volunteers to do something they don't want to do, it tends to not be particularly helpful in solving the underlying disagreement. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Kevin.Fox at pnnl.gov Tue Aug 21 17:18:40 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 21 Aug 2018 17:18:40 +0000 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <20180821165302.pwwah7xwq452apng@yuggoth.org> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <1A3C52DFCD06494D8528644858247BF01C18195F@EX10MBOX03.pnnl.gov>, <20180821165302.pwwah7xwq452apng@yuggoth.org> Message-ID: <1A3C52DFCD06494D8528644858247BF01C1819C2@EX10MBOX03.pnnl.gov> Heh. And some things don't change... Having a large project such as OpenStack, made up of large numbers of volunteers, each with their own desires means it will be impossible to make everyone happy all of the time. For the good of the community, the community needs to decide on a common direction, and sometimes individuals need to be asked to go against their own desires for the betterment of the entire community. Yes, that risks an individual contributor leaving. But if it really is in the best interest of the community, others will continue on. We've ignored that for so long, we've built a huge system on letting individuals set their own course without common direction and with their own desires. The projects don't integrate as well as they should, the whole of OpenStack gets overly complex and unwieldy to use or worse, large gaps in user needed functionality, and users end up leaving. I'm really sure at this point that you can't have a project as large as OpenStack without leadership setting a course and sometimes making hard choices for the betterment of the whole. That doesn't mean a benevolent dictator. 
But our self-governed model with elected officials should be a good balance. If they are too unreasonable, they don't get reelected. But not leading isn't an option either anymore. Thanks, Kevin ________________________________________ From: Jeremy Stanley [fungi at yuggoth.org] Sent: Tuesday, August 21, 2018 9:53 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? On 2018-08-21 16:38:41 +0000 (+0000), Fox, Kevin M wrote: [...] > You need someone like the TC to be able to step in, in those cases > to help sort that kind of issue out. In the past, the TC was not > willing to do so. My gut feeling though is that is finally > changing. [...] To be clear, it's not that TC members are unwilling to step into these discussions. Rather, it's that most times when a governing body has to tell volunteers to do something they don't want to do, it tends to not be particularly helpful in solving the underlying disagreement. -- Jeremy Stanley From kennelson11 at gmail.com Tue Aug 21 18:10:12 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 21 Aug 2018 11:10:12 -0700 Subject: [openstack-dev] New Contributor In-Reply-To: References: <49b15368-a67b-dfcb-0501-6b527a42c71c@openstack.org> <58b5c6ab-fe55-25ae-769b-6cc5f55f9b03@mixmax.com> Message-ID: Ivoline, If you need help getting on IRC[1] or setup with any of the other tools we use, please let me know! [1] https://docs.openstack.org/contributors/common/irc.html -Kendall Nelson On Tue, Aug 21, 2018 at 4:44 AM Telles Nobrega wrote: > That is great to hear. Please join us at #openstack-sahara so we can > discuss a little more of what work you want to do. > > Welcome aboard. > > On Tue, Aug 21, 2018 at 5:22 AM Ivoline Ngong > wrote: > >> Hello Kendall and Telles, >> >> Thanks so much for the warm welcome. I feel at home already.
>> The links sent were quite helpful and gave me an insight into what >> OpenStack is all about. >> After reading lightly about the different projects, the Sahara project >> caught my attention. >> >> Probably because I am interested in data science. I would love to explore >> the Sahara project some more. >> >> Cheers, >> Ivoline >> >> >> >> On Mon, Aug 20, 2018 8:42 PM, Telles Nobrega tenobreg at redhat.com wrote: >> >>> Hi Ivoline, >>> >>> Also a little late but wanted to say welcome aboard, hopefully you will >>> find a very welcoming community here and of course a lot of work to do. >>> >>> I work with Sahara, the big data processing project of OpenStack, we >>> need help for sure. >>> >>> If this area interests you in any way, feel free to join us at >>> #openstack-sahara on IRC or email me and we can send some work in your >>> direction. >>> >>> >>> On Mon, Aug 20, 2018 at 2:37 PM Kendall Nelson >>> wrote: >>> >>> Hello Ivoline, >>> >>> While I'm a little late to the party, I still wanted to say welcome and >>> offer my help :) >>> >>> If you have any questions about the links you've been sent, I'm >>> happy to answer them! I can also help you find/get started with a team and >>> introduce you to community members whenever you're ready. >>> >>> -Kendall Nelson (diablo_rojo) >>> >>> >>> On Mon, 20 Aug 2018, 4:08 am Ivoline Ngong, >>> wrote: >>> >>> Thanks so much for the help, Josh and Thierry. I'll check out the links and >>> hopefully find a way forward from there. Will get back here in case I have >>> any questions. >>> >>> Cheers, >>> Ivoline >>> >>> On Mon, Aug 20, 2018, 12:01 Thierry Carrez >>> wrote: >>> >>> Ivoline Ngong wrote: >>> > I am Ivoline Ngong. I am a Cameroonian who lives in Turkey. I will >>> love >>> > to contribute to Open source through OpenStack. I code in Java and >>> > Python and I think OpenStack is a good fit for me. >>> > I'll appreciate it if you can point me in the right direction on how I >>> > can get started.
>>> >>> Hi Ivoline, >>> >>> Welcome to the OpenStack community ! >>> >>> The OpenStack Technical Committee maintains a list of areas in most need >>> of help: >>> >>> https://governance.openstack.org/tc/reference/help-most-needed.html >>> >>> Depending on your interest, you could pick one of those projects and >>> reach out to the mentioned contact points. >>> >>> For more general information on how to contribute, you can check out our >>> contribution portal: >>> >>> https://www.openstack.org/community/ >>> >>> -- >>> Thierry Carrez (ttx) >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> -- >>> >>> TELLES NOBREGA >>> >>> SOFTWARE ENGINEER >>> >>> Red Hat Brasil >>> >>> Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo >>> >>> tenobreg at redhat.com >>> >>> TRIED. TESTED. TRUSTED. >>> Red Hat é reconhecida entre as melhores empresas para trabalhar no >>> Brasil pelo Great Place to Work. 
>>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- > > TELLES NOBREGA > > SOFTWARE ENGINEER > > Red Hat Brasil > > Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo > > tenobreg at redhat.com > > TRIED. TESTED. TRUSTED. > Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil > pelo Great Place to Work. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Aug 21 18:13:16 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 21 Aug 2018 18:13:16 +0000 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C1819C2@EX10MBOX03.pnnl.gov> References: <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <1A3C52DFCD06494D8528644858247BF01C18195F@EX10MBOX03.pnnl.gov> <20180821165302.pwwah7xwq452apng@yuggoth.org> <1A3C52DFCD06494D8528644858247BF01C1819C2@EX10MBOX03.pnnl.gov> Message-ID: <20180821181315.hfnauwr224himxkk@yuggoth.org> On 2018-08-21 17:18:40 +0000 (+0000), Fox, Kevin M wrote: [...] > I'm really sure at this point that you can't have a project as > large as OpenStack without leadership setting a course and > sometimes making hard choices for the betterment of the whole. > That doesn't mean a benevolent dictator. 
But our self govened > model with elected officials should be a good balance. If they are > too unreasonable, they don't get reelected. But not leading isn't > an option either anymore. [...] Divining a consensual direction in which to steer the community is not the same thing as telling people what to do, but is still very much leadership. But I'd rather stop dancing in generalities and just talk about concrete examples instead. In this case, separation of governance between Nova and (as of yet unnamed) placement teams. If the Nova team is against wholly handing over control of the placement service to the current placement contributors, then having the OpenStack Technical Committee tell them to get over it isn't the way to foster productive future relationships between those two groups of people. The placement team is already entirely empowered, should they wish, to fork the placement service out of the nova repository and then apply to the TC to have that recognized as a separate team but doing so in no way guarantees the Nova team will work with them to use that version of placement and deprecate the one on which they currently rely. For that, there needs to be a positive working relationship, one we can't simply demand into being, so it's in their best interests to work things out amicably and directly instead of asking someone else (the TC) to decide this for them. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kennelson11 at gmail.com Tue Aug 21 18:15:47 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 21 Aug 2018 11:15:47 -0700 Subject: [openstack-dev] [TC][Searchlight] Setting up milestones for Searchlight on Launchpad In-Reply-To: References: <0ff3b148-2e46-02ba-9835-796540e7a6df@openstack.org> Message-ID: Hello Trinh, Since Searchlight is in flux right now, it might be an appropriate time to consider migrating to StoryBoard from Launchpad. Since you are working on getting organized and figuring out the state of Searchlight, it might make more sense to do that in the new tool, rather than doing it now in Launchpad and then again in the future when you migrate to Storyboard down the road. If this sounds like something you want to look into, let me know and I can do a test migration into our dev environment. If that works out, I expect we could migrate you before the end of the week. -Kendall Nelson (diablo_rojo) On Tue, Aug 21, 2018 at 2:21 AM Trinh Nguyen wrote: > Hi Thierry, > > I just saw that link. Thanks :) > > Because I couldn't contact any of the core members I emailed this list. I > will update the searchlight-core as planned after I am added. > > Thanks for your response, > > *Trinh Nguyen *| Founder & Chief Architect > > > > *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * > > > > On Tue, Aug 21, 2018 at 6:15 PM Thierry Carrez > wrote: > >> Trinh Nguyen wrote: >> > In an effort to get Searchlight back on track, I would like to set up >> > milestones as well as clean up the incomplete bugs, blueprints etc. on >> > Launchpad [1] I was added to the Searchlight Drivers team but I still >> > can not touch the milestone configuration. >> >> As a member of the "maintainer" team in Launchpad you should be able to >> register a series ("stein") and then add milestones to that series.
You >> should see a "Register a series" link under "Series and milestones" at >> https://launchpad.net/searchlight >> >> > In addition, I would like to move forward with unreviewed patched on >> > Gerrit so I need PTL privileges on Searchlight project. Do I have to >> > wait for [2] to be merged? >> >> For the TC to step in and add you to searchlight-core, yes, we'll have >> to wait for the merging of that patch. >> >> To go faster, you could ask any of the existing members in that group to >> directly add you: >> >> https://review.openstack.org/#/admin/groups/964,members >> >> (NB: this group looks like it should be updated :) ) >> >> -- >> Thierry Carrez (ttx) >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Tue Aug 21 18:21:00 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 21 Aug 2018 11:21:00 -0700 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> Message-ID: On Tue, 21 Aug 2018 10:28:26 +0100 (BST), Chris Dent wrote: > On Mon, 20 Aug 2018, Matt Riedemann wrote: > >> Here is an example of the concern. 
In Sydney we talked about adding types to >> the consumers resource in placement so that nova could use placement for >> counting quotas [1]. Chris considered it a weird hack but it's pretty >> straight-forward from a nova consumption point of view. So if placement were >> separately governed with let's say Chris as PTL, would something like that >> become a holy war type issue because it's "weird" and convolutes the desire >> for a minimalist API? I think Chris' stance on this particular item has >> softened over time as more of a "meh" but it's a worry about extracting with >> a separate team that is against changes because they are not ideal for >> Placement yet are needed for a consumer of Placement. I understand this is >> likely selfish on the part of the nova people that want this (including >> myself) and maybe close-minded to alternative solutions to the problem (I'm >> not sure if it's all been thought out end-to-end yet, Mel would likely know >> the latest on this item). Anyway, I like to have examples when I'm stating >> something to gain understanding, so that's what I'm trying to do here - >> explain, with an example, what I said in the tc channel discussion today. > > Since we're airing things out (which I think is a good thing, at > least in the long run), I'll add to this. > > I think that's a pretty good example of where I did express some > resistance, especially since were it to come up again, I still would > express some (see below). But let's place that resistance in some > context. > > In the epic irc discussion you mentioned that one fear is that I > might want to change the handling of microversions [2] because I'm > somewhat famously ambivalent about them. That's correct, I am. 
> However, I would hope that the fact that placement has one of the > easier and more flexible microversions systems around (written by > me) and I went to the trouble to extract it to a library [3] and I'm > the author of the latest revision on how to microversion [4] is > powerful evidence that once consensus is reached I will do my utmost > to make things align with our shared plans and goals. > > So, with the notion of allocation or consumer types (both have been > discussed): If we start from the position that I've been with > placement from very early on and am cognizant of its several goals > and at the same time also aware of its limited "human resources" it > seems normal and appropriate to me that at least some members of the > group responsible for making it must make sure that we work to > choose the right things (of several choices) to do, in part by > rigorously questioning additional features when existing planned > features are not yet done. In this case we might ask: is it right to > focus on incompletely thought out consumer type management for the > eventual support of quota handling (and other introspection) when we > haven't yet satisfied what has been described by some downstream > people (eglynn is an example, to be specific) as job 1: getting shared > disk working correctly (which we still haven't managed across the > combined picture of nova and placement)? On this, my recollection of what happened was that I had a topic for the PTG to discuss *how* we could solve the problem of quota resource counting by querying placement for resource usage information, given that one instance of placement can be shared among multiple nova deployments, for example. As we know, there is no way to differentiate in placement which resources Nova A PUT /allocations into placement vs which resources Nova B PUT /allocations into placement.
I was looking for input from the placement experts on how that could possibly be supported, how Nova A could GET /usages for only itself and not all other Novas. From what I remember, the response was that the idea of being able to differentiate between the owners of resource allocations was disliked and I felt I had no direction to go forward after the discussion, even to do the legwork myself to research or contribute support to placement. I never thought we should *focus* on the lower priority quota handling work vs a higher priority item like shared storage support. But I had hoped to get some direction on what work or research I could do on my own to make progress toward being able to solve my quota problem, after a PTG discussion about it. Not looking for a response here -- just sharing my experience since the quota handling discussion was brought up. > From my perspective questioning additional features, so that they > are well proven, is simply part of the job and we all should be > doing it. If we are never hitting questions and disagreements we are > almost certainly running blind and our results will be less good. > > Once we've hashed things out, I'll help make what we've chosen > happen. The evidence of this is everywhere. Consider this: I've > known (at least subconsciously) about the big reveal in yesterday's > IRC discussion for a long time, but I keep working to make nova, > placement and OpenStack better. Day in, day out, in the face of what > is perhaps the biggest insult to my professional integrity that I've > ever experienced. If this were a different time some portion of "we" > would need to do pistols at dawn, but that's dumb. I just want to > get on with making stuff. The right stuff. Please don't question my > commitment, but do question my designs and plans and help me make > them the best they can be. I hope it is understood that the "reveal" Matt said in yesterday's IRC discussion does not represent me or other members of the team. 
That is really something where people would have to speak for themselves. > Elephant alert, to keep this healthy full exposure rolling: The kind > of questioning and "proving" described above happens all the time in > Nova with specs and other proposals that are presented. We ask > proposers to demonstrate that their ideas are necessary and sound, > and if they are not _or_ we don't have time, we say "no" or "later". > This is good and correct and part of the job and helps make nova the > best it can be given the many constraints it experiences. As far as > I can tell the main differences between me asking questions about > proposed placement features when they are presented by nova cores > and the more general nova-spec situation is who is being subjected > to the questions and by whom. > >> [1] Line 55 https://etherpad.openstack.org/p/SYD-forum-nova-placement-update > [2] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-20.log.html#t2018-08-20T20:35:51 > [3] https://pypi.org/project/microversion_parse/ > [4] http://specs.openstack.org/openstack/api-sig/guidelines/api_interoperability.html > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From kennelson11 at gmail.com Tue Aug 21 18:30:23 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 21 Aug 2018 11:30:23 -0700 Subject: [openstack-dev] [Freezer] Reactivate the team In-Reply-To: References: Message-ID: If you also wanted to add migrating from Launchpad to Storyboard to this list I am happy to help do the test migration and coordinate the real migration. 
-Kendall (diablo_rojo) On Fri, Aug 17, 2018 at 6:50 PM Trinh Nguyen wrote: > Dear Freezer team, > > Since we have appointed a new PTL for the Stein cycle (gengchc2), I > suggest that we should reactivate the team by following these actions: > > 1. Have a team meeting to formalize the new leader as well as discuss > the new direction. > 2. Grant PTL privileges for gengchc2 on Launchpad and Project Gerrit > repositories. > 3. Reorganize the core team to make sure we have enough active core > reviewers for new patches. > 4. Clean up bug reports, blueprints on Launchpad, as well as > unreviewed patches on Gerrit. > > I hope that we can revive Freezer. > > Best regards, > > *Trinh Nguyen *| Founder & Chief Architect > > > > *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Aug 21 18:31:22 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 21 Aug 2018 14:31:22 -0400 Subject: [openstack-dev] [goals][python3] please check with me before submitting any zuul migration patches Message-ID: <1534875835-sup-7809@lrrr.local> We have a few folks eager to join in and contribute to the python3 goal by helping with the patches to migrate zuul settings. That's great! However, many of the patches being proposed are incorrect, which means there is either something wrong with the tool or the way it is used. The intent was to have a very small group, 3-4 people, who knew how the tools worked to propose all of those patches. Having incorrect patches can break the CI for a project, so we need to be especially careful with them. 
We do not want every team writing the patches for themselves, and we do not want lots and lots of people who we have to train to use the tools. If you are not one of the people already listed as a goal champion on [1], please PLEASE stop writing patches and get in touch with me personally and directly (via IRC or email) BEFORE doing any more work on the goal. Thanks, Doug [1] https://governance.openstack.org/tc/goals/stein/python3-first.html From no-reply at openstack.org Tue Aug 21 18:35:45 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Tue, 21 Aug 2018 18:35:45 -0000 Subject: [openstack-dev] octavia-dashboard 2.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for octavia-dashboard for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/octavia-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/octavia-dashboard/log/?h=stable/rocky Release notes for octavia-dashboard can be found at: https://docs.openstack.org/releasenotes/octavia-dashboard/ If you find an issue that could be considered release-critical, please file it at: https://storyboard.openstack.org/#!/project/909 and tag it *rocky-rc-potential* to bring it to the octavia-dashboard release crew's attention. From no-reply at openstack.org Tue Aug 21 18:36:27 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Tue, 21 Aug 2018 18:36:27 -0000 Subject: [openstack-dev] neutron-lbaas-dashboard 5.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for neutron-lbaas-dashboard for the end of the Rocky cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/neutron-lbaas-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/neutron-lbaas-dashboard/log/?h=stable/rocky Release notes for neutron-lbaas-dashboard can be found at: https://docs.openstack.org/releasenotes/neutron-lbaas-dashboard/ If you find an issue that could be considered release-critical, please file it at: https://storyboard.openstack.org/#!/project/907 and tag it *rocky-rc-potential* to bring it to the neutron-lbaas-dashboard release crew's attention. From no-reply at openstack.org Tue Aug 21 18:40:26 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Tue, 21 Aug 2018 18:40:26 -0000 Subject: [openstack-dev] octavia 3.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for octavia for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/octavia/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/octavia/log/?h=stable/rocky Release notes for octavia can be found at: https://docs.openstack.org/releasenotes/octavia/ From openstack at nemebean.com Tue Aug 21 19:00:41 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 21 Aug 2018 14:00:41 -0500 Subject: [openstack-dev] [barbican][oslo][release][requirements] FFE request for castellan In-Reply-To: <8f8add49-cb63-3452-cc7c-c812bfab0877@nemebean.com> References: <1533914109.23178.37.camel@redhat.com> <20180814185634.GA26658@sm-workstation> <1534352313.5705.35.camel@redhat.com> <8f8add49-cb63-3452-cc7c-c812bfab0877@nemebean.com> Message-ID: Because castellan is in global-requirements, we need an FFE from requirements too. Can someone from the requirements team respond to the review? Thanks. On 08/16/2018 04:34 PM, Ben Nemec wrote: > The backport has merged and I've proposed the release here: > https://review.openstack.org/592746 > > On 08/15/2018 11:58 AM, Ade Lee wrote: >> Done. >> >> https://review.openstack.org/#/c/592154/ >> >> Thanks, >> Ade >> >> On Wed, 2018-08-15 at 09:20 -0500, Ben Nemec wrote: >>> >>> On 08/14/2018 01:56 PM, Sean McGinnis wrote: >>>>> On 08/10/2018 10:15 AM, Ade Lee wrote: >>>>>> Hi all, >>>>>> >>>>>> I'd like to request a feature freeze exception to get the >>>>>> following >>>>>> change in for castellan. >>>>>> >>>>>> https://review.openstack.org/#/c/575800/ >>>>>> >>>>>> This extends the functionality of the vault backend to provide >>>>>> previously uninmplemented functionality, so it should not break >>>>>> anyone. >>>>>> >>>>>> The castellan vault plugin is used behind barbican in the >>>>>> barbican- >>>>>> vault plugin.  We'd like to get this change into Rocky so that >>>>>> we can >>>>>> release Barbican with complete functionality on this backend >>>>>> (along >>>>>> with a complete set of passing functional tests). 
>>>>> >>>>> This does seem fairly low risk since it's just implementing a >>>>> function that >>>>> previously raised a NotImplemented exception.  However, with it >>>>> being so >>>>> late in the cycle I think we need the release team's input on >>>>> whether this >>>>> is possible.  Most of the release FFE's I've seen have been for >>>>> critical >>>>> bugs, not actual new features.  I've added that tag to this >>>>> thread so >>>>> hopefully they can weigh in. >>>>> >>>> >>>> As far as releases go, this should be fine. If this doesn't affect >>>> any other >>>> projects and would just be a late merging feature, as long as the >>>> castellan >>>> team has considered the risk of adding code so late and is >>>> comfortable with >>>> that, this is OK. >>>> >>>> Castellan follows the cycle-with-intermediary release model, so the >>>> final Rocky >>>> release just needs to be done by next Thursday. I do see the >>>> stable/rocky >>>> branch has already been created for this repo, so it would need to >>>> merge to >>>> master first (technically stein), then get cherry-picked to >>>> stable/rocky. >>> >>> Okay, sounds good.  It's already merged to master so we're good >>> there. >>> >>> Ade, can you get the backport proposed? 
>>> >>> _____________________________________________________________________ >>> _____ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs >>> cribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From prometheanfire at gentoo.org Tue Aug 21 19:16:55 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 21 Aug 2018 14:16:55 -0500 Subject: [openstack-dev] [barbican][oslo][release][requirements] FFE request for castellan In-Reply-To: References: <1533914109.23178.37.camel@redhat.com> <20180814185634.GA26658@sm-workstation> <1534352313.5705.35.camel@redhat.com> <8f8add49-cb63-3452-cc7c-c812bfab0877@nemebean.com> Message-ID: <20180821191655.xw37baq4q6ikfqts@gentoo.org> On 18-08-21 14:00:41, Ben Nemec wrote: > Because castellan is in global-requirements, we need an FFE from > requirements too. Can someone from the requirements team respond to the > review? Thanks. > > On 08/16/2018 04:34 PM, Ben Nemec wrote: > > The backport has merged and I've proposed the release here: > > https://review.openstack.org/592746 > > > > On 08/15/2018 11:58 AM, Ade Lee wrote: > > > Done. 
> > > > > > https://review.openstack.org/#/c/592154/ > > > > > > Thanks, > > > Ade > > > > > > On Wed, 2018-08-15 at 09:20 -0500, Ben Nemec wrote: > > > > > > > > On 08/14/2018 01:56 PM, Sean McGinnis wrote: > > > > > > On 08/10/2018 10:15 AM, Ade Lee wrote: > > > > > > > Hi all, > > > > > > > > > > > > > > I'd like to request a feature freeze exception to get the > > > > > > > following > > > > > > > change in for castellan. > > > > > > > > > > > > > > https://review.openstack.org/#/c/575800/ > > > > > > > > > > > > > > This extends the functionality of the vault backend to provide > > > > > > > previously uninmplemented functionality, so it should not break > > > > > > > anyone. > > > > > > > > > > > > > > The castellan vault plugin is used behind barbican in the > > > > > > > barbican- > > > > > > > vault plugin.  We'd like to get this change into Rocky so that > > > > > > > we can > > > > > > > release Barbican with complete functionality on this backend > > > > > > > (along > > > > > > > with a complete set of passing functional tests). > > > > > > > > > > > > This does seem fairly low risk since it's just implementing a > > > > > > function that > > > > > > previously raised a NotImplemented exception.  However, with it > > > > > > being so > > > > > > late in the cycle I think we need the release team's input on > > > > > > whether this > > > > > > is possible.  Most of the release FFE's I've seen have been for > > > > > > critical > > > > > > bugs, not actual new features.  I've added that tag to this > > > > > > thread so > > > > > > hopefully they can weigh in. > > > > > > > > > > > > > > > > As far as releases go, this should be fine. If this doesn't affect > > > > > any other > > > > > projects and would just be a late merging feature, as long as the > > > > > castellan > > > > > team has considered the risk of adding code so late and is > > > > > comfortable with > > > > > that, this is OK. 
> > > > > > > > > > Castellan follows the cycle-with-intermediary release model, so the > > > > > final Rocky > > > > > release just needs to be done by next Thursday. I do see the > > > > > stable/rocky > > > > > branch has already been created for this repo, so it would need to > > > > > merge to > > > > > master first (technically stein), then get cherry-picked to > > > > > stable/rocky. > > > > > > > > Okay, sounds good.  It's already merged to master so we're good > > > > there. > > > > > > > > Ade, can you get the backport proposed? > > > > I've approved it for a UC only bump -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ed at leafe.com Tue Aug 21 19:44:04 2018 From: ed at leafe.com (Ed Leafe) Date: Tue, 21 Aug 2018 14:44:04 -0500 Subject: [openstack-dev] UC Elections will not be held Message-ID: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> As there were only 2 nominations for the 2 open seats, elections will not be needed. Congratulations to Matt Van Winkle and Joseph Sandoval! -- Ed Leafe From melwittt at gmail.com Tue Aug 21 19:53:43 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 21 Aug 2018 12:53:43 -0700 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> Message-ID: On Tue, 21 Aug 2018 06:50:56 -0500, Matt Riedemann wrote: > At this point, I think we're at: > > 1. 
Should placement be extracted into it's own git repo in Stein while > nova still has known major issues which will have dependencies on > placement changes, mainly modeling affinity? > > 2. If we extract, does it go under compute governance or a new project > with a new PTL. > > As I've said, I personally believe that unless we have concrete plans > for the big items in #1, we shouldn't hold up the extraction. We said in > Dublin we wouldn't extract to a new git repo in Rocky but we'd work up > to that point so we could do it in Stein, so this shouldn't surprise > anyone. The actual code extraction and re-packaging and all that is > going to be the biggest technical issue with all of this, and will > likely take all of stein to complete it after all the bugs are shaken out. > > For #2, I think for now, in the interim, while we deal with the > technical headache of the code extraction itself, it's best to leave the > new repo under compute governance so the existing team is intact and we > don't conflate the people issue with the technical issue at the same > time. Get the hard technical part done first, and then we can move it > out of compute governance. Once it's in its own git repo, we can change > the core team as needed but I think it should be initialized with > existing nova-core. I'm in support of extracting placement into its own git repo because Chris has done a lot of work to reduce dependencies in placement and moving it into its own repo would help in not having to keep chasing that. As has been said before, I think all of us agree that placement should be separate as an end goal. The question is when to fully separate it from governance. It's true that we don't have concrete plans for affinity modeling and shared storage modeling. But I think we do have concrete plans for vGPU enhancements (being able to have different vGPU types on one compute host and adding support for traits). 
vGPU support is an important and highly sought after feature for operators and users, as we witnessed at the last Summit in Vancouver. vGPU support is currently using a flat resource provider structure that needs to be migrated to nested in order to do the enhancement work, and that's how the reshaper work came about. (Reshaper work will migrate a flat resource provider structure to a nested one.) We have the nested resource provider support in placement but we need to integrate the Nova side, leveraging the reshaper code. The reshaper code is still going through code review, then next we have the integration to do. I think things are bound to break when we integrate it, just because nothing is ever perfect, as much as we scrutinize it and the real test is when we start using it for real. I think going through this integration would be best done *before* extraction to a new repo. But given that there is never a "good" time to extract something to a new repo, I am OK with the idea of doing the extraction first, if that is what most people want to do. What I'm concerned about on the governance piece is how things look as far as project priorities between the two projects if they are split. Affinity modeling and shared storage support are compute features OpenStack operators and users need. Operators need affinity modeling in the placement is needed to achieve parity for affinity scheduling with multiple cells. That means, affinity scheduling in Nova with multiple cells is susceptible to races and does *not* work as well as the previous single cell support. Shared storage support is something operators have badly needed for years now and was envisioned to be solved with placement. Given all of that, I'm not seeing how *now* is a good time to separate the placement project under separate governance with separate goals and priorities. 
If operators need things for compute, that are well-known and that placement was created to solve, how will placement have a shared interest in solving compute problems, if it is not part of the compute project? I understand that placement wants to appeal to more consumers (by way of splitting governance) but at present, Nova is the only consumer. And by consumer, I mean Nova is the only one consuming data *from* placement and relying on it to do something. I don't understand why it's really important *right now* to separate priorities before there are other viable consumers. I would like to share priorities and goals, for now, under the compute program to best serve operators and users in solving the specific problems I've mentioned in my replies to this thread. Best, -melanie From edgar.magana at workday.com Tue Aug 21 19:57:38 2018 From: edgar.magana at workday.com (Edgar Magana) Date: Tue, 21 Aug 2018 19:57:38 +0000 Subject: [openstack-dev] [User-committee] UC Elections will not be held In-Reply-To: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> References: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> Message-ID: Congratulations Matt and Joseph! Our community is in good hands with your leadership, looking forward to seeing you in Berlin. Do not hesitate to ask for help at any time. Edgar On 8/21/18, 12:45 PM, "Ed Leafe" wrote: As there were only 2 nominations for the 2 open seats, elections will not be needed. Congratulations to Matt Van Winkle and Joseph Sandoval! 
-- Ed Leafe _______________________________________________ User-committee mailing list User-committee at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee From amy at demarco.com Tue Aug 21 20:26:44 2018 From: amy at demarco.com (Amy Marrich) Date: Tue, 21 Aug 2018 15:26:44 -0500 Subject: [openstack-dev] [Openstack-sigs] UC Elections will not be held In-Reply-To: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> References: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> Message-ID: Congrats to VW and Joseph. Thank you to Saverio for his hard work. And lastly thank you to Ed, Chandan, and Mohamed for serving as our election officials! Amy (spotz) User Committee On Tue, Aug 21, 2018 at 2:44 PM, Ed Leafe wrote: > As there were only 2 nominations for the 2 open seats, elections will not > be needed. Congratulations to Matt Van Winkle and Joseph Sandoval! > > -- Ed Leafe > > > > > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Aug 21 20:41:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 21 Aug 2018 16:41:11 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> Message-ID: <1534883437-sup-4403@lrrr.local> Excerpts from melanie witt's message of 2018-08-21 12:53:43 -0700: > On Tue, 21 Aug 2018 06:50:56 -0500, Matt Riedemann wrote: > > At this point, I think we're at: > > > > 1. Should placement be extracted into it's own git repo in Stein while > > nova still has known major issues which will have dependencies on > > placement changes, mainly modeling affinity? > > > > 2. If we extract, does it go under compute governance or a new project > > with a new PTL. > > > > As I've said, I personally believe that unless we have concrete plans > > for the big items in #1, we shouldn't hold up the extraction. We said in > > Dublin we wouldn't extract to a new git repo in Rocky but we'd work up > > to that point so we could do it in Stein, so this shouldn't surprise > > anyone. The actual code extraction and re-packaging and all that is > > going to be the biggest technical issue with all of this, and will > > likely take all of stein to complete it after all the bugs are shaken out. > > > > For #2, I think for now, in the interim, while we deal with the > > technical headache of the code extraction itself, it's best to leave the > > new repo under compute governance so the existing team is intact and we > > don't conflate the people issue with the technical issue at the same > > time. Get the hard technical part done first, and then we can move it > > out of compute governance. Once it's in its own git repo, we can change > > the core team as needed but I think it should be initialized with > > existing nova-core. 
> > I'm in support of extracting placement into its own git repo because > Chris has done a lot of work to reduce dependencies in placement and > moving it into its own repo would help in not having to keep chasing > that. As has been said before, I think all of us agree that placement > should be separate as an end goal. The question is when to fully > separate it from governance. > > It's true that we don't have concrete plans for affinity modeling and > shared storage modeling. But I think we do have concrete plans for vGPU > enhancements (being able to have different vGPU types on one compute > host and adding support for traits). vGPU support is an important and > highly sought after feature for operators and users, as we witnessed at > the last Summit in Vancouver. vGPU support is currently using a flat > resource provider structure that needs to be migrated to nested in order > to do the enhancement work, and that's how the reshaper work came about. > (Reshaper work will migrate a flat resource provider structure to a > nested one.) > > We have the nested resource provider support in placement but we need to > integrate the Nova side, leveraging the reshaper code. The reshaper code > is still going through code review, then next we have the integration to > do. I think things are bound to break when we integrate it, just because > nothing is ever perfect, as much as we scrutinize it and the real test > is when we start using it for real. I think going through this > integration would be best done *before* extraction to a new repo. But > given that there is never a "good" time to extract something to a new > repo, I am OK with the idea of doing the extraction first, if that is > what most people want to do. > > What I'm concerned about on the governance piece is how things look as > far as project priorities between the two projects if they are split. > Affinity modeling and shared storage support are compute features > OpenStack operators and users need. 
Operators need affinity modeling in > the placement is needed to achieve parity for affinity scheduling with > multiple cells. That means, affinity scheduling in Nova with multiple > cells is susceptible to races and does *not* work as well as the > previous single cell support. Shared storage support is something > operators have badly needed for years now and was envisioned to be > solved with placement. > > Given all of that, I'm not seeing how *now* is a good time to separate > the placement project under separate governance with separate goals and > priorities. If operators need things for compute, that are well-known > and that placement was created to solve, how will placement have a > shared interest in solving compute problems, if it is not part of the > compute project? > Who are candidates to be members of a review team for the placement repository after the code is moved out of openstack/nova? How many of them are also members of the nova-core team? What do you think those folks are more interested in working on than the things you listed as needing to be done to support the nova use cases? What can they do to reassure you that they will work on the items nova needs, regardless of the governance structure? Doug From chris.friesen at windriver.com Tue Aug 21 20:55:26 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 21 Aug 2018 14:55:26 -0600 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> Message-ID: <5B7C7C3E.8040009@windriver.com> On 08/21/2018 01:53 PM, melanie witt wrote: > Given all of that, I'm not seeing how *now* is a good time to separate the > placement project under separate governance with separate goals and priorities. > If operators need things for compute, that are well-known and that placement was > created to solve, how will placement have a shared interest in solving compute > problems, if it is not part of the compute project? As someone who is not involved in the governance of nova, this seems like kind of an odd statement for an open-source project. From the outside, it seems like there is a fairly small pool of active placement developers. Either the placement developers are willing to implement the capabilities desired by compute or they're not. And if they're not, I don't see how being under compute governance would resolve that, since the only official hard leverage compute governance has is refusing to review/merge placement patches (which wouldn't really help implement compute's desires anyway). Chris From ivolinengong at gmail.com Tue Aug 21 21:15:38 2018 From: ivolinengong at gmail.com (Ivoline Ngong) Date: Tue, 21 Aug 2018 21:15:38 +0000 Subject: [openstack-dev] New Contributor In-Reply-To: References: <49b15368-a67b-dfcb-0501-6b527a42c71c@openstack.org> <58b5c6ab-fe55-25ae-769b-6cc5f55f9b03@mixmax.com> Message-ID: Thanks Kendall! With the link you sent, I successfully joined IRC. On Tue, Aug 21, 2018 9:10 PM, Kendall Nelson kennelson11 at gmail.com wrote: Ivoline, If you need help getting on IRC[1] or set up with any of the other tools we use, please let me know! 
[1]https://docs.openstack.org/contributors/common/irc.html -Kendall Nelson On Tue, Aug 21, 2018 at 4:44 AM Telles Nobrega wrote: That is great to hear. Please join us at #openstack-sahara so we can discuss a little more of what work you want to do. Welcome aboard. On Tue, Aug 21, 2018 at 5:22 AM Ivoline Ngong wrote: Hello Kendall and Telles, Thanks so much for the warm welcome. I feel at home already. The links sent were quite helpful and gave me an insight into what OpenStack is all about. After reading lightly about the different projects, the Sahara project caught my attention, probably because I am interested in data science. I would love to explore the Sahara project some more. Cheers, Ivoline On Mon, Aug 20, 2018 8:42 PM, Telles Nobrega tenobreg at redhat.com wrote: Hi Ivoline, Also a little late but wanted to say welcome aboard; hopefully you will find a very welcoming community here and of course a lot of work to do. I work with Sahara, the big data processing project of OpenStack, and we need help for sure. If this area interests you in any way, feel free to join us at #openstack-sahara on IRC or email me and we can send some work your way. On Mon, Aug 20, 2018 at 2:37 PM Kendall Nelson wrote: Hello Ivoline, While I'm a little late to the party, I still wanted to say welcome and offer my help :) If you have any questions about the links you've been sent, I'm happy to answer them! I can also help you find/get started with a team and introduce you to community members whenever you're ready. -Kendall Nelson (diablo_rojo) On Mon, 20 Aug 2018, 4:08 am Ivoline Ngong, wrote: Thanks so much for the help, Josh and Thierry. I'll check out the links and hopefully find a way forward from there. Will get back here in case I have any questions. Cheers, Ivoline On Mon, Aug 20, 2018, 12:01 Thierry Carrez wrote: Ivoline Ngong wrote: > I am Ivoline Ngong. I am a Cameroonian who lives in Turkey. I would love > to contribute to open source through OpenStack. 
I code in Java and > Python and I think OpenStack is a good fit for me. > I'll appreciate it if you can point me in the right direction on how I > can get started. Hi Ivoline, Welcome to the OpenStack community! The OpenStack Technical Committee maintains a list of areas in most need of help: https://governance.openstack.org/tc/reference/help-most-needed.html Depending on your interest, you could pick one of those projects and reach out to the mentioned contact points. For more general information on how to contribute, you can check out our contribution portal: https://www.openstack.org/community/ -- Thierry Carrez (ttx) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat is recognized among the best companies to work for in Brazil by Great Place to Work. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Tue Aug 21 22:05:00 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 21 Aug 2018 15:05:00 -0700 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <1534883437-sup-4403@lrrr.local> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local> Message-ID: <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> On Tue, 21 Aug 2018 16:41:11 -0400, Doug Hellmann wrote: > Excerpts from melanie witt's message of 2018-08-21 12:53:43 -0700: >> On Tue, 21 Aug 2018 06:50:56 -0500, Matt Riedemann wrote: >>> At this point, I think we're at: >>> >>> 1. Should placement be extracted into its own git repo in Stein while >>> nova still has known major issues which will have dependencies on >>> placement changes, mainly modeling affinity? >>> >>> 2.
If we extract, does it go under compute governance or a new project >>> with a new PTL. >>> >>> As I've said, I personally believe that unless we have concrete plans >>> for the big items in #1, we shouldn't hold up the extraction. We said in >>> Dublin we wouldn't extract to a new git repo in Rocky but we'd work up >>> to that point so we could do it in Stein, so this shouldn't surprise >>> anyone. The actual code extraction and re-packaging and all that is >>> going to be the biggest technical issue with all of this, and will >>> likely take all of stein to complete it after all the bugs are shaken out. >>> >>> For #2, I think for now, in the interim, while we deal with the >>> technical headache of the code extraction itself, it's best to leave the >>> new repo under compute governance so the existing team is intact and we >>> don't conflate the people issue with the technical issue at the same >>> time. Get the hard technical part done first, and then we can move it >>> out of compute governance. Once it's in its own git repo, we can change >>> the core team as needed but I think it should be initialized with >>> existing nova-core. >> >> I'm in support of extracting placement into its own git repo because >> Chris has done a lot of work to reduce dependencies in placement and >> moving it into its own repo would help in not having to keep chasing >> that. As has been said before, I think all of us agree that placement >> should be separate as an end goal. The question is when to fully >> separate it from governance. >> >> It's true that we don't have concrete plans for affinity modeling and >> shared storage modeling. But I think we do have concrete plans for vGPU >> enhancements (being able to have different vGPU types on one compute >> host and adding support for traits). vGPU support is an important and >> highly sought after feature for operators and users, as we witnessed at >> the last Summit in Vancouver. 
vGPU support is currently using a flat >> resource provider structure that needs to be migrated to nested in order >> to do the enhancement work, and that's how the reshaper work came about. >> (Reshaper work will migrate a flat resource provider structure to a >> nested one.) >> >> We have the nested resource provider support in placement but we need to >> integrate the Nova side, leveraging the reshaper code. The reshaper code >> is still going through code review; next we have the integration to >> do. I think things are bound to break when we integrate it, just because >> nothing is ever perfect, as much as we scrutinize it, and the real test >> is when we start using it for real. I think going through this >> integration would be best done *before* extraction to a new repo. But >> given that there is never a "good" time to extract something to a new >> repo, I am OK with the idea of doing the extraction first, if that is >> what most people want to do. >> >> What I'm concerned about on the governance piece is how things look as >> far as project priorities between the two projects if they are split. >> Affinity modeling and shared storage support are compute features >> OpenStack operators and users need. Operators need affinity modeling in >> placement to achieve parity for affinity scheduling with >> multiple cells. That means, affinity scheduling in Nova with multiple >> cells is susceptible to races and does *not* work as well as the >> previous single cell support. Shared storage support is something >> operators have badly needed for years now and was envisioned to be >> solved with placement. >> >> Given all of that, I'm not seeing how *now* is a good time to separate >> the placement project under separate governance with separate goals and >> priorities.
If operators need things for compute, that are well-known >> and that placement was created to solve, how will placement have a >> shared interest in solving compute problems, if it is not part of the >> compute project? >> > > Who are candidates to be members of a review team for the placement > repository after the code is moved out of openstack/nova? > > How many of them are also members of the nova-core team? I assume you pose this question in the proposed situation I described where placement is a repo under compute. I expect the review team to be nova-core as a start with consideration for new additions or removals based on our usual process of discussion and consensus as a group. I expect there to be members of one group who are not members of the other group. But all are members of the compute project and have shared interest in achieving shared goals for operators and users. > What do you think those folks are more interested in working on than the > things you listed as needing to be done to support the nova use cases? I'm not thinking of anything specific here. At a high level, I don't see how separating into two separate groups under separate leadership helps us deliver the listed things for operators and users. I tend to think that a unified group will be more successful at that. > What can they do to reassure you that they will work on the items > nova needs, regardless of the governance structure? If they were separate groups, I don't see why the leadership of placement would necessarily share goals and priorities with compute. I think that is why it's much more difficult to get things done with two separate groups, in general. I want to reiterate again that the only thing I care about here is delivering functionality that operators and users need. vGPUs, in particular, has been highly sought after at a community-wide level, not just from the compute community. 
I want to deliver the features that people are depending on and IMHO, being a unified group helps that. I don't see how being two separate groups helps that. -melanie From melwittt at gmail.com Tue Aug 21 22:33:01 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 21 Aug 2018 15:33:01 -0700 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <5B7C7C3E.8040009@windriver.com> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <5B7C7C3E.8040009@windriver.com> Message-ID: <3dd03e80-19b7-23a8-6ef0-f23749cbe8cf@gmail.com> On Tue, 21 Aug 2018 14:55:26 -0600, Chris Friesen wrote: > On 08/21/2018 01:53 PM, melanie witt wrote: > >> Given all of that, I'm not seeing how *now* is a good time to separate the >> placement project under separate governance with separate goals and priorities. >> If operators need things for compute, that are well-known and that placement was >> created to solve, how will placement have a shared interest in solving compute >> problems, if it is not part of the compute project? > > As someone who is not involved in the governance of nova, this seems like kind > of an odd statement for an open-source project. > > From the outside, it seems like there is a fairly small pool of active > placement developers. And either the placement developers are willing to > implement the capabilities desired by compute or else they're not. And if > they're not, I don't see how being under compute governance would resolve that > since the only official hard leverage the compute governance has is refusing to > review/merge placement patches (which wouldn't really help implement compute's > desires anyways). I'm not sure I follow. 
As of now, placement developers are participating in the same priorities and goals setting as the rest of compute, each cycle. We discuss work that needs to be done and how to prioritize it, in the context of compute. We are one group. If we separate into two different groups, all of the items I discussed in my previous reply will become cross-project efforts. To me, this means that the placement group will have their own priorities and goal setting process and if their priorities and goals happen to align with ours on certain items, we can agree to work on those in collaboration. But I won't make assumptions about how much alignment we will have. The placement group, as a hypothetical example, won't necessarily find helping us fix issues with compute functionality like vGPUs as important as we do, if we need additional work in placement to support it. That's how I'm thinking about it, from a practical standpoint. I'm thinking about what it will look like delivering the functionality I discussed in my previous reply, for operators and users. I think it helps to be one group. -melanie From openstack at fried.cc Tue Aug 21 22:36:18 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 21 Aug 2018 17:36:18 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> Message-ID: <6685287d-9aaf-c703-0bc9-db32a2937ac9@fried.cc> > The reshaper code > is still going through code review, then next we have the integration to > do. To clarify: we're doing the integration in concert with the API side. Right now the API side patches [1][2] are in series underneath the nova side [3]. 
In a placement-in-its-own-repo world, the only difference would have been that these would be separate series with a Depends-On linking them, and would require a placement release. (In fact, with a couple of additional "placement cores", the API side could have been completed faster and we might have landed the whole in Rocky.) In a placement-under-separate-governance world, I contend there would have been *zero* additional difference. Speculating on who the "placement team" would be, the exact same people would have been present at the hangouts and participating in the spec and code reviews. [1] https://review.openstack.org/#/c/576927/ [2] https://review.openstack.org/#/c/585033/ [3] https://review.openstack.org/#/c/584598/ and up > I think going through this > integration would be best done *before* extraction to a new repo. Agree. That could happen this week with some focused reviewing. > I am OK with the idea of doing the extraction first, if that is > what most people want to do. Sweet. To close on this part of the discussion, is there anyone who still objects to doing at least the repository-and-code part of the extraction now? > Affinity modeling and shared storage support are compute features > OpenStack operators and users need. Operators need affinity modeling in > placement to achieve parity for affinity scheduling with > multiple cells. That means, affinity scheduling in Nova with multiple > cells is susceptible to races and does *not* work as well as the > previous single cell support. Sorry, I'm confused - are we talking about NUMA cells or cellsv2 cells? If the latter, what additional placement-side support is needed to support it? > Shared storage support is something > operators have badly needed for years now and was envisioned to be > solved with placement. Again, I'm pretty sure the placement side work for this is done, or very close to it; the remaining work is on the nova side.
But regardless, let's assume both of the above require significant placement work in close coordination with nova for specs, design, implementation, etc. How would separating governance have a negative impact on that? As for reshaper, it would be all the same people in the room. As Doug says: > What do you think those folks are more interested in working on than the > things you listed as needing to be done to support the nova use cases? > > What can they do to reassure you that they will work on the items > nova needs, regardless of the governance structure? More... > If operators need things for compute, that are well-known > and that placement was created to solve, how will placement have a > shared interest in solving compute problems, if it is not part of the > compute project? You answered your own question. If operators need a thing that involves placement and nova, placement and nova have a shared interest in making it happen. s/placement|nova/$openstack_project/. It's what we're about... > separate goals and priorities ...because those priorities should largely overlap and be aligned with OpenStack's goals and priorities, right? > Who are candidates to be members of a review team for the placement > repository after the code is moved out of openstack/nova? > > How many of them are also members of the nova-core team? This brings us to another part of the discussion I think we can close on right now. I don't think I've heard any objections to: "The initial placement-core team should be a superset of the nova-core team." Do we have a consensus on that? (Deferring discussion of who the additional members ought to be. That probably needs its own thread and/or a different audience.) 
-efried From chris at openstack.org Tue Aug 21 22:38:27 2018 From: chris at openstack.org (Chris Hoge) Date: Tue, 21 Aug 2018 15:38:27 -0700 Subject: [openstack-dev] [magnum] K8s Conformance Testing Message-ID: <20521C80-A676-4D09-B56A-3B2A913A5095@openstack.org> As discussed at the Vancouver SIG-K8s and Copenhagen SIG-OpenStack sessions, we're moving forward with obtaining Kubernetes Conformance certification for Magnum. While conformance test jobs aren't reliably running in the gate yet, the requirements of the program make submitting results manually on an infrequent basis something that we can work with while we wait for more stable nested virtualization resources. The OpenStack Foundation has signed the license agreement, and Feilong Wang is preparing an initial conformance run to submit for certification. My thanks to the Magnum team for their impressive work on building out an API for deploying Kubernetes on OpenStack clusters. [1] https://www.cncf.io/certification/software-conformance/ From Kevin.Fox at pnnl.gov Tue Aug 21 22:42:45 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 21 Aug 2018 22:42:45 +0000 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local>, <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C181C68@EX10MBOX03.pnnl.gov> The stuff you are pushing back against are the very same things that other folks are trying to do at a higher level. You want control so you can prioritize the things you feel your users are most interested in.
Folks in other projects have decided the same. Really, where should the priorities come from? You are concerned another project's priorities will trump your own. Legitimate. But have you considered, maybe other priorities, not just Nova's, actually are more important in the grand scheme of OpenStack? What entity in OpenStack is deciding which operators'/users' needs get what priorities? Nova currently thinks it knows what's best. Is it really? I've wanted shared storage for a long long time. But I also have wanted proper secret management, and between the two, I'd much rather have good secret management. Where is that vote in things? How do I even express that? And, to whom? Yes, I realize shared storage was Cinder's priority and Nova is now way behind in implementing it, so it is kind of a priority to get it done. Just using it as an example though in the bigger context. Having operators approach individual projects stating their needs, and then having the individual projects fight it out for priorities isn't a good plan. The priorities should be prioritized at a higher level than projects so the operators'/users' needs can be seen in a global light, not just filtered through each project's views of things. Yes, some folks volunteer to work on the things that they want to work on. That's great. But some folks volunteer to work on priorities to help users/operators in general. Getting clear, unbiased priorities for them is really important. I'm not trying to pick on you here. I truly believe you are trying to do the right thing for your users/operators. And for that, I thank you. But I'm a user/operator too and have had a lot of issues ignored due to this kind of governance issue preventing traction on my own user/operator needs. And I'm sure there are others besides me too. It's not due to malice. But the governance structure we have in place is letting important things slip through the cracks because it has set up walls that make it hard to see the bigger picture.
This email thread has been exposing some of the walls, and that's why we've been talking about them: to try and fix it. Thanks, Kevin ________________________________________ From: melanie witt [melwittt at gmail.com] Sent: Tuesday, August 21, 2018 3:05 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mnaser at vexxhost.com Tue Aug 21 22:44:13 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 21 Aug 2018 18:44:13 -0400 Subject: [openstack-dev] [magnum] K8s Conformance Testing In-Reply-To: <20521C80-A676-4D09-B56A-3B2A913A5095@openstack.org> References: <20521C80-A676-4D09-B56A-3B2A913A5095@openstack.org> Message-ID: Hi Chris, This is an awesome effort. We can provide nested virt resources which are leveraged by Kata at the moment. Thanks! Mohammed Sent from my iPhone > On Aug 21, 2018, at 6:38 PM, Chris Hoge wrote: > As discussed at the Vancouver SIG-K8s and Copenhagen SIG-OpenStack sessions, > we're moving forward with obtaining Kubernetes Conformance certification for > Magnum.
While conformance test jobs aren't reliably running in the gate yet, > the requirements of the program make submitting results manually on an > infrequent basis something that we can work with while we wait for more > stable nested virtualization resources. The OpenStack Foundation has signed > the license agreement, and Feilong Wang is preparing an initial conformance > run to submit for certification. > > My thanks to the Magnum team for their impressive work on building out an > API for deploying Kubernetes on OpenStack clusters. > > [1] https://www.cncf.io/certification/software-conformance/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From chris.friesen at windriver.com Tue Aug 21 22:48:49 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 21 Aug 2018 16:48:49 -0600 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <3dd03e80-19b7-23a8-6ef0-f23749cbe8cf@gmail.com> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <5B7C7C3E.8040009@windriver.com> <3dd03e80-19b7-23a8-6ef0-f23749cbe8cf@gmail.com> Message-ID: <5B7C96D1.1090108@windriver.com> On 08/21/2018 04:33 PM, melanie witt wrote: > If we separate into two different groups, all of the items I discussed in my > previous reply will become cross-project efforts.
To me, this means that the > placement group will have their own priorities and goal setting process and if > their priorities and goals happen to align with ours on certain items, we can > agree to work on those in collaboration. But I won't make assumptions about how > much alignment we will have. The placement group, as a hypothetical example, > won't necessarily find helping us fix issues with compute functionality like > vGPUs as important as we do, if we need additional work in placement to support it. I guess what I'm saying is that even with placement under nova governance, if the placement developers don't want to implement what the nova cores want them to implement there really isn't much that the nova cores can do to force them to implement it. And if the placement developers/cores are on board with what nova wants, I don't see how it makes a difference if placement is a separate project, especially if all existing nova cores are also placement cores to start. (Note that this is in the context of scratch-your-own-itch developers. It would be very different if the PTL could order developers to work on something.) Chris From fungi at yuggoth.org Tue Aug 21 23:10:53 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 21 Aug 2018 23:10:53 +0000 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C181C68@EX10MBOX03.pnnl.gov> References: <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local> <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> <1A3C52DFCD06494D8528644858247BF01C181C68@EX10MBOX03.pnnl.gov> Message-ID: <20180821231052.23jhawema2l5qkg3@yuggoth.org> On 2018-08-21 22:42:45 +0000 (+0000), Fox, Kevin M wrote: [...] 
> Yes, I realize shared storage was Cinder's priority and Nova is now > way behind in implementing it, so it is kind of a priority to get > it done. Just using it as an example though in the bigger context. > > Having operators approach individual projects stating their needs, > and then having the individual projects fight it out for > priorities isn't a good plan. The priorities should be prioritized > at a higher level than projects so the operators'/users' needs can > be seen in a global light, not just filtered through each project's > views of things. > > Yes, some folks volunteer to work on the things that they want to > work on. That's great. But some folks volunteer to work on > priorities to help users/operators in general. Getting clear, > unbiased priorities for them is really important. [...] I remain unconvinced that if someone (the TC, the OSF board, the now defunct product management nee hidden influencers working group) declared for example that secrets management was a higher priority than shared storage, that any significant number of people who could work on the latter would work on the former instead. The TC has enough trouble getting developers in different projects to cooperate on more than a couple of prioritized cross-project goals per cycle. The OSF board has repeatedly shown its members are rarely in the positions within member companies that they have any influence over what upstream features/projects those companies work on. The product management working group thought they had that influence over the companies in which they worked, but were similarly unable to find developers who could make progress toward their identified goals. OpenStack is an extremely complex (arguably too complex) combination of software, for which there are necessarily people with very strong understanding of very specific pieces and at best a loose understanding of the whole.
In contrast, there are few people who have both the background and interest (much less leeway from their employers in the case of paid contributors) to work on any random feature of any random project and be quickly effective at it. If you're familiar with, say, Linux kernel development, you see very much the same sort of specialization with developers who are familiar with specific kernel subsystems and vanishingly few who can efficiently (that is to say without investing lots of time to come up to speed) implement novel features in multiple unrelated subsystems. We'll all continue to work on getting better at this, but hard things are... well... hard. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From melwittt at gmail.com Tue Aug 21 23:13:03 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 21 Aug 2018 16:13:03 -0700 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <6685287d-9aaf-c703-0bc9-db32a2937ac9@fried.cc> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <6685287d-9aaf-c703-0bc9-db32a2937ac9@fried.cc> Message-ID: <1cbc467d-0194-6e22-749b-8b9477b85ce7@gmail.com> On Tue, 21 Aug 2018 17:36:18 -0500, Eric Fried wrote: >> Affinity modeling and shared storage support are compute features >> OpenStack operators and users need. Affinity modeling in >> placement is needed to achieve parity for affinity scheduling with >> multiple cells. That means, affinity scheduling in Nova with multiple >> cells is susceptible to races and does *not* work as well as the >> previous single cell support.
> Sorry, I'm confused - are we talking about NUMA cells or cellsv2 cells? > If the latter, what additional placement-side support is needed to > support it? Cells v2 cells. We were thinking about native affinity modeling in placement for this one because the single cell and legacy case relied on compute calling up to the API database to do one last check about whether affinity policy was violated, once the instance landed on compute, in a race situation. If the check failed, the build was aborted and sent back for rescheduling. With multiple cells and split message queues, compute cannot call up to the API database to do the late-affinity check any longer (cannot reach the API database via message queue). So we are susceptible to affinity policy violations during races with multiple cells and split message queues. If we were able to model affinity in placement, placement could tell us which compute host to place the instance on, satisfying affinity policy and protected from races (via claiming we already do in placement). -melanie From dangtrinhnt at gmail.com Tue Aug 21 23:44:06 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 22 Aug 2018 08:44:06 +0900 Subject: [openstack-dev] [TC][Searchlight] Setting up milestones for Searchlight on Launchpad In-Reply-To: References: <0ff3b148-2e46-02ba-9835-796540e7a6df@openstack.org> Message-ID: Hi Kendall, Thanks for offering the help. Sure, please do that. Let me know if you need anything. Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Wed, Aug 22, 2018 at 3:16 AM Kendall Nelson wrote: > Hello Trinh, > > Since Searchlight is in flux right now, it might be an appropriate time to > consider migrating to StoryBoard from Launchpad. 
Since you are working on > getting organized and figuring out the state of Searchlight, it might make > more sense to do that in the new tool, rather than doing it now in > Launchpad and then again in the future when you migrate to Storyboard down > the road. > > If this sounds like something you want to look into, let me know and I can > do a test migration into our dev environment. If that works out, I expect > we could migrate you before the end of the week. > > -Kendall Nelson (diablo_rojo) > > On Tue, Aug 21, 2018 at 2:21 AM Trinh Nguyen > wrote: > >> Hi Thierry, >> >> I just saw that link. Thanks :) >> >> Because I couldn't contact any of the core members I emailed this list. I >> will update the searchlight-core as planned after I am added. >> >> Thanks for your response, >> >> *Trinh Nguyen *| Founder & Chief Architect >> >> >> >> *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * >> >> >> >> On Tue, Aug 21, 2018 at 6:15 PM Thierry Carrez >> wrote: >> >>> Trinh Nguyen wrote: >>> > In an effort to get Searchlight back on track, I would like to set up >>> > milestones as well as clean up the incomplete bugs, blueprints etc. on >>> > Launchpad [1] I was added to the Searchlight Drivers team but I still >>> > cannot touch the milestone configuration. >>> >>> As a member of the "maintainer" team in Launchpad you should be able to >>> register a series ("stein") and then add milestones to that series. You >>> should see a "Register a series" link under "Series and milestones" at >>> https://launchpad.net/searchlight >>> >>> > In addition, I would like to move forward with unreviewed patches on >>> > Gerrit so I need PTL privileges on Searchlight project. Do I have to >>> > wait for [2] to be merged? >>> >>> For the TC to step in and add you to searchlight-core, yes, we'll have >>> to wait for the merging of that patch.
>>> >>> To go faster, you could ask any of the existing members in that group to >>> directly add you: >>> >>> https://review.openstack.org/#/admin/groups/964,members >>> >>> (NB: this group looks like it should be updated :) ) >>> >>> -- >>> Thierry Carrez (ttx) >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Wed Aug 22 00:06:50 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 22 Aug 2018 09:06:50 +0900 Subject: [openstack-dev] [Searchlight] Reaching out to the Searchlight core members for Stein - Call for team meeting In-Reply-To: <1610b2d5-71cc-a29c-8466-e706c8c344b0@gmail.com> References: <1610b2d5-71cc-a29c-8466-e706c8c344b0@gmail.com> Message-ID: Thanks Matt. *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Tue, Aug 21, 2018 at 10:53 PM Matt Riedemann wrote: > On 8/20/2018 10:10 AM, Trinh Nguyen wrote: > > > > Thanks for your response. What is your IRC handle? > > Kevin_Zheng.
> > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Wed Aug 22 00:17:41 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 22 Aug 2018 00:17:41 +0000 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <20180821231052.23jhawema2l5qkg3@yuggoth.org> References: <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local> <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> <1A3C52DFCD06494D8528644858247BF01C181C68@EX10MBOX03.pnnl.gov>, <20180821231052.23jhawema2l5qkg3@yuggoth.org> Message-ID: <1A3C52DFCD06494D8528644858247BF01C181CFF@EX10MBOX03.pnnl.gov> There have been plenty of cross-project goals set forth by the TC and implemented by the various projects, such as wsgi or python3. Those have been worked on by each of the projects in priority over some project-specific goals by devs interested in bettering OpenStack. Why is it so hard to believe that, if the TC put out a request for a grander user/ops-supporting feature, the community wouldn't step up? PTLs are supposed to be neutral on vendor-specific issues and work for the betterment of the Project. I don't buy the complexity argument either. Other non-OpenStack projects are implementing similar functionality with far less complexity. I attribute a lot of that to differences in governance. Through governance we've made hard things much harder. They can't be fixed until the governance issues are fixed first, I think.
Thanks, Kevin ________________________________________ From: Jeremy Stanley [fungi at yuggoth.org] Sent: Tuesday, August 21, 2018 4:10 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? On 2018-08-21 22:42:45 +0000 (+0000), Fox, Kevin M wrote: [...] > Yes, I realize shared storage was Cinder's priority and Nova's now > way behind in implementing it. So it is kind of a priority to get > it done. Just using it as an example though in the bigger context. > > Having operators approach individual projects stating their needs, > and then having the individual projects fight it out for > priorities isn't a good plan. The priorities should be prioritized > at a higher level than projects so the operators'/users' needs can > be seen in a global light, not just filtered through each project's > views of things. > > Yes, some folks volunteer to work on the things that they want to > work on. That's great. But some folks volunteer to work on > priorities to help users/operators in general. Getting clear, > unbiased priorities for them is really important. [...] I remain unconvinced that if someone (the TC, the OSF board, the now-defunct product management née hidden influencers working group) declared for example that secrets management was a higher priority than shared storage, that any significant number of people who could work on the latter would work on the former instead. The TC has enough trouble getting developers in different projects to cooperate on more than a couple of prioritized cross-project goals per cycle. The OSF board has repeatedly shown its members are rarely in positions within member companies where they have any influence over what upstream features/projects those companies work on.
The product management working group thought they had that influence over the companies in which they worked, but were similarly unable to find developers who could make progress toward their identified goals. OpenStack is an extremely complex (arguably too complex) combination of software, for which there are necessarily people with very strong understanding of very specific pieces and at best a loose understanding of the whole. In contrast, there are few people who have both the background and interest (much less leeway from their employers in the case of paid contributors) to work on any random feature of any random project and be quickly effective at it. If you're familiar with, say, Linux kernel development, you see very much the same sort of specialization with developers who are familiar with specific kernel subsystems and vanishingly few who can efficiently (that is to say without investing lots of time to come up to speed) implement novel features in multiple unrelated subsystems. We'll all continue to work on getting better at this, but hard things are... well... hard. -- Jeremy Stanley From ekcs.openstack at gmail.com Wed Aug 22 03:11:27 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 21 Aug 2018 20:11:27 -0700 Subject: [openstack-dev] [congress] meeting cancelled 8/24 Message-ID: Hi all, I'm not going to be able to make the meeting this week. Let's resume next week =) I'm still available by email. Cheers! From gmann at ghanshyammann.com Wed Aug 22 03:30:51 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 22 Aug 2018 12:30:51 +0900 Subject: [openstack-dev] [release][qa] QA Rocky release status In-Reply-To: <16541f221b2.ae38c79726900.95614780576267481@ghanshyammann.com> References: <16541f221b2.ae38c79726900.95614780576267481@ghanshyammann.com> Message-ID: <1655faf566b.fb5bfef9145166.1104982290743157928@ghanshyammann.com> Hi All, Here is the updated status on QA project releases.
Only Devstack and Grenade are left; both are waiting for the swift release - https://review.openstack.org/#/c/594537/ IN-PROGRESS: 1. devstack: Branch. Patch is pushed to branch for Rocky which is in hold state - IN-PROGRESS [1] 2. grenade: Branch. Patch is pushed to branch for Rocky which is in hold state - IN-PROGRESS [1] COMPLETED (Done or no release required): 3. patrole: Release done, patch is under review[2] - COMPLETED 4. tempest: Release done, patch is under review[3] - COMPLETED 5. bashate: independent release | Branch-less. version 0.6.0 was released last month and no further release required in Rocky cycle. - COMPLETED 6. coverage2sql: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 7. devstack-plugin-ceph: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 8. devstack-plugin-cookiecutter: Branch-less. Not any release yet and no specific release required for Rocky. - COMPLETED 9. devstack-tools: Branch-less. version 0.4.0 is the latest version released and no further release required in Rocky cycle. - COMPLETED 10. devstack-vagrant: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 11. eslint-config-openstack: Branch-less. version 4.0.1 is the latest version released. no further release required in Rocky cycle. - COMPLETED 12. hacking: Branch-less. version 11.1.0 is the latest version released. no further release required in Rocky cycle. - COMPLETED 13. karma-subunit-reporter: Branch-less. version v0.0.4 is the latest version released. no further release required in Rocky cycle. - COMPLETED 14. openstack-health: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 15. os-performance-tools: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 16. os-testr: Branch-less. version 1.0.0 is the latest version released. no further release required in Rocky cycle.
- COMPLETED 17. qa-specs: Spec repo, no release needed. - COMPLETED 18. stackviz: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 19. tempest-plugin-cookiecutter: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 20. tempest-lib: Deprecated repo, No release needed for Rocky - COMPLETED 21. tempest-stress: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED 22. devstack-plugin-container: Branch. Release and Branched done[4] - COMPLETED [1] https://review.openstack.org/#/q/topic:rocky-branch-devstack-grenade+(status:open+OR+status:merged) [2] https://review.openstack.org/#/c/592277/ [3] https://review.openstack.org/#/c/592276/ [4] https://review.openstack.org/#/c/591804/ -gmann ---- On Thu, 16 Aug 2018 17:55:12 +0900 Ghanshyam Mann wrote ---- > Hi All, > > QA has a lot of sub-projects and this mail is to track their release status for Rocky cycle. I will be on vacation from coming Monday for next 2 weeks (visiting India) but will be online to complete the below IN-PROGRESS items and update the status here. > > IN-PROGRESS: > > 1. devstack: Branch. Patch is pushed to branch for Rocky which is in hold state - IN-PROGRESS [1] > > 2. grenade: Branch. Patch is pushed to branch for Rocky which is in hold state - IN-PROGRESS [1] > > 3. patrole: Release done, patch is under review[2] - COMPLETED > > 4. tempest: Release done, patch is under review[3] - COMPLETED > > COMPLETED (Done or no release required): > > 5. bashate: independent release | Branch-less. version 0.6.0 is released last month and no further release required in Rocky cycle. - COMPLETED > > 6. coverage2sql: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED > > 7. devstack-plugin-ceph: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED > > 8. devstack-plugin-cookiecutter: Branch-less.
Not any release yet and no specific release required for Rocky. - COMPLETED > > 9. devstack-tools: Branch-less. version 0.4.0 is the latest version released and no further release required in Rocky cycle. - COMPLETED > > 10. devstack-vagrant: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED > > 11. eslint-config-openstack: Branch-less. version 4.0.1 is the latest version released. no further release required in Rocky cycle. - COMPLETED > > 12. hacking: Branch-less. version 11.1.0 is the latest version released. no further release required in Rocky cycle. - COMPLETED > > 13. karma-subunit-reporter: Branch-less. version v0.0.4 is the latest version released. no further release required in Rocky cycle. - COMPLETED > > 14. openstack-health: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED > > 15. os-performance-tools: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED > > 16. os-testr: Branch-less. version 1.0.0 is the latest version released. no further release required in Rocky cycle. - COMPLETED > > 17. qa-specs: Spec repo, no release needed. - COMPLETED > > 18. stackviz: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED > > 19. tempest-plugin-cookiecutter: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED > > 20. tempest-lib: Deprecated repo, No released needed for Rocky - COMPLETED > > 21. tempest-stress: Branch-less. Not any release yet and no specific release required for Rocky too. - COMPLETED > > 22. devstack-plugin-container: Branch. 
Release and Branched done[4] - COMPLETED > > > [1] https://review.openstack.org/#/q/topic:rocky-branch-devstack-grenade+(status:open+OR+status:merged) > [2] https://review.openstack.org/#/c/592277/ > [3] https://review.openstack.org/#/c/592276/ > [4] https://review.openstack.org/#/c/591804/ > > -gmann > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dharmendra.kushwaha at india.nec.com Wed Aug 22 04:21:22 2018 From: dharmendra.kushwaha at india.nec.com (Dharmendra Kushwaha) Date: Wed, 22 Aug 2018 04:21:22 +0000 Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team Message-ID: Hi Tacker members, To keep our Tacker project growing with new active members, I would like to propose to prune the +2 ability of our former member Kanagaraj Manickam, and propose Cong Phuoc Hoang (IRC: phuoc) to join the tacker core team. Kanagaraj has not been involved for the last couple of cycles. You made great contributions to the Tacker project, like the VNF scaling features, which are a milestone for the project. Thanks for your contribution, and we wish to see you again. Phuoc has been contributing actively to Tacker since the Pike cycle, and he has grown into a key member of this project [1]. He has delivered multiple features in each cycle, in addition to tons of other activities like bug fixes and actively answering bug reports. He is also actively contributing to cross-project work like tosca-parser and heat-translator, which is very helpful for Tacker. Please vote your +1/-1. [1]: http://stackalytics.com/?project_type=openstack&release=all&metric=commits&module=tacker-group&user_id=hoangphuoc Thanks & Regards Dharmendra Kushwaha -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nguyentrihai93 at gmail.com Wed Aug 22 04:23:24 2018 From: nguyentrihai93 at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gVHLDrSBI4bqjaQ==?=) Date: Wed, 22 Aug 2018 13:23:24 +0900 Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team In-Reply-To: References: Message-ID: +1 On Wed, Aug 22, 2018, 1:21 PM Dharmendra Kushwaha < dharmendra.kushwaha at india.nec.com> wrote: > Hi Tacker members, > > > > To keep our Tacker project growing with new active members, I would like > > to propose to prune +2 ability of our farmer member Kanagaraj Manickam, > > and propose Cong Phuoc Hoang (IRC: phuoc) to join the tacker core team. > > > > Kanagaraj is not been involved since last couple of cycle. You had a great > > Contribution in Tacker project like VNF scaling features which are > milestone > > for project. Thanks for your contribution, and wish to see you again. > > > > Phuoc is contributing actively in Tacker from Pike cycle, and > > he has grown into a key member of this project [1]. He delivered multiple > > features in each cycle. Additionally tons of other activities like bug > fixes, > > answering actively on bugs. He is also actively contributing in cross > project > > like tosca-parser and heat-translator which is much helpful for Tacker. > > > > Please vote your +1/-1. > > > > [1]: > http://stackalytics.com/?project_type=openstack&release=all&metric=commits&module=tacker-group&user_id=hoangphuoc > > > > Thanks & Regards > > Dharmendra Kushwaha > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- *Nguyen Tri Hai */ Ph.D. Student ANDA Lab., Soongsil Univ., Seoul, South Korea -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yangun at dcn.ssu.ac.kr Wed Aug 22 04:50:43 2018 From: yangun at dcn.ssu.ac.kr (hyunsikYang) Date: Wed, 22 Aug 2018 13:50:43 +0900 Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team In-Reply-To: References: Message-ID: +1 *Sent with Shift * 2018-08-22 13:23 GMT+09:00 Nguyễn Trí Hải : > +1 > > On Wed, Aug 22, 2018, 1:21 PM Dharmendra Kushwaha < > dharmendra.kushwaha at india.nec.com> wrote: > >> Hi Tacker members, >> >> >> >> To keep our Tacker project growing with new active members, I would like >> >> to propose to prune +2 ability of our farmer member Kanagaraj Manickam, >> >> and propose Cong Phuoc Hoang (IRC: phuoc) to join the tacker core team. >> >> >> >> Kanagaraj is not been involved since last couple of cycle. You had a great >> >> Contribution in Tacker project like VNF scaling features which are >> milestone >> >> for project. Thanks for your contribution, and wish to see you again. >> >> >> >> Phuoc is contributing actively in Tacker from Pike cycle, and >> >> he has grown into a key member of this project [1]. He delivered multiple >> >> features in each cycle. Additionally tons of other activities like bug >> fixes, >> >> answering actively on bugs. He is also actively contributing in cross >> project >> >> like tosca-parser and heat-translator which is much helpful for Tacker. >> >> >> >> Please vote your +1/-1. >> >> >> >> [1]: http://stackalytics.com/?project_type=openstack& >> release=all&metric=commits&module=tacker-group&user_id=hoangphuoc >> >> >> >> Thanks & Regards >> >> Dharmendra Kushwaha >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- > > *Nguyen Tri Hai */ Ph.D. 
Student > > ANDA Lab., Soongsil Univ., Seoul, South Korea > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- *-------------------------------------------------------------------* *Hyunsik Yang (Ph. D candidate)* *DCN laboratory / School of Electronic Engineering* *Soongsil University * *511 Sangdo-dong Dongjak-gu* *Seoul, 156-743 Korea* *TEL : (+82)-2-820-0841 * *M.P: (+82)-10-9005-7439* *Sent with Shift * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gong.yongsheng at 99cloud.net Wed Aug 22 05:21:16 2018 From: gong.yongsheng at 99cloud.net (=?UTF-8?B?6b6a5rC455Sf?=) Date: Wed, 22 Aug 2018 13:21:16 +0800 (GMT+08:00) Subject: [openstack-dev] =?utf-8?q?=5BTacker=5D_Proposing_changes_in_Tacke?= =?utf-8?q?r_Core_Team?= In-Reply-To: Message-ID: +1 -- 龚永生 九州云信息科技有限公司 99CLOUD Co. Ltd. 邮箱(Email):gong.yongsheng at 99cloud.net 地址:北京市海淀区上地三街嘉华大厦B座806 Addr : Room 806, Tower B, Jiahua Building, No. 9 Shangdi 3rd Street, Haidian District, Beijing, China 手机(Mobile):+86-18618199879 公司网址(WebSite):http://99cloud.net 在 2018-08-22 12:21:22,Dharmendra Kushwaha 写道: Hi Tacker members, To keep our Tacker project growing with new active members, I would like to propose to prune +2 ability of our farmer member Kanagaraj Manickam, and propose Cong Phuoc Hoang (IRC: phuoc) to join the tacker core team. Kanagaraj is not been involved since last couple of cycle. You had a great Contribution in Tacker project like VNF scaling features which are milestone for project. Thanks for your contribution, and wish to see you again. Phuoc is contributing actively in Tacker from Pike cycle, and he has grown into a key member of this project [1]. He delivered multiple features in each cycle. 
Additionally tons of other activities like bug fixes, answering actively on bugs. He is also actively contributing in cross project like tosca-parser and heat-translator which is much helpful for Tacker. Please vote your +1/-1. [1]: http://stackalytics.com/?project_type=openstack&release=all&metric=commits&module=tacker-group&user_id=hoangphuoc Thanks & Regards Dharmendra Kushwaha -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyentrihai93 at gmail.com Wed Aug 22 05:42:55 2018 From: nguyentrihai93 at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gVHLDrSBI4bqjaQ==?=) Date: Wed, 22 Aug 2018 14:42:55 +0900 Subject: [openstack-dev] [goals][python3] please check with me before submitting any zuul migration patches In-Reply-To: <1534875835-sup-7809@lrrr.local> References: <1534875835-sup-7809@lrrr.local> Message-ID: Please add yourself to the storyboard so that everyone knows who is working on the project. https://storyboard.openstack.org/#!/story/2002586 On Wed, Aug 22, 2018 at 3:31 AM Doug Hellmann wrote: > We have a few folks eager to join in and contribute to the python3 goal > by helping with the patches to migrate zuul settings. That's great! > However, many of the patches being proposed are incorrect, which means > there is either something wrong with the tool or the way it is used. > > The intent was to have a very small group, 3-4 people, who knew how > the tools worked to propose all of those patches. Having incorrect > patches can break the CI for a project, so we need to be especially > careful with them. We do not want every team writing the patches > for themselves, and we do not want lots and lots of people who we > have to train to use the tools. > > If you are not one of the people already listed as a goal champion > on [1], please PLEASE stop writing patches and get in touch with > me personally and directly (via IRC or email) BEFORE doing any more > work on the goal.
> > Thanks, > Doug > > [1] https://governance.openstack.org/tc/goals/stein/python3-first.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Nguyen Tri Hai / Ph.D. Student ANDA Lab., Soongsil Univ., Seoul, South Korea *[image: http://link.springer.com/chapter/10.1007/978-3-319-26135-5_4] * -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyentrihai93 at gmail.com Wed Aug 22 05:52:24 2018 From: nguyentrihai93 at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gVHLDrSBI4bqjaQ==?=) Date: Wed, 22 Aug 2018 14:52:24 +0900 Subject: [openstack-dev] [goal][python3] dealing with new jobs failing on old branches In-Reply-To: <1534860661-sup-5278@lrrr.local> References: <1534860661-sup-5278@lrrr.local> Message-ID: Hi, I figured out that we can also submit a patch to fix the failing jobs in the old branches, particularly the docs job. However, I still struggle with the failures of the py27 and py35 jobs. On Tue, Aug 21, 2018 at 11:17 PM Doug Hellmann wrote: > Goal champions, > > Most of the jobs in the project-templates do not have branch > specifiers. That allows us to add a job to a repository and then > not realize that it doesn't work on an old branch. We're finding > some of those with this zuul migration (for example, > https://review.openstack.org/#/c/593012/ and > https://review.openstack.org/#/c/593016/). > > To deal with these, we need to remove that job or template from the > repository's settings in the project-config repository, and not include > it in the import patches. > > 1. First we want to wait for the team to land as many of the unaltered > import patches as possible, so those jobs stay on the master branch > and recent stable branches where they work. > > 2.
Then, propose a patch to project-config to remove just the problem jobs > and templates from the repositories where they are a problem. > > 3. Then, rebase the patch that removes all of a team's project settings > on top of the one created in step 2. > > 4. Finally, modify the import patch(es) on the older stable branches > where the jobs fail and remove the jobs or templates that cause > problems. Set those patches to depend on the patch created in > step 2, since they cannot land without the project-config change. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Nguyen Tri Hai / Ph.D. Student ANDA Lab., Soongsil Univ., Seoul, South Korea *[image: http://link.springer.com/chapter/10.1007/978-3-319-26135-5_4] * -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Wed Aug 22 06:05:40 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 22 Aug 2018 15:05:40 +0900 Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team In-Reply-To: References: Message-ID: +1 *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Wed, Aug 22, 2018 at 2:21 PM 龚永生 wrote: > +1 > > > > > -- > *龚永生* > *九州云信息科技有限公司 99CLOUD Co. Ltd.* > 邮箱(Email):gong.yongsheng at 99cloud.net > 地址:北京市海淀区上地三街嘉华大厦B座806 > Addr : Room 806, Tower B, Jiahua Building, No. 
9 Shangdi 3rd Street, > Haidian District, Beijing, China > 手机(Mobile):+86-18618199879 > 公司网址(WebSite):http://99cloud.net > > On 2018-08-22 12:21:22, Dharmendra Kushwaha < > dharmendra.kushwaha at india.nec.com> wrote: > > Hi Tacker members, > > > > To keep our Tacker project growing with new active members, I would like > > to propose pruning the +2 ability of our former member Kanagaraj Manickam, > > and propose Cong Phuoc Hoang (IRC: phuoc) to join the tacker core team. > > > > Kanagaraj has not been involved for the last couple of cycles. You made great > > contributions to the Tacker project, like the VNF scaling features, which are > > milestones for the project. Thanks for your contribution, and we wish to see you again. > > > > Phuoc has been contributing actively to Tacker since the Pike cycle, and > > he has grown into a key member of this project [1]. He delivered multiple > > features in each cycle. Additionally, he has done tons of other work, like bug > > fixes and actively answering bug reports. He is also actively contributing to cross > > projects like tosca-parser and heat-translator, which is very helpful for Tacker. > > > > Please vote +1/-1. > > > > [1]: > http://stackalytics.com/?project_type=openstack&release=all&metric=commits&module=tacker-group&user_id=hoangphuoc > > > > Thanks & Regards > > Dharmendra Kushwaha > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nguyentrihai93 at gmail.com Wed Aug 22 06:11:50 2018 From: nguyentrihai93 at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gVHLDrSBI4bqjaQ==?=) Date: Wed, 22 Aug 2018 15:11:50 +0900 Subject: [openstack-dev] [goal][python3] week 2 update In-Reply-To: References: <1534778701-sup-1930@lrrr.local> <1534796891-sup-969@lrrr.local> <1534797743-sup-4122@lrrr.local> Message-ID: Hi, Other projects, please consider merging the OpenStack Release Bot patches related to the Rocky branch, so that we can propose the patches for the python3-first goal. On Tue, Aug 21, 2018 at 8:40 PM Telles Nobrega wrote: > Thanks. We merged most of them; there is only one that failed the tests, so > I'm rechecking it. > > On Mon, Aug 20, 2018 at 5:43 PM Doug Hellmann > wrote: > >> Excerpts from Doug Hellmann's message of 2018-08-20 16:34:22 -0400: >> > Excerpts from Telles Nobrega's message of 2018-08-20 15:07:29 -0300: >> > > Hi Doug, >> > > >> > > I believe Sahara is ready to have those patches worked on. >> > > >> > > Do we have to do anything specific to get the env ready? >> > >> > Just be ready to do the reviews. I am generating the patches now and >> > will propose them in a little while when the script finishes.
>> >> And here they are: >> >> >> +----------------------------------------------+---------------------------------+-------------------------------------+ >> | Subject | Repo >> | URL | >> >> +----------------------------------------------+---------------------------------+-------------------------------------+ >> | import zuul job settings from project-config | >> openstack/python-saharaclient | https://review.openstack.org/593904 | >> | switch documentation job to new PTI | >> openstack/python-saharaclient | https://review.openstack.org/593905 | >> | add python 3.6 unit test job | >> openstack/python-saharaclient | https://review.openstack.org/593906 | >> | import zuul job settings from project-config | >> openstack/python-saharaclient | https://review.openstack.org/593918 | >> | import zuul job settings from project-config | >> openstack/python-saharaclient | https://review.openstack.org/593923 | >> | import zuul job settings from project-config | >> openstack/python-saharaclient | https://review.openstack.org/593928 | >> | import zuul job settings from project-config | >> openstack/python-saharaclient | https://review.openstack.org/593933 | >> | import zuul job settings from project-config | openstack/sahara >> | https://review.openstack.org/593907 | >> | switch documentation job to new PTI | openstack/sahara >> | https://review.openstack.org/593908 | >> | add python 3.6 unit test job | openstack/sahara >> | https://review.openstack.org/593909 | >> | import zuul job settings from project-config | openstack/sahara >> | https://review.openstack.org/593919 | >> | import zuul job settings from project-config | openstack/sahara >> | https://review.openstack.org/593924 | >> | import zuul job settings from project-config | openstack/sahara >> | https://review.openstack.org/593929 | >> | import zuul job settings from project-config | openstack/sahara >> | https://review.openstack.org/593934 | >> | import zuul job settings from project-config | >> 
openstack/sahara-dashboard | https://review.openstack.org/593910 | >> | switch documentation job to new PTI | >> openstack/sahara-dashboard | https://review.openstack.org/593911 | >> | import zuul job settings from project-config | >> openstack/sahara-dashboard | https://review.openstack.org/593920 | >> | import zuul job settings from project-config | >> openstack/sahara-dashboard | https://review.openstack.org/593925 | >> | import zuul job settings from project-config | >> openstack/sahara-dashboard | https://review.openstack.org/593930 | >> | import zuul job settings from project-config | >> openstack/sahara-dashboard | https://review.openstack.org/593935 | >> | import zuul job settings from project-config | openstack/sahara-extra >> | https://review.openstack.org/593912 | >> | import zuul job settings from project-config | openstack/sahara-extra >> | https://review.openstack.org/593921 | >> | import zuul job settings from project-config | openstack/sahara-extra >> | https://review.openstack.org/593926 | >> | import zuul job settings from project-config | openstack/sahara-extra >> | https://review.openstack.org/593931 | >> | import zuul job settings from project-config | openstack/sahara-extra >> | https://review.openstack.org/593936 | >> | import zuul job settings from project-config | >> openstack/sahara-image-elements | https://review.openstack.org/593913 | >> | import zuul job settings from project-config | >> openstack/sahara-image-elements | https://review.openstack.org/593922 | >> | import zuul job settings from project-config | >> openstack/sahara-image-elements | https://review.openstack.org/593927 | >> | import zuul job settings from project-config | >> openstack/sahara-image-elements | https://review.openstack.org/593932 | >> | import zuul job settings from project-config | >> openstack/sahara-image-elements | https://review.openstack.org/593937 | >> | import zuul job settings from project-config | openstack/sahara-specs >> | 
https://review.openstack.org/593914 | >> | import zuul job settings from project-config | openstack/sahara-tests >> | https://review.openstack.org/593915 | >> | switch documentation job to new PTI | openstack/sahara-tests >> | https://review.openstack.org/593916 | >> | add python 3.6 unit test job | openstack/sahara-tests >> | https://review.openstack.org/593917 | >> >> +----------------------------------------------+---------------------------------+-------------------------------------+ >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- > > TELLES NOBREGA > > SOFTWARE ENGINEER > > Red Hat Brasil > > Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo > > tenobreg at redhat.com > > TRIED. TESTED. TRUSTED. > Red Hat is recognized among the best companies to work for in Brazil > by Great Place to Work. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Nguyen Tri Hai / Ph.D. Student ANDA Lab., Soongsil Univ., Seoul, South Korea -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jo.hiroyuki at lab.ntt.co.jp (Hiroyuki JO) Date: Wed, 22 Aug 2018 15:19:09 +0900 Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team In-Reply-To: References: Message-ID: <010701d439e0$095c2bc0$1c148340$@lab.ntt.co.jp> +1 -- Hiroyuki JO Email: jo.hiroyuki at lab.ntt.co.jp, TEL(direct) : +81-422-59-7394 NTT Network Service Systems Laboratories From: Dharmendra Kushwaha [mailto:dharmendra.kushwaha at india.nec.com] Sent: Wednesday, August 22, 2018 1:21 PM To: openstack-dev Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team Hi Tacker members, To keep our Tacker project growing with new active members, I would like to propose pruning the +2 ability of our former member Kanagaraj Manickam, and propose Cong Phuoc Hoang (IRC: phuoc) to join the tacker core team. Kanagaraj has not been involved for the last couple of cycles. You made great contributions to the Tacker project, like the VNF scaling features, which are milestones for the project. Thanks for your contribution, and we wish to see you again. Phuoc has been contributing actively to Tacker since the Pike cycle, and he has grown into a key member of this project [1]. He delivered multiple features in each cycle. Additionally, he has done tons of other work, like bug fixes and actively answering bug reports. He is also actively contributing to cross projects like tosca-parser and heat-translator, which is very helpful for Tacker. Please vote +1/-1. [1]: http://stackalytics.com/?project_type=openstack&release=all&metric=commits&module=tacker-group&user_id=hoangphuoc Thanks & Regards Dharmendra Kushwaha -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gmann at ghanshyammann.com Wed Aug 22 06:31:28 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 22 Aug 2018 15:31:28 +0900 Subject: [openstack-dev] [qa][patrole][neutron][policy] Neutron Policy Testing in OpenStack Patrole project Message-ID: <1656054b3aa.fedd93b1146065.1300449763532042058@ghanshyammann.com> Hi All, This thread is to request review help from the neutron team for the neutron policy testing in the Patrole project. For folks who are not familiar with Patrole, below is a brief background & description of Patrole: ------------------------- OpenStack Patrole is an official project under the QA umbrella which performs RBAC testing. It has been in development since Ocata and has just released its 0.4.0 version for Rocky[1]. Complete documentation can be found here[2]. #openstack-qa is the IRC channel for Patrole. The main goal of this project is to perform RBAC testing for OpenStack. We will first focus on Nova, Cinder, Keystone, Glance and Neutron in the Patrole repo, and provide a framework/mechanism to extend the testing to other projects via a plugin or some other way (yet to be finalized). Current state : - Good coverage for Nova, Keystone, Cinder, Glance. - Ongoing 1. neutron coverage, framework stability - Next 1. stable release of Patrole, 2. start gating the Patrole testing on the project side. -------------------------- The Patrole team is working on neutron policy testing. As you know, neutron policy is not as simple as other projects', and there is also no user-facing documentation for policy. I was discussing this with amotoki and got to know that he is working on a policy doc or something similar which can be useful for users, so that Patrole can consume it for writing the test cases. Another request QA has for the neutron team is to review the neutron policy test cases.
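For reviewers unfamiliar with the shape of these tests, the heart of an RBAC test case is comparing the status code an API call returns under a restricted role against the set of codes the policy allows. The sketch below is purely illustrative (it is not Patrole's actual API; the function and table names are invented for this example):

```python
# Hypothetical sketch, not Patrole code: map each HTTP method to the
# status codes an *unauthorized* caller may legitimately receive.
EXPECTED_UNAUTHORIZED = {
    "GET": {404},           # hide whether the resource exists at all
    "PUT": {403, 404},
    "DELETE": {403, 404},
}

def rbac_outcome_ok(method, status, authorized):
    """Return True if the observed status matches the RBAC expectation."""
    if authorized:
        # an authorized caller should simply succeed
        return status < 400
    return status in EXPECTED_UNAUTHORIZED.get(method, {403})
```

A test would then perform the API call twice, once with an authorized role and once without, and assert `rbac_outcome_ok` for each observation.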
Here is the complete review list[3] (I cannot get a single gerrit topic linked with the story#), and it would be great if the neutron team could keep an eye on those and provide early feedback on new test cases (their policy name, return code, coverage etc). One example where we need feedback is - https://review.openstack.org/#/c/586739/ Q: What is the return code for a GET API call if policy authorization fails? From the neutron doc [4] (though it is an internal doc, it explains the neutron policy internals), it seems that for GET, PUT and DELETE, resource existence is checked first. If the resource does not exist, then 404 is returned for security purposes, as a 403 would tell an unauthorized user that the resource exists. But for PUT and DELETE, it can be 403 when the resource exists but the user does not have access to the PUT/DELETE operation. I was discussing this with amotoki also and we thought of: - Check 404 for GET. - Check [403, 404] for PUT and DELETE. - Later we will make the checks strict, with 404 and 403 checked separately for PUT and DELETE. Let us know if that is the right way to proceed. [1] https://docs.openstack.org/releasenotes/patrole/v0.4.0.html [2] https://docs.openstack.org/patrole/latest/ [3] https://storyboard.openstack.org/#!/story/2002641 [4] https://docs.openstack.org/neutron/pike/contributor/internals/policy.html#request-authorization -gmann From skaplons at redhat.com Wed Aug 22 08:10:30 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 22 Aug 2018 10:10:30 +0200 Subject: [openstack-dev] [neutron] Old patches cleaning Message-ID: <5F0E976C-25FE-4C68-9423-60DFF173745C@redhat.com> Hi, In Neutron we have many patches without any activity for a long time. To make the list of patches a bit smaller I want to run the script [1] soon. I will run it only for projects like: * neutron * neutron-lib * neutron-tempest-plugin But if you want to run it for your stadium project, that should also be possible after my patch [2] is merged.
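The staleness test such a cleanup script applies can be sketched in a few lines. This is illustrative only — the real logic lives in tools/abandon_old_reviews.sh, which queries Gerrit directly; the timestamp layout follows Gerrit's REST API "updated" field, and the 180-day cutoff here is an assumption, not the script's actual threshold:

```python
from datetime import datetime, timedelta

# Gerrit REST timestamps look like "2018-01-01 12:00:00.000000000";
# slicing to 26 chars truncates nanoseconds to the microseconds %f expects.
GERRIT_TS = "%Y-%m-%d %H:%M:%S.%f"

def is_stale(change, now, max_age_days=180):
    """True if a Gerrit change dict has had no update for max_age_days."""
    updated = datetime.strptime(change["updated"][:26], GERRIT_TS)
    return now - updated > timedelta(days=max_age_days)
```

A driver script would fetch open changes for each project, filter them with a check like this, and abandon the survivors (which owners can always restore).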
If you have any concerns about running this script, please raise your hand now :) If you are the owner of a patch which gets abandoned and you want to continue working on it, you can always restore your patch and continue the work then. [1] https://github.com/openstack/neutron/blob/master/tools/abandon_old_reviews.sh [2] https://review.openstack.org/#/c/594326 — Slawek Kaplonski Senior software engineer Red Hat From gergely.csatari at nokia.com Wed Aug 22 08:21:33 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Wed, 22 Aug 2018 08:21:33 +0000 Subject: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad) In-Reply-To: References: Message-ID: Hi, This is good news. We could even have an hour-long session to discuss ideas about TripleO's place in the edge cloud infrastructure. Would you be open to that? Br, Gerg0 -----Original Message----- From: James Slagle Sent: Tuesday, August 21, 2018 2:42 PM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad) On Tue, Aug 21, 2018 at 2:40 AM Csatari, Gergely (Nokia - HU/Budapest) wrote: > > Hi, > > There was a two-day workshop on edge requirements back in Dublin. The notes are stored here: https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG I think there are some areas there which can be interesting for the squad. > The Edge Computing Group plans to have a day-long discussion in Denver. Maybe we could have a short discussion there about these requirements. Thanks! I've added my name to the etherpad for the PTG and will plan on spending Tuesday with the group.
https://etherpad.openstack.org/p/EdgeComputingGroupPTG4 -- -- James Slagle -- __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From adriant at catalyst.net.nz Wed Aug 22 08:23:29 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Wed, 22 Aug 2018 20:23:29 +1200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 6 August 2018 In-Reply-To: <1533915998.2993501.1470046096.3F011E8B@webmail.messagingengine.com> References: <1533915998.2993501.1470046096.3F011E8B@webmail.messagingengine.com> Message-ID: <672e0eee-1ccc-fc08-74fe-5468e5ee506b@catalyst.net.nz> Bah! I saw this while on holiday and didn't get a chance to respond, sorry for being late to the conversation. On 11/08/18 3:46 AM, Colleen Murphy wrote: > ### Self-Service Keystone > > At the weekly meeting Adam suggested we make self-service keystone a focus point of the PTG[9]. Currently, policy limitations make it difficult for an unprivileged keystone user to get things done or to get information without the help of an administrator. There are some other projects that have been created to act as workflow proxies to mitigate keystone's limitations, such as Adjutant[10] (now an official OpenStack project) and Ksproj[11] (written by Kristi). The question is whether the primitives offered by keystone are sufficient building blocks for these external tools to leverage, or if we should be doing more of this logic within keystone. Certainly improving our RBAC model is going to be a major part of improving the self-service user experience. 
> > [9] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-121 > [10] https://adjutant.readthedocs.io/en/latest/ > [11] https://github.com/CCI-MOC/ksproj As you can probably expect, I'd love to be a part of any of these discussions. The more I can move into logic directly supported in Keystone, the less I need to do in Adjutant. The majority of things though I think I can do reasonably well with the primitives Keystone gives me, and what I can't I tend to try and work with upstream to fill the gaps. System vs project scope helps a lot though, and I look forward to really playing with that. I sadly won't be at the PTG, but will be at the Berlin summit. Plus I have a lot of Adjutant work planned for Stein, a large chunk of which is refactors and reshuffling blueprints and writing up a roadmap, plus some better entry point tasks for new contributors. > ### Standalone Keystone > > Also at the meeting and during office hours, we revived the discussion of what it would take to have a standalone keystone be a useful identity provider for non-OpenStack projects[12][13]. First up we'd need to turn keystone into a fully-fledged SAML IdP, which it's not at the moment (which is a point of confusion in our documentation), or even add support for it to act as an OpenID Connect IdP. This would be relatively easy to do (or at least not impossible). Then the application would have to use keystonemiddleware or its own middleware to route requests to keystone to issue and validate tokens (this is one aspect where we've previously discussed whether JWT could benefit us). Then the question is what should a not-OpenStack application do with keystone's "scoped RBAC"? It would all depend on how the resources of the application are grouped and whether they care about multitenancy in some form. Likely each application would have different needs and it would be difficult to find a one-size-fits-all approach.
We're interested to know whether anyone has a burning use case for something like this. > > [12] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-192 > [13] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-07.log.html#t2018-08-07T17:01:30 This one is interesting because another department at Catalyst is actually looking to use Keystone outside of the scope of OpenStack. They are building a SaaS platform, and they need authn, authz (with some basic RBAC), a service catalog (think API endpoint per software offering), and most of those things are useful outside of OpenStack. They can then use projects to signify a customer, and a project (customer) could have one or more users accessing the management GUIs, with roles giving them some RBAC. A large part of this is because they can then also piggy-back on a lot of work our team has done with OpenStack and Keystone and even reuse some of our projects and tools for billing and other things (Adjutant maybe?). They could use KeystoneAuth for CLI and client tools, and they can build their APIs using Keystonemiddleware. Another reason why this actually interests the Catalyst Cloud team is that we actually use Keystone with an SQL backend for our public cloud, with the db in a multi-region galera cluster. Keystone is our IdP, we don't federate it, and we now have a reasonably passable 2FA option on it, with a better MFA option coming in Stein when I'm done with Auth Receipts. We actually kind of like Keystone for our authn, and because we didn't have any existing users when we first built our cloud, using vanilla Keystone seemed like a sensible solution. We had plans to migrate users and federate, or move to LDAP, but they never materialized because maintaining more systems didn't make sense and wouldn't add many benefits.
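The "project = customer" mapping described above can be modelled in a few lines. This is a toy sketch, not Keystone code — the users, projects and roles are invented sample data — but it shows the (user, project, role) triple that Keystone's scoped tokens hand back, which is all the RBAC the SaaS case needs:

```python
# Toy model of Keystone-style scoped role assignments; sample data only.
ASSIGNMENTS = {
    # (user, project) -> set of roles held on that project (customer)
    ("alice", "customer-a"): {"admin"},
    ("bob", "customer-a"): {"reader"},
}

def allowed(user, project, required_role):
    """Check whether a user holds a given role on a project (customer)."""
    return required_role in ASSIGNMENTS.get((user, project), set())
```

A non-OpenStack service would get the same triple by validating a project-scoped token, then apply exactly this kind of lookup per request.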
Making Keystone a fully-fledged IdP with SAML and OpenID support would be fantastic because we could then build a tiny single sign-on around Keystone and use that for all our non-OpenStack services. In fact I had a prototype side project planned which would be a tiny Flask or Django app that would act as a single sign-on for Keystone. It would have a login form that handles the new MFA process with auth receipts in Keystone, and on getting the token it would wrap that into an OpenID token which other systems could interpret, with the appropriate APIs for acting as a provider, most of those just doing user actions with that token in Keystone. In theory I could have made it a tiny, entirely ephemeral app which only needs to know where keystone is (no admin creds). Basically a tiny IdP around Keystone. But if Keystone goes down the path of supporting SAML and OpenID then all we really need is a login GUI that supports auth receipts (and plugin support for different types of MFA to match the ones in Keystone), which probably still should be a tiny side project rather than views in Keystone (should Keystone really serve HTML?), or requiring Horizon (Horizon could use it as an SSO). I would love to help with something like this if we do go down that path. :) From hguemar at fedoraproject.org Wed Aug 22 08:35:06 2018 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 22 Aug 2018 10:35:06 +0200 Subject: [openstack-dev] Redis licensing terms changes Message-ID: Hi, I haven't seen this mentioned, but I'd like to point out that Redis has moved to an open core licensing model. https://redislabs.com/community/commons-clause/ In short: * base engine remains under BSD license * modules move to ASL 2.0 + commons clause, which is non-free (prohibits sales of derived products) IMHO, projects that rely on Redis as their default driver should consider alternatives (of course, it's up to them). Regards, H.
From longkb at vn.fujitsu.com Wed Aug 22 08:44:59 2018 From: longkb at vn.fujitsu.com (Kim Bao, Long) Date: Wed, 22 Aug 2018 08:44:59 +0000 Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team In-Reply-To: References: Message-ID: +1 from me. First of all, I would like to thank Phuoc for his contributions to Tacker. As far as I know, Phuoc has been in the Tacker project for only a year, but his contribution to Tacker is highly appreciated. Besides, he is also one of the active members on IRC, gerrit and bug reports. I hope he can help Tacker keep growing in his new role. LongKB From: Dharmendra Kushwaha [mailto:dharmendra.kushwaha at india.nec.com] Sent: Wednesday, August 22, 2018 11:21 AM To: openstack-dev Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team Hi Tacker members, To keep our Tacker project growing with new active members, I would like to propose pruning the +2 ability of our former member Kanagaraj Manickam, and propose Cong Phuoc Hoang (IRC: phuoc) to join the tacker core team. Kanagaraj has not been involved for the last couple of cycles. You made great contributions to the Tacker project, like the VNF scaling features, which are milestones for the project. Thanks for your contribution, and we wish to see you again. Phuoc has been contributing actively to Tacker since the Pike cycle, and he has grown into a key member of this project [1]. He delivered multiple features in each cycle. Additionally, he has done tons of other work, like bug fixes and actively answering bug reports. He is also actively contributing to cross projects like tosca-parser and heat-translator, which is very helpful for Tacker. Please vote +1/-1. [1]: http://stackalytics.com/?project_type=openstack&release=all&metric=commits&module=tacker-group&user_id=hoangphuoc Thanks & Regards Dharmendra Kushwaha -------------- next part -------------- An HTML attachment was scrubbed...
URL: From thierry at openstack.org Wed Aug 22 08:45:20 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 22 Aug 2018 10:45:20 +0200 Subject: [openstack-dev] Redis licensing terms changes In-Reply-To: References: Message-ID: <9182e211-b26e-ec0e-4b08-9bc53e0c82eb@openstack.org> Haïkel wrote: > I haven't seen this mentioned, but I'd like to point out that Redis has moved to an open > core licensing model. > https://redislabs.com/community/commons-clause/ > > In short: > * base engine remains under BSD license > * modules move to ASL 2.0 + commons clause, which is non-free > (prohibits sales of derived products) Beyond the sale of a derived product, it prohibits selling hosting of, or providing consulting services on, anything that depends on it... so it's pretty broad. > IMHO, projects that rely on Redis as their default driver should consider > alternatives (of course, it's up to them). The TC stated in the past that default drivers had to be open source, so if anything depends on commons-claused Redis modules, they would probably have to find an alternative... Which OpenStack components are affected ? -- Thierry Carrez (ttx) From dangtrinhnt at gmail.com Wed Aug 22 08:57:45 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 22 Aug 2018 17:57:45 +0900 Subject: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein Message-ID: Dear team, Here is my proposed action plan for Searchlight in Stein. The ultimate goal is to revive Searchlight with a sustainable number of contributors so that it can release as expected. 1. Migrate Searchlight to Storyboard with the help of Kendall 2. Attract more contributors (as well as cores) 3. Clean up docs, notes 4. Review and clean up patches [1] [2] [3] [4] 5. Set up goals/features for Stein. We will need to have a virtual PTG since I cannot attend the in-person one (September 10-14, 2018, Denver) this time.
This is our Etherpad for Stein, please feel free to contribute from now on until the PTG: https://review.openstack.org/#/q/project:openstack/searchlight+status:open [1] https://review.openstack.org/#/q/project:openstack/searchlight+status:open [2] https://review.openstack.org/#/q/project:openstack/searchlight-ui+status:open [3] https://review.openstack.org/#/q/project:openstack/python-searchlightclient+status:open [4] https://review.openstack.org/#/q/project:openstack/searchlight-specs+status:open If you have any idea or want to contribute, please ping me on IRC: - IRC Channel: #openstack-searchlight - My IRC handle: dangtrinhnt Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Wed Aug 22 09:46:20 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 22 Aug 2018 11:46:20 +0200 Subject: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: References: Message-ID: <20180822094620.kncry4ufbe6fwi5u@localhost> On 20/08, Matthew Booth wrote: > For those who aren't familiar with it, nova's volume-update (also > called swap volume by nova devs) is the nova part of the > implementation of cinder's live migration (also called retype). > Volume-update is essentially an internal cinder<->nova api, but as > that's not a thing it's also unfortunately exposed to users. Some > users have found it and are using it, but because it's essentially an > internal cinder<->nova api it breaks pretty easily if you don't treat > it like a special snowflake. It looks like we've finally found a way > it's broken for non-cinder callers that we can't fix, even with a > dirty hack. > > volume-update essentially does a live copy of the > data from the old volume to the new volume, then seamlessly swaps the > attachment from the old volume to the new one.
The guest OS on > the instance will not notice anything at all as the hypervisor swaps the storage > backing an attached volume underneath it. > > When called by cinder, as intended, cinder does some post-operation > cleanup such that the old volume is deleted and the new volume inherits the same > volume_id; that is, the new volume effectively becomes the old one. When called any > other way, however, this cleanup doesn't happen, which breaks a bunch > of assumptions. One of these is that a disk's serial number is the > same as the attached volume_id. Disk serial number, in KVM at least, > is immutable, so can't be updated during volume-update. This is fine > if we were called via cinder, because the cinder cleanup means the > volume_id stays the same. If called any other way, however, they no > longer match, at least until a hard reboot when it will be reset to > the new volume_id. It turns out this breaks live migration, but > probably other things too. We can't think of a workaround. > > I wondered why users would want to do this anyway. It turns out that > sometimes cinder won't let you migrate a volume, but nova > volume-update doesn't do those checks (as they're specific to cinder > internals, none of nova's business, and duplicating them would be > fragile, so we're not adding them!). Specifically we know that cinder > won't let you migrate a volume with snapshots. There may be other > reasons. If cinder won't let you migrate your volume, you can still > move your data by using nova's volume-update, even though you'll end > up with a new volume on the destination, and a slightly broken > instance. Apparently the former is a trade-off worth making, but the > latter has been reported as a bug. > Hi Matt, As you know, I'm in favor of making this REST API call only authorized for Cinder to avoid messing up the cloud.
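The breakage Matt describes can be modelled in a toy sketch (illustrative only, not nova's actual data model): the serial is fixed at attach time, so only the cinder-driven flow — where the new volume inherits the old volume_id — leaves serial and volume_id in sync:

```python
class ToyAttachment:
    """Toy model of a volume attachment; not nova code."""

    def __init__(self, volume_id):
        self.volume_id = volume_id
        self.serial = volume_id  # set by the hypervisor at attach time

    def swap_volume(self, new_volume_id, via_cinder):
        if not via_cinder:
            # no cinder cleanup: the attachment now points at the new id
            self.volume_id = new_volume_id
        # via cinder, the new volume inherits the old volume_id, so the
        # attachment's volume_id is unchanged; either way self.serial
        # stays fixed until a hard reboot

    def consistent(self):
        """The assumption live migration relies on."""
        return self.serial == self.volume_id
```

Swapping via cinder keeps `consistent()` true; a direct user-driven swap leaves serial pointing at the old id, which is exactly the mismatch that breaks live migration.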
I know you wanted Cinder to have a solution to do live migrations of volumes with snapshots, and while this is not possible to do in a reasonable fashion, I kept thinking about it given your strong feelings to provide a solution for users that really need this, and I think we may have a "reasonable" compromise. The solution is conceptually simple. We add a new API microversion in Cinder that adds an optional parameter called "generic_keep_source" (defaults to False) to both migrate and retype operations. This means that if the driver-optimized migration cannot do the migration and the generic migration code is the one doing the migration, then, instead of our final step being to swap the volume IDs and delete the source volume, what we would do is swap the volume IDs and move all the snapshots to reference the new volume. Then we would create a user message with the new ID of the volume. This way we can preserve the old volume with all its snapshots and do the live migration. The implementation is a little bit tricky, as we'll have to add a new "update_migrated_volume" mechanism to support the renaming of both volumes, since the old one wouldn't work with this among other things, but it's doable. Unfortunately I don't have the time right now to work on this... Cheers, Gorka. > I'd like to make it very clear that nova's volume-update isn't > expected to work correctly except when called by cinder. Specifically > there was a proposal that we disable volume-update from non-cinder > callers in some way, possibly by asserting volume state that can only > be set by cinder. However, I'm also very aware that users are calling > volume-update because it fills a need, and we don't want to trap data > that wasn't previously trapped. > > Firstly, is anybody aware of any other reasons to use nova's > volume-update directly? > > Secondly, is there any reason why we shouldn't just document that you > have to delete snapshots before doing a volume migration?
Hopefully > some cinder folks or operators can chime in to let me know how to back > them up or somehow make them independent before doing this, at which > point the volume itself should be migratable? > > If we can establish that there's an acceptable alternative to calling > volume-update directly for all use-cases we're aware of, I'm going to > propose heading off this class of bug by disabling it for non-cinder > callers. > > Matt > -- > Matthew Booth > Red Hat OpenStack Engineer, Compute DFG > > Phone: +442070094448 (UK) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From thierry at openstack.org Wed Aug 22 09:57:08 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 22 Aug 2018 11:57:08 +0200 Subject: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein In-Reply-To: References: Message-ID: <2ed70e89-1e8f-2674-7988-ba869b82d6ca@openstack.org> Trinh Nguyen wrote: > Here is my proposed action plan for Searchlight in Stein. The ultimate > goal is to revive Searchlight with a sustainable number of contributors > and can release as expected. > [...] Thanks again for stepping up, and communicating so clearly. 
-- Thierry Carrez (ttx) From mbooth at redhat.com Wed Aug 22 10:27:43 2018 From: mbooth at redhat.com (Matthew Booth) Date: Wed, 22 Aug 2018 11:27:43 +0100 Subject: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: <20180822094620.kncry4ufbe6fwi5u@localhost> References: <20180822094620.kncry4ufbe6fwi5u@localhost> Message-ID: On Wed, 22 Aug 2018 at 10:47, Gorka Eguileor wrote: > > On 20/08, Matthew Booth wrote: > > For those who aren't familiar with it, nova's volume-update (also > > called swap volume by nova devs) is the nova part of the > > implementation of cinder's live migration (also called retype). > > Volume-update is essentially an internal cinder<->nova api, but as > > that's not a thing it's also unfortunately exposed to users. Some > > users have found it and are using it, but because it's essentially an > > internal cinder<->nova api it breaks pretty easily if you don't treat > > it like a special snowflake. It looks like we've finally found a way > > it's broken for non-cinder callers that we can't fix, even with a > > dirty hack. > > > > volume-update essentially does a live copy of the > > data on volume <source> to volume <dest>, then seamlessly swaps the > > attachment to <instance> from <source> to <dest>. The guest OS on <instance> > > will not notice anything at all as the hypervisor swaps the storage > > backing an attached volume underneath it. > > > > When called by cinder, as intended, cinder does some post-operation > > cleanup such that <source> is deleted and <dest> inherits the same > > volume_id; that is, <dest> effectively becomes <source>. When called any > > other way, however, this cleanup doesn't happen, which breaks a bunch > > of assumptions. One of these is that a disk's serial number is the > > same as the attached volume_id. Disk serial number, in KVM at least, > > is immutable, so can't be updated during volume-update. This is fine > > if we were called via cinder, because the cinder cleanup means the > > volume_id stays the same. 
If called any other way, however, they no > > longer match, at least until a hard reboot when it will be reset to > > the new volume_id. It turns out this breaks live migration, but > > probably other things too. We can't think of a workaround. > > > > I wondered why users would want to do this anyway. It turns out that > > sometimes cinder won't let you migrate a volume, but nova > > volume-update doesn't do those checks (as they're specific to cinder > > internals, none of nova's business, and duplicating them would be > > fragile, so we're not adding them!). Specifically we know that cinder > > won't let you migrate a volume with snapshots. There may be other > > reasons. If cinder won't let you migrate your volume, you can still > > move your data by using nova's volume-update, even though you'll end > > up with a new volume on the destination, and a slightly broken > > instance. Apparently the former is a trade-off worth making, but the > > latter has been reported as a bug. > > > > Hi Matt, > > As you know, I'm in favor of making this REST API call only authorized > for Cinder to avoid messing the cloud. > > I know you wanted Cinder to have a solution to do live migrations of > volumes with snapshots, and while this is not possible to do in a > reasonable fashion, I kept thinking about it given your strong feelings > to provide a solution for users that really need this, and I think we > may have a "reasonable" compromise. > > The solution is conceptually simple. We add a new API microversion in > Cinder that adds and optional parameter called "generic_keep_source" > (defaults to False) to both migrate and retype operations. 
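Gorka's proposed finalize step could look roughly like the toy sketch below. "generic_keep_source" is only a proposed parameter (it does not exist in cinder today), and the dict-based data model and function name are invented for illustration:

```python
# Sketch of the *proposed* generic-migration finalize step with
# "generic_keep_source". All names and the data model are hypothetical.

def finalize_migration(source, dest, snapshots, keep_source=False):
    """Swap volume ids; optionally keep the source and re-point snapshots."""
    # Final step of generic migration: the destination inherits the
    # source's original volume id.
    source["id"], dest["id"] = dest["id"], source["id"]
    if keep_source:
        # Instead of deleting the source, move all its snapshots to
        # reference it under its new id, and tell the user that id.
        for snap in snapshots:
            snap["volume_id"] = source["id"]
        return {"user_message": "source volume kept as %s" % source["id"]}
    source["deleted"] = True
    return {}

src = {"id": "vol-a", "deleted": False}
dst = {"id": "vol-b", "deleted": False}
snaps = [{"id": "snap-1", "volume_id": "vol-a"}]

msg = finalize_migration(src, dst, snaps, keep_source=True)
assert dst["id"] == "vol-a"              # destination inherits original id
assert snaps[0]["volume_id"] == "vol-b"  # snapshots follow the kept source
assert not src["deleted"]
```

The real implementation would of course go through the database and the "update_migrated_volume" driver mechanism mentioned above; this only shows the id/snapshot bookkeeping.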
> > This means that if the driver optimized migration cannot do the > migration and the generic migration code is the one doing the migration, > then, instead of our final step being to swap the volume id's and > deleting the source volume, what we would do is to swap the volume id's > and move all the snapshots to reference the new volume. Then we would > create a user message with the new ID of the volume. > > This way we can preserve the old volume with all its snapshots and do > the live migration. > > The implementation is a little bit tricky, as we'll have to add a new > "update_migrated_volume" mechanism to support the renaming of both > volumes, since the old one wouldn't work with this among other things, > but it's doable. > > Unfortunately I don't have the time right now to work on this... Sounds promising, and honestly more than I'd have hoped for. Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From balazs.gibizer at ericsson.com Wed Aug 22 10:42:34 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 22 Aug 2018 12:42:34 +0200 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: <1534934554.7552.3@smtp.office365.com> On Sat, Aug 18, 2018 at 2:25 PM, Chris Dent wrote: > > So my hope is that (in no particular order) Jay Pipes, Eric Fried, > Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, > Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to > placement whom I'm forgetting [1] would express their preference on > what they'd like to see happen. +1 for separate git repository +1 for initializing the placement-core with nova-core team +1 for talking separately about including more cores to the placement-core I'm for taking incremental steps. 
So if the git repo separation can be done independently of the project separation, then why not do first the step we seem to be agreeing on. I think allowing the placement-core team to diverge from the nova-core team will help in many ways to talk about the project separation: * more core reviewers for placement -> more review bandwidth for placement -> less review load on nova-cores for placement code -> more time for nova-cores to propose solutions for the remaining big nova-induced placement changes (affinity, quota) and implement support in nova for existing placement features (consumer gen, nested RP, granular resource request) * possibility to include reviewers in the placement core team (over time) with other, placement-using module background (cinder, neutron, cyborg, etc.) -> fresh viewpoints about the direction of placement from placement API consumers * a diverse core team will allow us to test the water about feature prioritization conflicts, if any. I'm not against making two steps at the same time and doing the project separation _if_ there is some level of consensus amongst the interested parties. But based on this long mail thread we don't have that yet. So I suggest to do only the repo and core team change now and spend time gathering experience with the evolved placement-core team. Cheers, gibi From edmondsw at us.ibm.com Wed Aug 22 11:42:43 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Wed, 22 Aug 2018 07:42:43 -0400 Subject: [openstack-dev] [goal][python3] week 2 update In-Reply-To: <1534778701-sup-1930@lrrr.local> References: <1534778701-sup-1930@lrrr.local> Message-ID: Doug Hellmann wrote on 08/20/2018 11:27:09 AM: > If your team is ready to have your zuul settings migrated, please > let us know by following up to this email. We will start with the > volunteers, and then work our way through the other teams. I think PowerVMStackers is ready (so nova-powervm, networking-powervm, ceilometer-powervm). 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Aug 22 12:49:23 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 22 Aug 2018 07:49:23 -0500 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 6 August 2018 In-Reply-To: <672e0eee-1ccc-fc08-74fe-5468e5ee506b@catalyst.net.nz> References: <1533915998.2993501.1470046096.3F011E8B@webmail.messagingengine.com> <672e0eee-1ccc-fc08-74fe-5468e5ee506b@catalyst.net.nz> Message-ID: <1ce4287a-e7b5-7640-855e-0207946bba0d@gmail.com> On 08/22/2018 03:23 AM, Adrian Turjak wrote: > Bah! I saw this while on holiday and didn't get a chance to respond, > sorry for being late to the conversation. > > On 11/08/18 3:46 AM, Colleen Murphy wrote: >> ### Self-Service Keystone >> >> At the weekly meeting Adam suggested we make self-service keystone a focus point of the PTG[9]. Currently, policy limitations make it difficult for an unprivileged keystone user to get things done or to get information without the help of an administrator. There are some other projects that have been created to act as workflow proxies to mitigate keystone's limitations, such as Adjutant[10] (now an official OpenStack project) and Ksproj[11] (written by Kristi). The question is whether the primitives offered by keystone are sufficient building blocks for these external tools to leverage, or if we should be doing more of this logic within keystone. Certainly improving our RBAC model is going to be a major part of improving the self-service user experience. >> >> [9] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-121 >> [10] https://adjutant.readthedocs.io/en/latest/ >> [11] https://github.com/CCI-MOC/ksproj > As you can probably expect, I'd love to be a part of any of these > discussions. Anything I can nicely move to being logic directly > supported in Keystone, the less I need to do in Adjutant. 
The majority > of things though I think I can do reasonably well with the primitives > Keystone gives me, and what I can't I tend to try and work with upstream > to fill the gaps. > > System vs project scope helps a lot though, and I look forward to really > playing with that. Since it made sense to queue incorporating system scope after the flask work, I just started working with that on the credentials API*. There is a WIP series up for review that attempts to do a couple things [0]. First it tries to incorporate system and project scope checking into the API. Second it tries to be more explicit about protection test cases, which I think is going to be important since we're adding another scope type. We also support three different roles now and it would be nice to clearly see who can do what in each case with tests. I'd be curious to get your feedback here if you have any. * Because the credentials API was already moved to flask and has room for self-service improvements [1] [0] https://review.openstack.org/#/c/594547/ [1] https://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/credential.py#n21 > > I sadly won't be at the PTG, but will be at the Berlin summit. Plus I > have a lot of Adjutant work planned for Stein, a large chunk of which is > refactors and reshuffling blueprints and writing up a roadmap, plus some > better entry point tasks for new contributors. > >> ### Standalone Keystone >> >> Also at the meeting and during office hours, we revived the discussion of what it would take to have a standalone keystone be a useful identity provider for non-OpenStack projects[12][13]. First up we'd need to turn keystone into a fully-fledged SAML IdP, which it's not at the moment (which is a point of confusion in our documentation), or even add support for it to act as an OpenID Connect IdP. This would be relatively easy to do (or at least not impossible). 
Then the application would have to use keystonemiddleware or its own middleware to route requests to keystone to issue and validate tokens (this is one aspect where we've previously discussed whether JWT could benefit us). Then the question is what should a not-OpenStack application do with keystone's "scoped RBAC"? It would all depend on how the resources of the application are grouped and whether they care about multitenancy in some form. Likely each application would have different needs and it would be difficult to find a one-size-fits-all approach. We're interested to know whether anyone has a burning use case for something like this. >> >> [12] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-192 >> [13] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-07.log.html#t2018-08-07T17:01:30 > This one is interesting because another department at Catalyst is > actually looking to use Keystone outside of the scope of OpenStack. They > are building a SaaS platform, and they need authn, authz (with some > basic RBAC), a service catalog (think API endpoint per software > offering), and most of those things are useful outside of OpenStack. > They can then use projects to signify a customer, and a project > (customer) could have one or more users accessing the management GUIs, > with roles giving them some RBAC. A large part of this is because they > can then also piggy back on a lot of work our team has done with > OpenStack and Keystone and even reuse some of our projects and tools for > billing and other things (Adjutant maybe?). They could use KeystoneAuth > for CLI and client tools, they can build their APIs using > Keystonemiddleware. > > > Then another reason why this actually interests the Catalyst Cloud team > is because we actually use Keystone with an SQL backend for our public > cloud, with the db in a multi-region galera cluster. 
Keystone is our > Idp, we don't federate it, and we now have a reasonably passable 2FA > option on it, with a better MFA option coming in Stein when I'm done > with Auth Receipts. We actually kind of like Keystone for our authn, and > because we didn't have any existing users when we first built our cloud, > using vanilla Keystone seemed like a sensible solution. We had plans > to migrate users and federate, or move to LDAP, but they never > materialized because maintaining more systems didn't make sense and didn't > add many useful benefits. Making Keystone a fully fledged Idp with SAML > and OpenID support would be fantastic because we could then build a tiny > single sign on around Keystone and use that for all our non-openstack > services. > > In fact I had a prototype side project planned which would be a tiny > Flask or Django app that would act as a single sign on for Keystone. It > would have a login form that handles the new MFA process with auth > receipts in Keystone, and on getting the token it would wrap that into > an OpenID token which other systems could interpret. With the > appropriate APIs for acting as a provider and most of those just doing > user actions with that token in Keystone. In theory I could have made it > a tiny entirely ephemeral app which only needs to know where keystone is > (no admin creds). Basically a tiny Idp around Keystone. > > But if Keystone goes down the path of supporting SAML and OpenID then > all we really need is a login GUI that supports auth receipts (and > plugin support for different types of MFA to match ones in Keystone), > which probably still should be a tiny side project rather than views in > Keystone (should Keystone really serve HTML?), or requiring Horizon > (Horizon could use it as a SSO). I would love to help with something > like this if we do go down that path. 
:) > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From balazs.gibizer at ericsson.com Wed Aug 22 12:55:27 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 22 Aug 2018 14:55:27 +0200 Subject: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict In-Reply-To: <7b45da6c-c8d3-c54f-89c0-9798589dfdc4@fried.cc> References: <1534419109.24276.3@smtp.office365.com> <1534419803.3149.0@smtp.office365.com> <1534500637.29318.1@smtp.office365.com> <7b45da6c-c8d3-c54f-89c0-9798589dfdc4@fried.cc> Message-ID: <1534942527.7552.8@smtp.office365.com> On Fri, Aug 17, 2018 at 5:40 PM, Eric Fried wrote: > gibi- > >>> - On migration, when we transfer the allocations in either >>> direction, a >>> conflict means someone managed to resize (or otherwise change >>> allocations?) since the last time we pulled data. Given the global >>> lock >>> in the report client, this should have been tough to do. If it does >>> happen, I would think any retry would need to be done all the way >>> back >>> at the claim, which I imagine is higher up than we should go. So >>> again, >>> I think we should fail the migration and make the user retry. >> >> Do we want to fail the whole migration or just the migration step >> (e.g. >> confirm, revert)? >> The later means that failure during confirm or revert would put the >> instance back to VERIFY_RESIZE. While the former would mean that in >> case >> of conflict at confirm we try an automatic revert. 
But for a >> conflict at >> revert we can only put the instance to ERROR state. > This again should be "impossible" to come across. What would the > behavior be if we hit, say, ValueError in this spot? I might not totally follow you. I see two options to choose from for the revert case: a) An allocation manipulation error during revert of a migration causes the instance to go to ERROR -> the end user cannot retry the revert; the instance needs to be deleted. b) An allocation manipulation error during revert of a migration causes the instance to go back to the VERIFY_RESIZE state -> the end user can retry the revert via the API. I see three options to choose from for the confirm case: a) An allocation manipulation error during confirm of a migration causes the instance to go to ERROR -> the end user cannot retry the confirm; the instance needs to be deleted. b) An allocation manipulation error during confirm of a migration causes the instance to go back to the VERIFY_RESIZE state -> the end user can retry the confirm via the API. c) An allocation manipulation error during confirm of a migration causes nova to automatically try to revert the migration. (For a failure during this revert, the same options are available as for the generic revert case, see above.) We also need to consider live migration. It is similar in the sense that it also uses move_allocations. But it is different in that the end user doesn't explicitly confirm or revert a live migration. I'm looking for opinions about which option we should take in each case. 
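For context, the consumer-generation check being discussed works like optimistic concurrency control: a write to a consumer's allocations must carry the generation the caller last read, and a mismatch is rejected as a conflict. The toy model below illustrates that mechanism and one possible retry policy; the class names and the retry policy are invented for illustration, not nova or placement code:

```python
# Toy model of placement's consumer-generation conflict handling.
# FakePlacement, Conflict and move_allocations are hypothetical names.

class Conflict(Exception):
    pass

class FakePlacement:
    def __init__(self):
        self.allocations = {}  # consumer -> (allocations, generation)

    def put(self, consumer, alloc, generation):
        """Accept a write only if it carries the current generation."""
        current = self.allocations.get(consumer, (None, 0))[1]
        if generation != current:
            raise Conflict("consumer generation conflict")
        self.allocations[consumer] = (alloc, current + 1)

def move_allocations(placement, consumer, alloc, generation, retries=1):
    """One possible policy: re-read the generation and retry, then give up."""
    for _ in range(retries + 1):
        try:
            placement.put(consumer, alloc, generation)
            return True
        except Conflict:
            # Someone else changed the allocations; re-read and try again.
            generation = placement.allocations.get(consumer, (None, 0))[1]
    return False

p = FakePlacement()
p.put("inst-1", {"rp": 10}, 0)                       # generation becomes 1
assert move_allocations(p, "inst-1", {"rp": 20}, 0)  # stale gen, retried OK
```

Whether a real conflict should be retried at all, or surfaced as ERROR / VERIFY_RESIZE as in the options above, is exactly the open question in this thread; the sketch only shows the mechanics.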
gibi > > -efried > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From opensrloo at gmail.com Wed Aug 22 13:11:13 2018 From: opensrloo at gmail.com (Ruby Loo) Date: Wed, 22 Aug 2018 09:11:13 -0400 Subject: [openstack-dev] Stepping down from Ironic core In-Reply-To: References: Message-ID: Hi John, So sorry to hear this but totally understandable! Thanks for letting us know and for everything you've done! Enjoy life without ironic :) --ruby On Tue, Aug 21, 2018 at 10:39 AM John Villalovos wrote: > Good morning Ironic, > > I have come to realize that I don't have the time needed to be able to > devote the attention needed to continue as an Ironic core. > > I'm hopeful that in the future I will work on Ironic or OpenStack again! :) > > The Ironic (and OpenStack) community has been a great one and I have > really enjoyed my time working on it and working with all the people. I > will still be hanging around on IRC and you may see me submitting a patch > here and there too :) > > Thanks again, > John > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtomasek at redhat.com Wed Aug 22 13:26:40 2018 From: jtomasek at redhat.com (Jiri Tomasek) Date: Wed, 22 Aug 2018 15:26:40 +0200 Subject: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad) In-Reply-To: References: Message-ID: Hi, thanks for a write up James. I am adding a few notes/ideas inline... 
On Mon, Aug 20, 2018 at 10:48 PM James Slagle wrote: > As we start looking at how TripleO will address next generation deployment > needs such as Edge, multi-site, and multi-cloud, I'd like to kick off a > discussion around how TripleO can evolve and adapt to meet these new > challenges. > > What are these challenges? I think the OpenStack Edge Whitepaper does a > good > job summarizing some of them: > > > https://www.openstack.org/assets/edge/OpenStack-EdgeWhitepaper-v3-online.pdf > > They include: > > - management of distributed infrastructure > - massive scale (thousands instead of hundreds) > - limited network connectivity > - isolation of distributed sites > - orchestration of federated services across multiple sites > > We already have a lot of ongoing work that directly or indirectly starts to > address some of these challenges. That work includes things like > config-download, split-controlplane, metalsmith integration, validations, > all-in-one, and standalone. > > I laid out some initial ideas in a previous message: > > http://lists.openstack.org/pipermail/openstack-dev/2018-July/132398.html > > I'll be reviewing some of that here and going into a bit more detail. > > These are some of the high level ideas I'd like to see TripleO start to > address: > > - More separation between planning and deploying (likely to be further > defined > in spec discussion). We've had these concepts for a while, but we need > to do > a better job of surfacing them to users as deployments grow in size and > complexity. > One of the focus points of ui/cli and workflows squads for Stein is getting GUI and CLI consolidated so that both clients operate on deployment plan via Mistral workflows. We are currently working on identifying missing CLI commands which would lead to adopting the same workflow as GUI uses. This will lead to complete interoperability between the clients and would make a deployment plan the first-class citizen as Ben mentioned in discussion linked above. 
Existing plan import/export functionality makes the deployment plan easily portable and replicable, as it is possible to export the plan at any point in time and re-use it (with the ability to still apply some tweaks for each usage). Steven's work [1] introduces plan-types, which add the ability to define multiple starting points for the deployment plan. [1] https://review.openstack.org/#/c/574753 > > With config-download, we can more easily separate the phases of > rendering, > downloading, validating, and applying the configuration. As we increase > in > scale to managing many deployments, we should take advantage of what > each of > those phases offers. > > The separation also makes the deployment more portable, as we should > eliminate any restrictions that force the undercloud to be the control > node > applying the configuration. > > - Management of multiple deployments from a single undercloud. This is of > course already possible today, but we need better docs and polish and > more > testing to flush out any bugs. > > - Plan and template management in git. > > This could be an iterative step towards eliminating Swift in the > undercloud. > Swift seemed like a natural choice at the time because it was an existing > OpenStack service. However, I think git would do a better job at > tracking > history and comparing changes and is much more lightweight than Swift. > We've > been managing the config-download directory as a git repo, and I like > this > direction. For now, we are just putting the whole git repo in Swift, but > I > wonder if it makes sense to consider eliminating Swift entirely. We need > to > consider the scale of managing thousands of plans for separate edge > deployments. > > I also think this would be a step towards undercloud simplification. > +1, we need to identify how much this affects the existing API and overall user experience for managing deployment plans. 
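The "plan and template management in git" idea above can be sketched minimally: each plan update becomes a commit, so history and diffs come for free. This is only an illustration using plain `git` via subprocess; the repo layout, file name, and commit-message conventions are invented, not TripleO code:

```python
# Sketch: tracking a deployment plan in git instead of Swift.
# Requires a `git` binary on PATH; all naming conventions are hypothetical.
import pathlib
import subprocess
import tempfile

def git(repo, *args):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(["git", "-C", str(repo), *args],
                          check=True, capture_output=True, text=True).stdout

repo = pathlib.Path(tempfile.mkdtemp())
git(repo, "init")
git(repo, "config", "user.email", "undercloud@example.com")
git(repo, "config", "user.name", "undercloud")

# Initial plan import becomes the first commit.
(repo / "plan-environment.yaml").write_text("parameter_defaults: {}\n")
git(repo, "add", "-A")
git(repo, "commit", "-m", "initial plan")

# A plan update is just another commit; `git log`/`git diff` give history.
(repo / "plan-environment.yaml").write_text("parameter_defaults: {x: 1}\n")
git(repo, "add", "-A")
git(repo, "commit", "-m", "update parameters")

log = git(repo, "log", "--oneline")
assert len(log.strip().splitlines()) == 2
```

Compared with Swift, this gets change tracking and comparison "for free", at the cost of managing potentially thousands of small repos at edge scale, which is the trade-off raised above.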
Current plan management options we support are: - create plan from default files (/usr/share/tht...) - create/update plan from local directory - create/update plan by providing tarball - create/update plan from remote git repository Ian has been working on similar efforts towards performance improvements [2]. It would be good to take this a step further and evaluate the possibility to eliminate Swift entirely. [2] https://review.openstack.org/#/c/581153/ -- Jirka > > - Orchestration between plans. I think there's general agreement around > scaling > up the undercloud to be more effective at managing and deploying multiple > plans. > > The plans could be different OpenStack deployments potentially sharing > some > resources. Or, they could be deployments of different software stacks > (Kubernetes/OpenShift, Ceph, etc). > > We'll need to develop some common interfaces for some basic orchestration > between plans. It could include dependencies, ordering, and sharing > parameter > data (such as passwords or connection info). There is already some > ongoing > discussion about some of this work: > > > http://lists.openstack.org/pipermail/openstack-dev/2018-August/133247.html > > I would suspect this would start out as collecting specific use cases, > and > then figuring out the right generic interfaces. > > - Multiple deployments of a single plan. This could be useful for doing > many > deployments that are all the same. Of course some info might be different > such as network IP's, hostnames, and node specific details. We could have > some generic input interfaces for those sorts of things without having to > create new Heat stacks, which would allow re-using the same plan/stack > for > multiple deployments. When scaling to hundreds/thousands of edge > deployments > this could be really effective at side-stepping managing > hundreds/thousands > of Heat stacks. > > We may also need further separation between a plan and its deployment > state > to have this modularity. 
> > - Distributed management/application of configuration. Even though the > configuration is portable (config-download), we may still want some > automation around applying the deployment when not using the undercloud > as a > control node. I think things like ansible-runner or Ansible AWX could > help > here, or perhaps mistral-executor agents, or "mistral as a library". This > would also make our workflows more portable. > > - New documentation highlighting some or all of the above features and how > to > take advantage of it for new use cases (thousands of edge deployments, > etc). > I see this as a sort of "TripleO Edge Deployment Guide" that would > highlight > how to take advantage of TripleO for Edge/multi-site use cases. > > Obviously all the ideas are a lot of work, and not something I think we'll > complete in a single cycle. > > I'd like to pull a squad together focused on Edge/multi-site/multi-cloud > and > TripleO. On that note, this squad could also work together with other > deployment projects that are looking at similar use cases and look to > collaborate. > > If you're interested in working on this squad, I'd see our first tasks as > being: > > - Brainstorming additional ideas to the above > - Breaking down ideas into actionable specs/blueprints for stein (and > possibly > future releases). > - Coming up with a consistent message around direction and vision for > solving > these deployment challenges. > - Bringing together ongoing work that relates to these use cases together > so > that we're all collaborating with shared vision and purpose and we can > help > prioritize reviews/ci/etc. > - Identifying any discussion items we need to work through in person at the > upcoming Denver PTG. > > I'm happy to help facilitate the squad. 
If you have any feedback on these > ideas > or would like to join the squad, reply to the thread or sign up in the > etherpad: > > https://etherpad.openstack.org/p/tripleo-edge-squad-status > > I'm just referring to the squad as "Edge" for now, but we can also pick a > cooler owl themed name :). > > -- > -- James Slagle > -- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Aug 22 13:29:35 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 22 Aug 2018 09:29:35 -0400 Subject: [openstack-dev] [goal][python3] week 2 update In-Reply-To: References: <1534778701-sup-1930@lrrr.local> Message-ID: <1534944544-sup-4068@lrrr.local> Excerpts from William M Edmonds's message of 2018-08-22 07:42:43 -0400: > > Doug Hellmann wrote on 08/20/2018 11:27:09 AM: > > If your team is ready to have your zuul settings migrated, please > > let us know by following up to this email. We will start with the > > volunteers, and then work our way through the other teams. > > I think PowerVMStackers is ready (so nova-powervm, networking-powervm, > ceilometer-powervm). 
Here you go:

+----------------------------------------------+------------------------------+-------------------------------------+---------------+
| Subject                                      | Repo                         | URL                                 | Branch        |
+----------------------------------------------+------------------------------+-------------------------------------+---------------+
| import zuul job settings from project-config | openstack/ceilometer-powervm | https://review.openstack.org/594984 | master        |
| add python 3.6 unit test job                 | openstack/ceilometer-powervm | https://review.openstack.org/594985 | master        |
| import zuul job settings from project-config | openstack/ceilometer-powervm | https://review.openstack.org/594989 | stable/ocata  |
| import zuul job settings from project-config | openstack/ceilometer-powervm | https://review.openstack.org/594992 | stable/pike   |
| import zuul job settings from project-config | openstack/ceilometer-powervm | https://review.openstack.org/594995 | stable/queens |
| import zuul job settings from project-config | openstack/ceilometer-powervm | https://review.openstack.org/594998 | stable/rocky  |
| import zuul job settings from project-config | openstack/networking-powervm | https://review.openstack.org/594986 | master        |
| import zuul job settings from project-config | openstack/networking-powervm | https://review.openstack.org/594990 | stable/ocata  |
| import zuul job settings from project-config | openstack/networking-powervm | https://review.openstack.org/594993 | stable/pike   |
| import zuul job settings from project-config | openstack/networking-powervm | https://review.openstack.org/594996 | stable/queens |
| import zuul job settings from project-config | openstack/networking-powervm | https://review.openstack.org/594999 | stable/rocky  |
| import zuul job settings from project-config | openstack/nova-powervm       | https://review.openstack.org/594987 | master        |
| add python 3.6 unit test job                 | openstack/nova-powervm       | https://review.openstack.org/594988 | master        |
| import zuul job settings from project-config | openstack/nova-powervm       | https://review.openstack.org/594991 | stable/ocata  |
| import zuul job settings from project-config | openstack/nova-powervm       | https://review.openstack.org/594994 | stable/pike   |
| import zuul job settings from project-config | openstack/nova-powervm       | https://review.openstack.org/594997 | stable/queens |
| import zuul job settings from project-config | openstack/nova-powervm       | https://review.openstack.org/595000 | stable/rocky  |
+----------------------------------------------+------------------------------+-------------------------------------+---------------+

From doug at doughellmann.com Wed Aug 22 13:49:13 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 22 Aug 2018 09:49:13 -0400 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local> <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> Message-ID: <1534945106-sup-4359@lrrr.local> Excerpts from melanie witt's message of 2018-08-21 15:05:00 -0700: > On Tue, 21 Aug 2018 16:41:11 -0400, Doug Hellmann wrote: > > Excerpts from melanie witt's message of 2018-08-21 12:53:43 -0700: > >> On Tue, 21 Aug 2018 06:50:56 -0500, Matt Riedemann wrote: > >>> At this point, I think we're at: > >>> > >>> 1. Should placement be extracted into it's own git repo in Stein while > >>> nova still has known major issues which will have dependencies on > >>> placement changes, mainly modeling affinity? > >>> > >>> 2. If we extract, does it go under compute governance or a new project > >>> with a new PTL.
> >>> > >>> As I've said, I personally believe that unless we have concrete plans > >>> for the big items in #1, we shouldn't hold up the extraction. We said in > >>> Dublin we wouldn't extract to a new git repo in Rocky but we'd work up > >>> to that point so we could do it in Stein, so this shouldn't surprise > >>> anyone. The actual code extraction and re-packaging and all that is > >>> going to be the biggest technical issue with all of this, and will > >>> likely take all of stein to complete it after all the bugs are shaken out. > >>> > >>> For #2, I think for now, in the interim, while we deal with the > >>> technical headache of the code extraction itself, it's best to leave the > >>> new repo under compute governance so the existing team is intact and we > >>> don't conflate the people issue with the technical issue at the same > >>> time. Get the hard technical part done first, and then we can move it > >>> out of compute governance. Once it's in its own git repo, we can change > >>> the core team as needed but I think it should be initialized with > >>> existing nova-core. > >> > >> I'm in support of extracting placement into its own git repo because > >> Chris has done a lot of work to reduce dependencies in placement and > >> moving it into its own repo would help in not having to keep chasing > >> that. As has been said before, I think all of us agree that placement > >> should be separate as an end goal. The question is when to fully > >> separate it from governance. > >> > >> It's true that we don't have concrete plans for affinity modeling and > >> shared storage modeling. But I think we do have concrete plans for vGPU > >> enhancements (being able to have different vGPU types on one compute > >> host and adding support for traits). vGPU support is an important and > >> highly sought after feature for operators and users, as we witnessed at > >> the last Summit in Vancouver. 
vGPU support is currently using a flat > >> resource provider structure that needs to be migrated to nested in order > >> to do the enhancement work, and that's how the reshaper work came about. > >> (Reshaper work will migrate a flat resource provider structure to a > >> nested one.) > >> > >> We have the nested resource provider support in placement but we need to > >> integrate the Nova side, leveraging the reshaper code. The reshaper code > >> is still going through code review, then next we have the integration to > >> do. I think things are bound to break when we integrate it, just because > >> nothing is ever perfect, as much as we scrutinize it and the real test > >> is when we start using it for real. I think going through this > >> integration would be best done *before* extraction to a new repo. But > >> given that there is never a "good" time to extract something to a new > >> repo, I am OK with the idea of doing the extraction first, if that is > >> what most people want to do. > >> > >> What I'm concerned about on the governance piece is how things look as > >> far as project priorities between the two projects if they are split. > >> Affinity modeling and shared storage support are compute features > >> OpenStack operators and users need. Affinity modeling in > >> placement is needed to achieve parity for affinity scheduling with > >> multiple cells. That means, affinity scheduling in Nova with multiple > >> cells is susceptible to races and does *not* work as well as the > >> previous single cell support. Shared storage support is something > >> operators have badly needed for years now and was envisioned to be > >> solved with placement. > >> > >> Given all of that, I'm not seeing how *now* is a good time to separate > >> the placement project under separate governance with separate goals and > >> priorities.
If operators need things for compute, that are well-known > >> and that placement was created to solve, how will placement have a > >> shared interest in solving compute problems, if it is not part of the > >> compute project? > >> > > > > Who are candidates to be members of a review team for the placement > > repository after the code is moved out of openstack/nova? > > > > How many of them are also members of the nova-core team? > > I assume you pose this question in the proposed situation I described > where placement is a repo under compute. I expect the review team to be No, not at all. I'm trying to understand how you think a completely separate team is going to cause problems. Because it seems like at least a large portion, if not all, of the contributors want it, and I need to have a very good reason for denying their request, if we do. Right now, I understand that there are concerns, but I don't understand why. > > What do you think those folks are more interested in working on than the > > things you listed as needing to be done to support the nova use cases? > > I'm not thinking of anything specific here. At a high level, I don't see > how separating into two separate groups under separate leadership helps > us deliver the listed things for operators and users. I tend to think > that a unified group will be more successful at that. OK. At the same time, I'm trying to understand why you have a hard time believing a new team's priorities would not be aligned with the nova team's priorities if, as it seems, a large percentage of that new team would be made up of the same exact people. > > What can they do to reassure you that they will work on the items > > nova needs, regardless of the governance structure? > > If they were separate groups, I don't see why the leadership of > placement would necessarily share goals and priorities with compute. I > think that is why it's much more difficult to get things done with two > separate groups, in general. 
Most of the teams in the community seem to have a relatively easy time coming to common agreement on priorities and goals, even when they are not so closely related as the nova and placement teams would be. So I guess I still don't see what the problem would be, and am looking for more details, either about the concerns you all have or ways to alleviate them. > I want to reiterate again that the only thing I care about here is > delivering functionality that operators and users need. vGPUs, in > particular, has been highly sought after at a community-wide level, not > just from the compute community. I want to deliver the features that > people are depending on and IMHO, being a unified group helps that. I > don't see how being two separate groups helps that. I don't think doing those things is mutually exclusive with solving, or at least addressing, the underlying trust and self-governance issues here, though. In fact, I think dealing with *that* is going to make us all more effective at delivering the software, in the long term because we will have cleared up what the expectations between the two teams are. Doug From openstack at fried.cc Wed Aug 22 13:52:35 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 22 Aug 2018 08:52:35 -0500 Subject: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict In-Reply-To: <1534942527.7552.8@smtp.office365.com> References: <1534419109.24276.3@smtp.office365.com> <1534419803.3149.0@smtp.office365.com> <1534500637.29318.1@smtp.office365.com> <7b45da6c-c8d3-c54f-89c0-9798589dfdc4@fried.cc> <1534942527.7552.8@smtp.office365.com> Message-ID: <5d1054d6-910a-c73d-beed-6d2ef691c63d@fried.cc> b) sounds the most sane in both cases. I don't like the idea of "your move operation failed and you have no recourse but to delete your instance".
And automatic retry sounds lovely, but potentially hairy to implement (and we would need to account for the retries-failed scenario anyway) so at least initially we should leave that out. On 08/22/2018 07:55 AM, Balázs Gibizer wrote: > > > On Fri, Aug 17, 2018 at 5:40 PM, Eric Fried wrote: >> gibi- >> >>>>  - On migration, when we transfer the allocations in either >>>> direction, a >>>>  conflict means someone managed to resize (or otherwise change >>>>  allocations?) since the last time we pulled data. Given the global >>>> lock >>>>  in the report client, this should have been tough to do. If it does >>>>  happen, I would think any retry would need to be done all the way back >>>>  at the claim, which I imagine is higher up than we should go. So >>>> again, >>>>  I think we should fail the migration and make the user retry. >>> >>>  Do we want to fail the whole migration or just the migration step (e.g. >>>  confirm, revert)? >>>  The later means that failure during confirm or revert would put the >>>  instance back to VERIFY_RESIZE. While the former would mean that in >>> case >>>  of conflict at confirm we try an automatic revert. But for a >>> conflict at >>>  revert we can only put the instance to ERROR state. >> >> This again should be "impossible" to come across. What would the >> behavior be if we hit, say, ValueError in this spot? > > I might not totally follow you. I see two options to choose from for the > revert case: > > a) Allocation manipulation error during revert of a migration causes > that instance goes to ERROR. -> end user cannot retry the revert the > instance needs to be deleted. > > b) Allocation manipulation error during revert of a migration causes > that the instance goes back to VERIFY_RESIZE state. -> end user can > retry the revert via the API. > > I see three options to choose from for the confirm case: > > a) Allocation manipulation error during confirm of a migration causes > that instance goes to ERROR. 
-> end user cannot retry the confirm the > instance needs to be deleted. > > b) Allocation manipulation error during confirm of a migration causes > that the instance goes back to VERIFY_RESIZE state. -> end user can > retry the confirm via the API. > > c) Allocation manipulation error during confirm of a migration causes > that nova automatically tries to revert the migration. (For failure > during this revert the same options available as for the generic revert > case, see above) > > We also need to consider live migration. It is similar in a sense that > it also use move_allocations. But it is different as the end user > doesn't explicitly confirm or revert a live migration. > > I'm looking for opinions about which option we should take in each cases. > > gibi > >> >> -efried >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From no-reply at openstack.org Wed Aug 22 13:57:03 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 22 Aug 2018 13:57:03 -0000 Subject: [openstack-dev] horizon 14.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for horizon for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/horizon/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. 
You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/horizon/log/?h=stable/rocky Release notes for horizon can be found at: https://docs.openstack.org/releasenotes/horizon/ From sean.mcginnis at gmx.com Wed Aug 22 13:57:47 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 22 Aug 2018 08:57:47 -0500 Subject: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: <20180822094620.kncry4ufbe6fwi5u@localhost> References: <20180822094620.kncry4ufbe6fwi5u@localhost> Message-ID: <20180822135747.GA27570@sm-workstation> > > The solution is conceptually simple. We add a new API microversion in > Cinder that adds and optional parameter called "generic_keep_source" > (defaults to False) to both migrate and retype operations. > > This means that if the driver optimized migration cannot do the > migration and the generic migration code is the one doing the migration, > then, instead of our final step being to swap the volume id's and > deleting the source volume, what we would do is to swap the volume id's > and move all the snapshots to reference the new volume. Then we would > create a user message with the new ID of the volume. > How would you propose to "move all the snapshots to reference the new volume"? Most storage does not allow a snapshot to be moved from one volume to another. really the only way a migration of a snapshot can work across all storage types would be to incrementally copy the data from a source to a destination up to the point of the oldest snapshot, create a new snapshot on the new volume, then proceed through until all snapshots have been rebuilt on the new volume. 
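The rebuild-the-chain approach described above can be sketched roughly as follows. This is a toy simulation, not Cinder code: the names (Volume, migrate_volume_with_snapshots) are made up for illustration, and data is modeled as byte strings where a real driver would copy blocks incrementally.

```python
# Toy simulation of the generic "rebuild the snapshot chain" migration:
# copy data up to the oldest snapshot, snapshot the destination, then
# repeat for each newer snapshot until the live head is reached.

class Volume:
    def __init__(self, name):
        self.name = name
        self.data = b""      # current ("live") contents of the volume
        self.snapshots = []  # list of (snapshot_name, point-in-time data)

    def create_snapshot(self, snap_name):
        self.snapshots.append((snap_name, self.data))


def migrate_volume_with_snapshots(src, dest):
    """Copy src to dest, recreating each snapshot oldest-first."""
    for snap_name, snap_data in src.snapshots:  # oldest -> newest
        dest.data = snap_data                   # copy data up to this point
        dest.create_snapshot(snap_name)         # recreate snapshot on dest
    dest.data = src.data                        # finally copy the live head


src = Volume("src")
src.data = b"v1"
src.create_snapshot("snap1")
src.data = b"v1v2"
src.create_snapshot("snap2")
src.data = b"v1v2v3"

dest = Volume("dest")
migrate_volume_with_snapshots(src, dest)
print(dest.snapshots)  # [('snap1', b'v1'), ('snap2', b'v1v2')]
print(dest.data)       # b'v1v2v3'
```

It also makes the cost visible: the destination is written once per snapshot plus once for the live head, which is why this only makes sense as a fallback when the backend cannot move snapshots natively.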
From no-reply at openstack.org Wed Aug 22 14:02:17 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 22 Aug 2018 14:02:17 -0000 Subject: [openstack-dev] cinder 13.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for cinder for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/cinder/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/cinder/log/?h=stable/rocky Release notes for cinder can be found at: https://docs.openstack.org/releasenotes/cinder/ From no-reply at openstack.org Wed Aug 22 14:03:43 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 22 Aug 2018 14:03:43 -0000 Subject: [openstack-dev] heat 11.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for heat for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/heat/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/heat/log/?h=stable/rocky Release notes for heat can be found at: https://docs.openstack.org/releasenotes/heat/ From openstack at fried.cc Wed Aug 22 14:13:25 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 22 Aug 2018 09:13:25 -0500 Subject: [openstack-dev] [oslo] UUID sentinel needs a home Message-ID: For some time, nova has been using uuidsentinel [1] which conveniently allows you to get a random UUID in a single LOC with a readable name that's the same every time you reference it within that process (but not across processes). Example usage: [2]. We would like other projects (notably the soon-to-be-split-out placement project) to be able to use uuidsentinel without duplicating the code. So we would like to stuff it in an oslo lib. The question is whether it should live in oslotest [3] or in oslo_utils.uuidutils [4]. The proposed patches are (almost) the same. The issues we've thought of so far: - If this thing is used only for test, oslotest makes sense. We haven't thought of a non-test use, but somebody surely will. - Conversely, if we put it in oslo_utils, we're kinda saying we support it for non-test too. (This is why the oslo_utils version does some extra work for thread safety and collision avoidance.) - In oslotest, awkwardness is necessary to avoid circular importing: uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In oslo_utils.uuidutils, everything is right there. - It's a... UUID util. If I didn't know anything and I was looking for a UUID util like uuidsentinel, I would look in a module called uuidutils first. We hereby solicit your opinions, either by further discussion here or as votes on the respective patches. 
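For anyone unfamiliar with the pattern, here is a minimal sketch of the uuidsentinel idea (simplified for illustration -- not the actual nova module, which is at [1]; the proposed oslo_utils version additionally does locking for thread safety and collision checks):

```python
import uuid

class UUIDSentinels:
    """Return a stable random UUID per attribute name, per process."""

    def __init__(self):
        self._sentinels = {}

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, i.e. for
        # sentinel names; guard against underscore/dunder lookups.
        if name.startswith("_"):
            raise AttributeError(name)
        if name not in self._sentinels:
            self._sentinels[name] = str(uuid.uuid4())
        return self._sentinels[name]

uuids = UUIDSentinels()

assert uuids.instance == uuids.instance  # stable within this process
assert uuids.instance != uuids.flavor    # distinct per name
uuid.UUID(uuids.instance)                # parses as a real UUID
```

The values are random per process run, which is why identity is only guaranteed within a single test process.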
Thanks, efried [1] https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py [2] https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115 [3] https://review.openstack.org/594068 [4] https://review.openstack.org/594179 From stdake at cisco.com Wed Aug 22 14:13:53 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Wed, 22 Aug 2018 14:13:53 +0000 Subject: [openstack-dev] [kolla][project navigator] kolla missing in project navigator In-Reply-To: <2ec52efe-78f8-23eb-6e83-be3955b75015@openstack.org> References: <5B7AC8AC.7000106@openstack.org> , <2ec52efe-78f8-23eb-6e83-be3955b75015@openstack.org> Message-ID: <1534947232328.74052@cisco.com> Thierry, Kolla likely belongs in the packaging recipes in the map. Kolla-Ansible belongs in the lifecycle tools. FWIW, I'm agree with Jean on the location of OpenStack-Ansible in the map. This is a deployment tool, not really a set of recipes. I think the name "openstack-ansible" as a project is what causes all the confusion. Some folks see it as a set of playbooks by its naming, but really its a lifecycle management tool simply using Ansible as a dependent technology. Jimmy, Thanks for your help in sorting out the project navigator. This is greatly appreciated. Cheers -steve ________________________________________ From: Thierry Carrez Sent: Monday, August 20, 2018 7:31 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [kolla][project navigator] kolla missing in project navigator Eduardo, "Kolla" was originally left out of the map (and therefore the new OpenStack components page) because the map only shows deliverables that are directly usable by deployers. That is why "Kolla-Ansible" is listed there and not "Kolla". 
Are you making the case that Kolla should be used directly by deployers (rather than run it though Ansible with Kolla-Ansible), and therefore should appear as a deployment option on the map as well ? -- Thierry Carrez (ttx) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jimmy at openstack.org Wed Aug 22 14:37:17 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 22 Aug 2018 09:37:17 -0500 Subject: [openstack-dev] Redis licensing terms changes In-Reply-To: <9182e211-b26e-ec0e-4b08-9bc53e0c82eb@openstack.org> References: <9182e211-b26e-ec0e-4b08-9bc53e0c82eb@openstack.org> Message-ID: <5B7D751D.5010800@openstack.org> Hmm... http://antirez.com/news/120 Today a page about the new Creative Common license in the Redis Labs web site was interpreted as if Redis itself switched license. This is not the case, Redis is, and will remain, BSD licensed. However in the fake news era my attempts to provide the correct information failed, and I’m still seeing everywhere “Redis is no longer open source”. The reality is that Redis remains BSD, and actually Redis Labs did the right thing supporting my effort to keep the Redis core open as usually. What is happening instead is that certain Redis modules, developed inside Redis Labs, are now released under the Common Clause (using Apache license as a base license). This means that basically certain enterprise add-ons, instead of being completely closed source as they could be, will be available with a more permissive license. Thierry Carrez wrote: > Haïkel wrote: >> I haven't seen this but I'd like to point that Redis moved to an open >> core licensing model. 
>> https://redislabs.com/community/commons-clause/ >> >> In short: >> * base engine remains under BSD license >> * modules move to ASL 2.0 + commons clause which is non-free >> (prohibits sales of derived products) > > Beyond the sale of a derived product, it prohibits selling hosting of > or providing consulting services on anything that depend on it... so > it's pretty broad. > >> IMHO, projects that rely on Redis as default driver, should consider >> alternatives (off course, it's up to them). > > The TC stated in the past that default drivers had to be open source, > so if anything depends on commons-claused Redis modules, they would > probably have to find an alternative... > > Which OpenStack components are affected ? > From nguyentrihai93 at gmail.com Wed Aug 22 14:48:22 2018 From: nguyentrihai93 at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gVHLDrSBI4bqjaQ==?=) Date: Wed, 22 Aug 2018 23:48:22 +0900 Subject: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein In-Reply-To: References: Message-ID: Hi, The link for Stein's Etherpad is missing. On Wed, Aug 22, 2018 at 5:58 PM Trinh Nguyen wrote: > Dear team, > > Here is my proposed action plan for Searchlight in Stein. The ultimate > goal is to revive Searchlight with a sustainable number of contributors and > can release as expected. > > 1. Migrate Searchlight to Storyboard with the help of Kendall > 2. Attract more contributors (as well as cores) > 3. Clean up docs, notes > 4. Review and clean up patches [1] [2] [3] [4] > 5. Setting up goals/features for Stein. We will need to have a virtual PTG > (September 10-14, 2018, Denver) since I cannot attend it this time. 
> > This is our Etherpad for Stein, please feel free to contribute from now on > until the PTG: > https://review.openstack.org/#/q/project:openstack/searchlight+status:open > > [1] > https://review.openstack.org/#/q/project:openstack/searchlight+status:open > [2] > https://review.openstack.org/#/q/project:openstack/searchlight-ui+status:open > [3] > https://review.openstack.org/#/q/project:openstack/python-searchlightclient+status:open > [4] > https://review.openstack.org/#/q/project:openstack/searchlight-specs+status:open > > If you have any idea or want to contribute, please ping me on IRC: > > - IRC Channel: #openstack-searchlight > - My IRC handler: dangtrinhnt > > > Bests, > > *Trinh Nguyen *| Founder & Chief Architect > > > > *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Nguyen Tri Hai / Ph.D. Student ANDA Lab., Soongsil Univ., Seoul, South Korea *[image: http://link.springer.com/chapter/10.1007/978-3-319-26135-5_4] * -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Wed Aug 22 14:49:42 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 22 Aug 2018 23:49:42 +0900 Subject: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein In-Reply-To: References: Message-ID: Oops, here you go: https://etherpad.openstack.org/p/searchlight-stein-ptg *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Wed, Aug 22, 2018 at 11:46 PM Nguyễn Trí Hải wrote: > Hi, > > The link for Stein's Etherpad is missing. > > On Wed, Aug 22, 2018 at 5:58 PM Trinh Nguyen > wrote: > >> Dear team, >> >> Here is my proposed action plan for Searchlight in Stein. 
The ultimate >> goal is to revive Searchlight with a sustainable number of contributors and >> can release as expected. >> >> 1. Migrate Searchlight to Storyboard with the help of Kendall >> 2. Attract more contributors (as well as cores) >> 3. Clean up docs, notes >> 4. Review and clean up patches [1] [2] [3] [4] >> 5. Setting up goals/features for Stein. We will need to have a virtual >> PTG (September 10-14, 2018, Denver) since I cannot attend it this time. >> >> This is our Etherpad for Stein, please feel free to contribute from now >> on until the PTG: >> https://review.openstack.org/#/q/project:openstack/searchlight+status:open >> >> [1] >> https://review.openstack.org/#/q/project:openstack/searchlight+status:open >> [2] >> https://review.openstack.org/#/q/project:openstack/searchlight-ui+status:open >> [3] >> https://review.openstack.org/#/q/project:openstack/python-searchlightclient+status:open >> [4] >> https://review.openstack.org/#/q/project:openstack/searchlight-specs+status:open >> >> If you have any idea or want to contribute, please ping me on IRC: >> >> - IRC Channel: #openstack-searchlight >> - My IRC handler: dangtrinhnt >> >> >> Bests, >> >> *Trinh Nguyen *| Founder & Chief Architect >> >> >> >> *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > > Nguyen Tri Hai / Ph.D. 
Student > > ANDA Lab., Soongsil Univ., Seoul, South Korea > > > > *[image: > http://link.springer.com/chapter/10.1007/978-3-319-26135-5_4] > * > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Wed Aug 22 16:12:38 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 22 Aug 2018 09:12:38 -0700 Subject: [openstack-dev] [tempest][qa][congress] trouble setting tempest feature flag Message-ID: Hi all, I have added feature flags for the congress tempest plugin [1] and set them in the devstack plugin [2], but the flags seem to be ignored. The tests are skipped [3] according to the default False flag rather than run according to the True flag set in devstack plugin. Any hints on what may be wrong? Thanks so much! [1] https://review.openstack.org/#/c/594747/3 [2] https://review.openstack.org/#/c/594793/1/devstack/plugin.sh [3] http://logs.openstack.org/64/594564/3/check/congress-devstack-api-mysql/b2cd46f/logs/testr_results.html.gz (the bottom two skipped tests were expected to run) From bodenvmw at gmail.com Wed Aug 22 16:49:32 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Wed, 22 Aug 2018 10:49:32 -0600 Subject: [openstack-dev] [neutron] Old patches cleaning In-Reply-To: <5F0E976C-25FE-4C68-9423-60DFF173745C@redhat.com> References: <5F0E976C-25FE-4C68-9423-60DFF173745C@redhat.com> Message-ID: <3961da35-c755-5842-acaa-e3426d02c727@gmail.com> On 8/22/18 2:10 AM, Slawomir Kaplonski wrote:> I will run it only for projects like: > * neutron-lib > > If You have any concerns about running this script, please raise Your hand now :) Thanks for this. 
Personally I don't see a need to cleanup old reviews for neutron-lib; it's a pretty small list and a few oldies are still being discussed in some form or another. But if you think there's a need, that's fine as well. Neutron is a whole different story. From mark at stackhpc.com Wed Aug 22 17:51:26 2018 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 22 Aug 2018 18:51:26 +0100 Subject: [openstack-dev] [kayobe] Kayobe update Message-ID: Hello Kayobians, I thought it is about time to do another update. # Releases Work continues on master adding Kayobe features. We're still deploying the Queens release of OpenStack, although the Rocky patch [1] gets refreshed every so often to ensure we're not lagging behind. I've been thinking about whether it makes sense to continue to target Kayobe releases against a specific release of OpenStack. It's relatively decoupled in general. * What if we could specify the version of OpenStack deploy from a list of supported versions? * What if we could specify a version of Kolla Ansible to install, and what if Kolla Ansible supported deploying different releases of each service? This is often how clouds end up in practice. This would increase our test matrix somewhat, but could allow us to stay current while still adding Kayobe features for OpenStack releases that operators are actually using. # Recently added features * Ansible 2.5 support [2] * Add support for a separate admin network [3] # Upgrades There is currently no coverage of upgrades in CI. This leaves us in a dangerous position. I've started work on an upgrade job [4], which seems almost ready. We first deploy Pike, smoke test, upgrade to Queens, then smoke test again. Hopefully this job will influence development of a similar job in Kolla Ansible during the Stein cycle that ensures issues are caught upstream where possible. # PTG There won't be an official Kayobe session at the PTG in Denver, although I and a few others from the team will be present. 
If anyone would like to meet to discuss Kayobe then don't be shy. Please get in touch either via email or IRC (mgoddard). # New faces We've seen a few new faces in #openstack-kayobe recently. Welcome, and keep asking questions - they help us improve our software and documentation. [1] https://review.openstack.org/#/c/568804 [2] https://review.openstack.org/562306 [3] https://review.openstack.org/572370 [4] https://review.openstack.org/592932 Cheers, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Wed Aug 22 18:03:43 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 22 Aug 2018 11:03:43 -0700 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <1534945106-sup-4359@lrrr.local> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local> <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> <1534945106-sup-4359@lrrr.local> Message-ID: <775949fc-a058-a076-06a5-c42bb8d016ec@gmail.com> On Wed, 22 Aug 2018 09:49:13 -0400, Doug Hellmann wrote: > Excerpts from melanie witt's message of 2018-08-21 15:05:00 -0700: >> On Tue, 21 Aug 2018 16:41:11 -0400, Doug Hellmann wrote: >>> Excerpts from melanie witt's message of 2018-08-21 12:53:43 -0700: >>>> On Tue, 21 Aug 2018 06:50:56 -0500, Matt Riedemann wrote: >>>>> At this point, I think we're at: >>>>> >>>>> 1. Should placement be extracted into it's own git repo in Stein while >>>>> nova still has known major issues which will have dependencies on >>>>> placement changes, mainly modeling affinity? >>>>> >>>>> 2. If we extract, does it go under compute governance or a new project >>>>> with a new PTL. 
>>>>> >>>>> As I've said, I personally believe that unless we have concrete plans >>>>> for the big items in #1, we shouldn't hold up the extraction. We said in >>>>> Dublin we wouldn't extract to a new git repo in Rocky but we'd work up >>>>> to that point so we could do it in Stein, so this shouldn't surprise >>>>> anyone. The actual code extraction and re-packaging and all that is >>>>> going to be the biggest technical issue with all of this, and will >>>>> likely take all of stein to complete it after all the bugs are shaken out. >>>>> >>>>> For #2, I think for now, in the interim, while we deal with the >>>>> technical headache of the code extraction itself, it's best to leave the >>>>> new repo under compute governance so the existing team is intact and we >>>>> don't conflate the people issue with the technical issue at the same >>>>> time. Get the hard technical part done first, and then we can move it >>>>> out of compute governance. Once it's in its own git repo, we can change >>>>> the core team as needed but I think it should be initialized with >>>>> existing nova-core. >>>> >>>> I'm in support of extracting placement into its own git repo because >>>> Chris has done a lot of work to reduce dependencies in placement and >>>> moving it into its own repo would help in not having to keep chasing >>>> that. As has been said before, I think all of us agree that placement >>>> should be separate as an end goal. The question is when to fully >>>> separate it from governance. >>>> >>>> It's true that we don't have concrete plans for affinity modeling and >>>> shared storage modeling. But I think we do have concrete plans for vGPU >>>> enhancements (being able to have different vGPU types on one compute >>>> host and adding support for traits). vGPU support is an important and >>>> highly sought after feature for operators and users, as we witnessed at >>>> the last Summit in Vancouver. 
vGPU support is currently using a flat >>>> resource provider structure that needs to be migrated to nested in order >>>> to do the enhancement work, and that's how the reshaper work came about. >>>> (Reshaper work will migrate a flat resource provider structure to a >>>> nested one.) >>>> >>>> We have the nested resource provider support in placement but we need to >>>> integrate the Nova side, leveraging the reshaper code. The reshaper code >>>> is still going through code review, then next we have the integration to >>>> do. I think things are bound to break when we integrate it, just because >>>> nothing is ever perfect, as much as we scrutinize it and the real test >>>> is when we start using it for real. I think going through this >>>> integration would be best done *before* extraction to a new repo. But >>>> given that there is never a "good" time to extract something to a new >>>> repo, I am OK with the idea of doing the extraction first, if that is >>>> what most people want to do. >>>> >>>> What I'm concerned about on the governance piece is how things look as >>>> far as project priorities between the two projects if they are split. >>>> Affinity modeling and shared storage support are compute features >>>> OpenStack operators and users need. Operators need affinity modeling in >>>> placement to achieve parity for affinity scheduling with >>>> multiple cells. That means affinity scheduling in Nova with multiple >>>> cells is susceptible to races and does *not* work as well as the >>>> previous single cell support. Shared storage support is something >>>> operators have badly needed for years now and was envisioned to be >>>> solved with placement. >>>> >>>> Given all of that, I'm not seeing how *now* is a good time to separate >>>> the placement project under separate governance with separate goals and >>>> priorities. 
If operators need things for compute, that are well-known >>>> and that placement was created to solve, how will placement have a >>>> shared interest in solving compute problems, if it is not part of the >>>> compute project? >>>> >>> >>> Who are candidates to be members of a review team for the placement >>> repository after the code is moved out of openstack/nova? >>> >>> How many of them are also members of the nova-core team? >> >> I assume you pose this question in the proposed situation I described >> where placement is a repo under compute. I expect the review team to be > > No, not at all. I'm trying to understand how you think a completely > separate team is going to cause problems. Because it seems like at > least a large portion, if not all, of the contributors want it, and > I need to have a very good reason for denying their request, if we > do. Right now, I understand that there are concerns, but I don't > understand why. I have been trying to explain why over several replies to this thread. Fracturing a group is not something anyone does to foster cooperation and shared priorities and goals. It is my job and responsibility to ensure that features that operators and users need, get delivered to them. We have a list of specific features that we know operators and users need, right now. And fracturing the group is something that will negatively impact our ability to implement them. Working farther apart will negatively impact our ability to coordinate work that is closely coupled. The last thing I want to do right now is build walls. I do not want there to be separate goal-setting and priorities discussions -- I want us to work together, coordinating as one group. At the very least, separating the groups is something to be taken very seriously and is not reversible. I want to be very sure it will be better for operators and users relying on us, if the groups were separated. I haven't heard any reasons why it would be, yet. 
Thus, I would like to see an incremental approach, given the outstanding work items we know we have. I would like to see us make changes and *try* to create an environment where people can work together in one group, before we go to the extreme option and separate everything completely. I would like to extract the code into its own repo under compute, with the expectation that it is to evolve *independently* of Nova code, and with a subset of nova-core, plus a new placement-core team with placement experts added to it. And we see how that goes. Then, we reassess. If everyone gives that an earnest try, and it is not working for people, then we will know for sure that we tried to make changes and stay together as one group, and it didn't work, and separating into two groups in the middle of the work items we have, is what we have to do. >>> What do you think those folks are more interested in working on than the >>> things you listed as needing to be done to support the nova use cases? >> >> I'm not thinking of anything specific here. At a high level, I don't see >> how separating into two separate groups under separate leadership helps >> us deliver the listed things for operators and users. I tend to think >> that a unified group will be more successful at that. > > OK. At the same time, I'm trying to understand why you have a hard > time believing a new team's priorities would not be aligned with > the nova team's priorities if, as it seems, a large percentage of > that new team would be made up of the same exact people. I think it's about context. If two separate projects do their own priority and goal setting, separately, I think they will naturally be more different than they would be if they were one project. Currently, we agree on goals and priorities together, in the compute context. If placement has its own separate context, the priority setting and goal planning will be done in the context of placement. 
In two separate groups, someone who is a member of both the Nova and Placement teams would have to persuade Placement-only members to agree to prioritize a particular item. This may sound subtle, but it's a notable difference in how things work when it's one team vs two separate teams. I think having shared context and alignment, at this point in time, when we have outstanding closely coupled nova/placement work to do, is critical in delivering for operators and users who are depending on us. >>> What can they do to reassure you that they will work on the items >>> nova needs, regardless of the governance structure? >> >> If they were separate groups, I don't see why the leadership of >> placement would necessarily share goals and priorities with compute. I >> think that is why it's much more difficult to get things done with two >> separate groups, in general. > > Most of the teams in the community seem to have a relatively easy time > coming to common agreement on priorities and goals, even when they are > not so closely related as the nova and placement teams would be. So I > guess I still don't see what the problem would be, and am looking for > more details, either about the concerns you all have or ways to > alleviate them. The "relatively easy time" sounds a bit vague to me. My experience with cross-project work (multi-attach) has been that someone has to serve as a champion of a feature or effort, and persuade the other team to care about it and work together on it. If the team is sufficiently persuaded, they agree to collaborate on a thing. Then, they go into their silos and do some work on it. Then, the champion checks in on the teams after some time passes. The champion gathers problems and blockers and sets up a meeting between the two teams or takes the messages back and forth themselves. The teams do some more work in their silos. The champion checks in again after some time, setting up a meeting to re-synchronize the teams. 
And this continues until the feature is complete, if the champion has been around that long. This can work, but it's a much different experience than being one team. Being that we are still in the thick of it, and need to work on the items I have mentioned before in my other replies, I don't think *now* is a good time to separate groups. Separating now will hurt operators, users, and customers. Separating now will make already challenging work even more challenging, with more barriers to climb over and more effort to stay connected. That is why I'm concerned. >> I want to reiterate again that the only thing I care about here is >> delivering functionality that operators and users need. vGPUs, in >> particular, has been highly sought after at a community-wide level, not >> just from the compute community. I want to deliver the features that >> people are depending on and IMHO, being a unified group helps that. I >> don't see how being two separate groups helps that. > > I don't think doing those things is mutually exclusive with solving, > or at least addressing, the underlying trust and self-governance > issues here, though. In fact, I think dealing with *that* is going > to make us all more effective at delivering the software, in the > long term because we will have cleared up what the expectations > between the two teams is. To be clear, there are not underlying trust issues with most of the team. A trust issue was expressed by one member of the team and I think it's unfair to apply it to everyone else. Aside from that, it has always been difficult to add folks to nova-core because of the large scope and expertise needed to approve code across all of Nova. Extracting the placement code into its own repo alleviates the problem of the massive scope and gives us the ability to have a placement-core team. And I expect the placement repo to evolve independently of Nova code, as it's been intended to be a separate project, eventually. 
My hope is that making these changes will improve the experience in the placement subteam and compute as a whole. I hope that the idea of a compromise to try it out will be amenable to everyone. I don't take the idea of separating the groups completely, at this point in time, with outstanding closely coupled nova/placement work, lightly. I think it should be taken very seriously and done with care. And taking a step, trying it out, and then reassessing, seems like the most prudent way to go about it, IMO. Best, -melanie From emccormick at cirrusseven.com Wed Aug 22 18:07:45 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 22 Aug 2018 14:07:45 -0400 Subject: [openstack-dev] [kayobe] Kayobe update In-Reply-To: References: Message-ID: On Wed, Aug 22, 2018, 1:52 PM Mark Goddard wrote: > Hello Kayobians, > > I thought it is about time to do another update. > > # PTG > > There won't be an official Kayobe session at the PTG in Denver, although I > and a few others from the team will be present. If anyone would like to > meet to discuss Kayobe then don't be shy. Please get in touch either via > email or IRC (mgoddard). > Would you have any interest in doing an overview / Q&A session with Operators Monday before lunch or sometime Tuesday? It doesn't need to be anything fancy or formal as these are all fishbowl sessions. It might be a good way to get some traction and feedback. > > Cheers, > Mark > -Erik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Aug 22 18:25:44 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 22 Aug 2018 18:25:44 +0000 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: <775949fc-a058-a076-06a5-c42bb8d016ec@gmail.com> References: <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local> <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> <1534945106-sup-4359@lrrr.local> <775949fc-a058-a076-06a5-c42bb8d016ec@gmail.com> Message-ID: <20180822182544.iuxhmrugmclc42wh@yuggoth.org> On 2018-08-22 11:03:43 -0700 (-0700), melanie witt wrote: [...] > I think it's about context. If two separate projects do their own priority > and goal setting, separately, I think they will naturally be more different > than they would be if they were one project. Currently, we agree on goals > and priorities together, in the compute context. If placement has its own > separate context, the priority setting and goal planning will be done in the > context of placement. In two separate groups, someone who is a member of > both the Nova and Placement teams would have to persuade Placement-only > members to agree to prioritize a particular item. This may sound subtle, but > it's a notable difference in how things work when it's one team vs two > separate teams. I think having shared context and alignment, at this point > in time, when we have outstanding closely coupled nova/placement work to do, > is critical in delivering for operators and users who are depending on us. [...] I'm clearly missing some critical detail about the relationships in the Nova team. Don't the Nova+Placement contributors already have to convince the Placement-only contributors what to prioritize working on? Or are you saying that if they disagree that's fine because the Nova+Placement contributors will get along just fine without the Placement-only contributors helping them get it done? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tenobreg at redhat.com Wed Aug 22 18:32:34 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Wed, 22 Aug 2018 15:32:34 -0300 Subject: [openstack-dev] [sahara] Anti-affinity Broke Message-ID: Hi all, We have an open bug on storyboard regarding anti-affinity on sahara. https://storyboard.openstack.org/#!/story/2002656 This was proposed by Joe Topjian and I have implemented the proposed fix. Unfortunetely we don't have resources to test it properly. Joe, if you can take a look and review https://review.openstack.org/#/c/587978/ . We need this reviewed and merged by tomorrow in order to have it in Rocky. Thanks -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Aug 22 18:46:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 22 Aug 2018 14:46:29 -0400 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: References: Message-ID: <1534963406-sup-6359@lrrr.local> Excerpts from Eric Fried's message of 2018-08-22 09:13:25 -0500: > For some time, nova has been using uuidsentinel [1] which conveniently > allows you to get a random UUID in a single LOC with a readable name > that's the same every time you reference it within that process (but not > across processes). Example usage: [2]. > > We would like other projects (notably the soon-to-be-split-out placement > project) to be able to use uuidsentinel without duplicating the code. So > we would like to stuff it in an oslo lib. > > The question is whether it should live in oslotest [3] or in > oslo_utils.uuidutils [4]. The proposed patches are (almost) the same. 
> The issues we've thought of so far: > > - If this thing is used only for test, oslotest makes sense. We haven't > thought of a non-test use, but somebody surely will. It also depends on whether we want it used that way. I think, given the fact that the data is not persistent or consistent across runs, I would rather have it as a test library only, and not part of the public production API of oslo.util (see below). > - Conversely, if we put it in oslo_utils, we're kinda saying we support > it for non-test too. (This is why the oslo_utils version does some extra > work for thread safety and collision avoidance.) That protection is necessary regardless of how it is going to be used. > - In oslotest, awkwardness is necessary to avoid circular importing: > uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In > oslo_utils.uuidutils, everything is right there. A third alternative is to create a test fixture which is exposed from oslo.utils under the test package. That clearly labels the code as a test tool, but avoids the circular import problem of placing it in oslotest. > - It's a... UUID util. If I didn't know anything and I was looking for a > UUID util like uuidsentinel, I would look in a module called uuidutils > first. > > We hereby solicit your opinions, either by further discussion here or as > votes on the respective patches. > > Thanks, > efried > > [1] > https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py > [2] > https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115 > [3] https://review.openstack.org/594068 > [4] https://review.openstack.org/594179 > From fungi at yuggoth.org Wed Aug 22 18:49:55 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 22 Aug 2018 18:49:55 +0000 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C181CFF@EX10MBOX03.pnnl.gov> References: <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local> <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> <1A3C52DFCD06494D8528644858247BF01C181C68@EX10MBOX03.pnnl.gov> <20180821231052.23jhawema2l5qkg3@yuggoth.org> <1A3C52DFCD06494D8528644858247BF01C181CFF@EX10MBOX03.pnnl.gov> Message-ID: <20180822184955.wzlatk5cfmbyysij@yuggoth.org> On 2018-08-22 00:17:41 +0000 (+0000), Fox, Kevin M wrote: > There have been plenty of cross project goals set forth from the > TC and implemented by the various projects such as wsgi or > python3. Those have been worked on by each of the projects in > priority to some project specific goals by devs interested in > bettering OpenStack. Why is it so hard to believe if the TC gave > out a request for a grander user/ops supporting feature, that the > community wouldn't step up? PTL's are supposed to be neutral to > vendor specific issues and work for the betterment of the Project. Those goals, cross-project by nature, necessarily involve people with domain-specific knowledge in the requisite projects. That is a lot different than expecting Cinder developers to switch gears and start working on Barbican instead just because the TC (or the UC, or the OSF BoD, or whoever) decrees key management is prioritized over multi-attach storage. Cross-project goal setting is already a strained process, in which we as a community spend a _lot_ of time and effort to determine what various project teams are even willing to work on and prioritize alongside the things they already get done. Asking them to work on something has absolutely not stopped them from wanting to work on other things instead. 
There are plenty of instances where the community (via its elected leadership) has attempted to set goals and some teams have chosen to work on other priorities of their own instead. If they could have directed all their contributors to focus on that it would have been completed, but they (all teams really) attempt to balance the priorities set by the OpenStack Technical Committee and other leadership with their own project-specific priorities. Just as the TC sinks a lot of effort into getting teams to focus on things it identifies as priorities, the PTLs encounter similar challenges getting their teams to focus on whatever priorities they've set as a group. Some contributors only work on what interests them, some only on what their employer tells them, and so on, while much of the rest struggle simply to keep up with the overall rate of change. > I don't buy the complexity argument either. Other non OpenStack > projects are implementing similar functionality with far less > complexity. I attribute a lot of that to difference in governence. > Through governence we've made hard things much harder. They can't > be fixed until the governence issues are fixed first I think. [...] Again, specifics would be nice. What decisions has the community made in governing itself which have contributed to the problems you see? What incremental changes would you make to improve that situation (hint: blow-it-all-up suggestions like "get rid of PTLs" aren't solutions when you're steering a community consisting of thousands of developers, we need steps to get from point A to point B)? In this _particular_ situation, what action are you asking the TC or other community leaders to take to resolve the problem (and what do you see as "the problem" in this case, for that matter)? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kennelson11 at gmail.com Wed Aug 22 19:19:40 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 22 Aug 2018 12:19:40 -0700 Subject: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein In-Reply-To: References: Message-ID: Hello Trinh, On Wed, Aug 22, 2018 at 1:57 AM Trinh Nguyen wrote: > Dear team, > > Here is my proposed action plan for Searchlight in Stein. The ultimate > goal is to revive Searchlight with a sustainable number of contributors and > can release as expected. > > 1. Migrate Searchlight to Storyboard with the help of Kendall > I will get Searchlight setup in our dev environment and run some test migrations today and let you know when they finish :) > 2. Attract more contributors (as well as cores) > 3. Clean up docs, notes > 4. Review and clean up patches [1] [2] [3] [4] > 5. Setting up goals/features for Stein. We will need to have a virtual PTG > (September 10-14, 2018, Denver) since I cannot attend it this time. > > This is our Etherpad for Stein, please feel free to contribute from now on > until the PTG: > https://review.openstack.org/#/q/project:openstack/searchlight+status:open > > [1] > https://review.openstack.org/#/q/project:openstack/searchlight+status:open > [2] > https://review.openstack.org/#/q/project:openstack/searchlight-ui+status:open > [3] > https://review.openstack.org/#/q/project:openstack/python-searchlightclient+status:open > [4] > https://review.openstack.org/#/q/project:openstack/searchlight-specs+status:open > > If you have any idea or want to contribute, please ping me on IRC: > > - IRC Channel: #openstack-searchlight > - My IRC handler: dangtrinhnt > > > Bests, > > *Trinh Nguyen *| Founder & Chief Architect > > > > *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * > > - Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark at stackhpc.com Wed Aug 22 20:59:51 2018 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 22 Aug 2018 21:59:51 +0100 Subject: [openstack-dev] [kayobe] Kayobe update In-Reply-To: References: Message-ID: On Wed, 22 Aug 2018, 19:08 Erik McCormick, wrote: > > > On Wed, Aug 22, 2018, 1:52 PM Mark Goddard wrote: > >> Hello Kayobians, >> >> I thought it is about time to do another update. >> > > > > >> # PTG >> >> There won't be an official Kayobe session at the PTG in Denver, although >> I and a few others from the team will be present. If anyone would like to >> meet to discuss Kayobe then don't be shy. Please get in touch either via >> email or IRC (mgoddard). >> > > Would you have any interest in doing an overview / Q&A session with > Operators Monday before lunch or sometime Tuesday? It doesn't need to be > anything fancy or formal as these are all fishbowl sessions. It might be a > good way to get some traction and feedback. > Absolutely, that's a great idea. I was hoping to attend the Scientific SIG session on Monday, but any time on Tuesday would work. > >> >> Cheers, >> Mark >> > > -Erik > > >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andy at andybotting.com Wed Aug 22 21:31:36 2018 From: andy at andybotting.com (Andy Botting) Date: Thu, 23 Aug 2018 07:31:36 +1000 Subject: [openstack-dev] [glance][horizon] Issues we found when using Community Images Message-ID: Hi all, We've recently moved to using Glance's community visibility on the Nectar Research Cloud. We had lots of public images (12255), and we found it was becoming slow to list them all and the community image visibility seems to fit our use-case nicely. 
We moved all of our users' images over to become community images, and left our 'official' images as the only public ones. We found a few issues, which I wanted to document, if anyone else is looking at doing the same thing. -> Glance API has no way of returning all images available to me in a single API request (https://bugs.launchpad.net/glance/+bug/1779251) The default list of images is perfect (all available to me, except community), but there's a heap of cases where you need to fetch all images including community. If we did have this, my next points would be a whole lot easier to solve. -> Horizon's support for Community images is very lacking ( https://bugs.launchpad.net/horizon/+bug/1779250) On the surface, it looks like Community images are supported in Horizon, but it's only as far as listing images in the Images tab. Trying to boot a Community image from the Launch Instance wizard is actually impossible, as community images don't appear in that list at all. The Images tab in Horizon dynamically rebuilds its list of images through new Glance API calls when you use any filters (good). In contrast, the source tab on the Launch Instance wizard loads all images at the start (slow with lots of images), then relies on JavaScript client-side filtering of the list. I've got a dirty patch to fix this for us by basically making two Glance API requests (one without specifying visibility, and another with visibility=community), then merging the data. This would be better handled the same way as the Images tab, with new Glance API requests when filtering. -> Users can't set their own images as Community from the dashboard Should be relatively easy to add this. I'm hoping to look into fixing this soon. -> Murano / Sahara image discovery These projects rely on images to be chosen when creating new environments, and it looks like they use a glance list for their discovery. 
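The two-request merge described above for Horizon (and which would apply to any image-discovery code with the same gap) can be sketched as follows. Only the merge logic is shown concretely; the fetch step is an illustrative assumption, since real code would go through keystoneauth or glanceclient rather than raw HTTP:

```python
# Sketch of the "two Glance API requests, then merge" workaround described
# above. Glance's v2 image list genuinely accepts a ?visibility=community
# filter; everything else here (sample data, function name) is illustrative.

def merge_image_lists(default_images, community_images):
    """Merge two Glance image listings, de-duplicating by image id
    and preserving order (default listing first)."""
    seen = set()
    merged = []
    for image in default_images + community_images:
        if image["id"] not in seen:
            seen.add(image["id"])
            merged.append(image)
    return merged

# The two requests themselves would look roughly like:
#   GET <glance>/v2/images                       -> default listing
#   GET <glance>/v2/images?visibility=community  -> community images

default = [{"id": "a", "visibility": "public"},
           {"id": "b", "visibility": "shared"}]
community = [{"id": "b", "visibility": "shared"},
             {"id": "c", "visibility": "community"}]
print([img["id"] for img in merge_image_lists(default, community)])
# -> ['a', 'b', 'c']
```

De-duplicating by id matters because some images (e.g. shared ones) can appear in both responses, as in the sample data above.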
They both suffer from the same issue and require their images to be non-community for them to find their images. -> Openstack Client didn't support listing community images at all ( https://storyboard.openstack.org/#!/story/2001925) It did support setting images to community, but support for actually listing them was missing. Support has now been added, but not sure if it's made it to a release yet. Apart from these issues, our migration was pretty successful with minimal user complaints. cheers, Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Aug 22 22:21:15 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 22 Aug 2018 15:21:15 -0700 Subject: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein In-Reply-To: References: Message-ID: It is done! I could only find two lp projects- searchlight and python-searchlight-client. If I missed any let me know and I can run the script on them. Otherwise you can view the results here[1]. Play around with it for a couple days and if it works for you we can migrate you whenever. We usually do migrations on Fridays to minimize impact on other work. For other info about the migration process, you can check that out here[2] or ask in #storyboard or email me directly :) -Kendall (diablo_rojo) [1] https://storyboard-dev.openstack.org/#!/project_group/61 [2] https://docs.openstack.org/infra/storyboard/migration.html On Wed, Aug 22, 2018 at 12:19 PM Kendall Nelson wrote: > Hello Trinh, > > > On Wed, Aug 22, 2018 at 1:57 AM Trinh Nguyen > wrote: > >> Dear team, >> >> Here is my proposed action plan for Searchlight in Stein. The ultimate >> goal is to revive Searchlight with a sustainable number of contributors and >> can release as expected. >> >> 1. 
Migrate Searchlight to Storyboard with the help of Kendall >> > > I will get Searchlight setup in our dev environment and run some test > migrations today and let you know when they finish :) > > >> 2. Attract more contributors (as well as cores) >> 3. Clean up docs, notes >> 4. Review and clean up patches [1] [2] [3] [4] >> 5. Setting up goals/features for Stein. We will need to have a virtual >> PTG (September 10-14, 2018, Denver) since I cannot attend it this time. >> >> This is our Etherpad for Stein, please feel free to contribute from now >> on until the PTG: >> https://review.openstack.org/#/q/project:openstack/searchlight+status:open >> >> [1] >> https://review.openstack.org/#/q/project:openstack/searchlight+status:open >> [2] >> https://review.openstack.org/#/q/project:openstack/searchlight-ui+status:open >> [3] >> https://review.openstack.org/#/q/project:openstack/python-searchlightclient+status:open >> [4] >> https://review.openstack.org/#/q/project:openstack/searchlight-specs+status:open >> >> If you have any idea or want to contribute, please ping me on IRC: >> >> - IRC Channel: #openstack-searchlight >> - My IRC handler: dangtrinhnt >> >> >> Bests, >> >> *Trinh Nguyen *| Founder & Chief Architect >> >> >> >> *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * >> >> > - Kendall (diablo_rojo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Wed Aug 22 22:46:12 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 22 Aug 2018 22:46:12 -0000 Subject: [openstack-dev] octavia-dashboard 2.0.0.0rc3 (rocky) Message-ID: Hello everyone, A new release candidate for octavia-dashboard for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/octavia-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. 
You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/octavia-dashboard/log/?h=stable/rocky Release notes for octavia-dashboard can be found at: https://docs.openstack.org/releasenotes/octavia-dashboard/ If you find an issue that could be considered release-critical, please file it at: https://storyboard.openstack.org/#!/project/909 and tag it *rocky-rc-potential* to bring it to the octavia-dashboard release crew's attention. From no-reply at openstack.org Wed Aug 22 22:52:52 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 22 Aug 2018 22:52:52 -0000 Subject: [openstack-dev] octavia 3.0.0.0rc3 (rocky) Message-ID: Hello everyone, A new release candidate for octavia for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/octavia/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/octavia/log/?h=stable/rocky Release notes for octavia can be found at: https://docs.openstack.org/releasenotes/octavia/ From kennelson11 at gmail.com Wed Aug 22 23:25:58 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 22 Aug 2018 16:25:58 -0700 Subject: [openstack-dev] [Freezer] Reactivate the team In-Reply-To: References: Message-ID: I finished the test migration. You can find the results here[1]. I only found two lp projects- freezer and freezer-web-ui. If I missed any, please let me know and I will run the migration script on them. Play around with it for a few days and let me know if you are interested in moving forward with the real migration. 
-Kendall (diablo_rojo) [1] https://storyboard-dev.openstack.org/#!/project_group/62 On Tue, Aug 21, 2018 at 11:30 AM Kendall Nelson wrote: > If you also wanted to add migrating from Launchpad to Storyboard to this > list I am happy to help do the test migration and coordinate the real > migration. > > -Kendall (diablo_rojo) > > On Fri, Aug 17, 2018 at 6:50 PM Trinh Nguyen > wrote: > >> Dear Freezer team, >> >> Since we have appointed a new PTL for the Stein cycle (gengchc2), I >> suggest that we should reactivate the team by following these actions: >> >> 1. Have a team meeting to formalize the new leader as well as discuss >> the new direction. >> 2. Grant PTL privileges for gengchc2 on Launchpad and Project Gerrit >> repositories. >> 3. Reorganize the core team to make sure we have enough active core >> reviewers for new patches. >> 4. Clean up bug reports, blueprints on Launchpad, as well as >> unreviewed patches on Gerrit. >> >> I hope that we can revive Freezer. >> >> Best regards, >> >> *Trinh Nguyen *| Founder & Chief Architect >> >> >> >> *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Aug 22 23:32:55 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 22 Aug 2018 19:32:55 -0400 Subject: [openstack-dev] [monasca][goal][python3] monasca's zuul migration is only partially complete Message-ID: <1534980736-sup-1102@lrrr.local> Monasca team, It looks like you have self-proposed some, but not all, of the patches to import the zuul settings into monasca repositories.
I found these:

+-----------------------------------------------------+--------------------------------+--------------+------------+-------------------------------------+---------------+
| Subject                                             | Repo                           | Tests        | Workflow   | URL                                 | Branch        |
+-----------------------------------------------------+--------------------------------+--------------+------------+-------------------------------------+---------------+
| Removed dependency on supervisor                    | openstack/monasca-agent        | VERIFIED     | MERGED     | https://review.openstack.org/554304 | master        |
| fix tox python3 overrides                           | openstack/monasca-agent        | VERIFIED     | MERGED     | https://review.openstack.org/574693 | master        |
| fix tox python3 overrides                           | openstack/monasca-api          | VERIFIED     | MERGED     | https://review.openstack.org/572970 | master        |
| import zuul job settings from project-config        | openstack/monasca-api          | VERIFIED     | MERGED     | https://review.openstack.org/590698 | stable/ocata  |
| import zuul job settings from project-config        | openstack/monasca-api          | VERIFIED     | MERGED     | https://review.openstack.org/590355 | stable/pike   |
| import zuul job settings from project-config        | openstack/monasca-api          | VERIFIED     | MERGED     | https://review.openstack.org/589928 | stable/queens |
| fix tox python3 overrides                           | openstack/monasca-common       | VERIFIED     | MERGED     | https://review.openstack.org/572910 | master        |
| ignore python2-specific code under python3 for pep8 | openstack/monasca-common       | VERIFIED     | MERGED     | https://review.openstack.org/573002 | master        |
| fix tox python3 overrides                           | openstack/monasca-log-api      | VERIFIED     | MERGED     | https://review.openstack.org/572971 | master        |
| replace use of 'unicode' builtin                    | openstack/monasca-log-api      | VERIFIED     | MERGED     | https://review.openstack.org/573015 | master        |
| fix tox python3 overrides                           | openstack/monasca-statsd       | VERIFIED     | MERGED     | https://review.openstack.org/572911 | master        |
| fix tox python3 overrides                           | openstack/python-monascaclient | VERIFIED     | MERGED     | https://review.openstack.org/573344 | master        |
| replace unicode with six.text_type                  | openstack/python-monascaclient | VERIFIED     | MERGED     | https://review.openstack.org/575212 | master        |
+-----------------------------------------------------+--------------------------------+--------------+------------+-------------------------------------+---------------+
|                                                     |                                | VERIFIED: 13 | MERGED: 13 |                                     |               |
+-----------------------------------------------------+--------------------------------+--------------+------------+-------------------------------------+---------------+

They do not include the monasca-events-api, monasca-specs, monasca-persister, monasca-tempest-plugin, monasca-thresh, monasca-ui, monasca-ceilometer, monasca-transform, monasca-analytics, monasca-grafana-datasource, and monasca-kibana-plugin repositories. It also looks like they don't include some necessary changes for some branches in some of the other repos, although I haven't checked if those branches actually exist so maybe they're fine. We also need a patch to project-config to remove the settings for all of the monasca team's repositories. I can generate the missing patches, but doing that now is likely to introduce some bad patches into the repositories that have had some work done, so you'll need to review everything carefully. In all, it looks like we're missing around 80+ patches, although some of the ones I have generated locally may be bogus because of the existing changes. I realize Witold is OOO for a while, so I'm emailing the list to ask the team how you want to proceed. Should I go ahead and propose the patches I have? Doug From mordred at inaugust.com Thu Aug 23 00:26:07 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 22 Aug 2018 19:26:07 -0500 Subject: [openstack-dev] [glance][horizon] Issues we found when using Community Images In-Reply-To: References: Message-ID: <3039cc8c-6697-bc08-83d4-e53e5e43d3b7@inaugust.com> On 08/22/2018 04:31 PM, Andy Botting wrote: > Hi all, > > We've recently moved to using Glance's community visibility on the > Nectar Research Cloud.
We had lots of public images (12255), and we > found it was becoming slow to list them all and the community image > visibility seems to fit our use-case nicely. > > We moved all of our user's images over to become community images, and > left our 'official' images as the only public ones. > > We found a few issues, which I wanted to document, if anyone else is > looking at doing the same thing. > > -> Glance API has no way of returning all images available to me in a > single API request (https://bugs.launchpad.net/glance/+bug/1779251) > The default list of images is perfect (all available to me, except > community), but there's a heap of cases where you need to fetch all > images including community. If we did have this, my next points would be > a whole lot easier to solve. > > -> Horizon's support for Community images is very lacking > (https://bugs.launchpad.net/horizon/+bug/1779250) > On the surface, it looks like Community images are supported in Horizon, > but it's only as far as listing images in the Images tab. Trying to boot > a Community image from the Launch Instance wizard is actually > impossible, as community images don't appear in that list at all. The > images tab in Horizon dynamically builds the list of images on the > Images tab through new Glance API calls when you use any filters (good). > In contrast, the source tab on the Launch Images wizard loads all images > at the start (slow with lots of images), then relies on javascript > client-side filtering of the list. I've got a dirty patch to fix this > for us by basically making two Glance API requests (one without > specifying visibility, and another with visibility=community), then > merging the data. This would be better handled the same way as the > Images tab, with new Glance API requests when filtering. > > -> Users can't set their own images as Community from the dashboard > Should be relatively easy to add this. I'm hoping to look into fixing > this soon. 
> > -> Murano / Sahara image discovery > These projects rely on images to be chosen when creating new > environments, and it looks like they use a glance list for their > discovery. They both suffer from the same issue and require their images > to be non-community for them to find their images. > > -> Openstack Client didn't support listing community images at all > (https://storyboard.openstack.org/#!/story/2001925 > ) > It did support setting images to community, but support for actually > listing them was missing. Support has now been added, but I'm not sure if > it's made it to a release yet. We've got a few more things I want to do related to images, sdk, openstackclient *and* horizon to make rollouts like this a bit better. I'm betting when I do that I should add murano, sahara and heat to the list. We're currently having to add the new support in like 5 places, which is where some of the holes come from. Hopefully we'll get stuff solid on that front soon - but thanks for the feedback! > Apart from these issues, our migration was pretty successful with > minimal user complaints. \o/ From mriedemos at gmail.com Thu Aug 23 01:23:41 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 22 Aug 2018 20:23:41 -0500 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration Message-ID: Hi everyone, I have started an etherpad for cells topics at the Stein PTG [1]. The main issue in there right now is dealing with cross-cell cold migration in nova. At a high level, I am going off these requirements: * Cells can shard across flavors (and hardware type) so operators would like to move users off the old flavors/hardware (old cell) to new flavors in a new cell. * There is network isolation between compute hosts in different cells, so no ssh'ing the disk around like we do today. But the image service is global to all cells.
Based on this, for the initial support for cross-cell cold migration, I am proposing that we leverage something like shelve offload/unshelve masquerading as resize. We shelve offload from the source cell and unshelve in the target cell. This should work for both volume-backed and non-volume-backed servers (we use snapshots for shelved offloaded non-volume-backed servers). There are, of course, some complications. The main ones that I need help with right now are what happens with volumes and ports attached to the server. Today we detach from the source and attach at the target, but that's assuming the storage backend and network are available to both hosts involved in the move of the server. Will that be the case across cells? I am assuming that depends on the network topology (are routed networks being used?) and storage backend (routed storage?). If the network and/or storage backend are not available across cells, how do we migrate volumes and ports? Cinder has a volume migrate API for admins but I do not know how nova would know the proper affinity per-cell to migrate the volume to the proper host (cinder does not have a routed storage concept like routed provider networks in neutron, correct?). And as far as I know, there is no such thing as port migration in Neutron. Could Placement help with the volume/port migration stuff? Neutron routed provider networks rely on placement aggregates to schedule the VM to a compute host in the same network segment as the port used to create the VM, however, if that segment does not span cells we are kind of stuck, correct? To summarize the issues as I see them (today): * How to deal with the targeted cell during scheduling? This is so we can even get out of the source cell in nova. * How does the API deal with the same instance being in two DBs at the same time during the move? * How to handle revert resize? * How are volumes and ports handled? 
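For readers following along, the shelve-offload/unshelve flow proposed above can be sketched as a toy model. Everything below (the Cell class, the GLANCE dict, the function names) is a hypothetical illustration of the idea, not nova code: the source cell snapshots the instance to the globally visible image service and releases its resources, and the target cell spawns from that snapshot.

```python
from dataclasses import dataclass, field
from typing import Dict

# Global image service, visible to all cells (per the stated requirement
# that glance spans cells even though compute hosts are isolated).
GLANCE: Dict[str, bytes] = {}


@dataclass
class Cell:
    """Hypothetical stand-in for a nova cell's instance records."""
    name: str
    instances: Dict[str, str] = field(default_factory=dict)  # uuid -> state


def shelve_offload(cell: Cell, uuid: str) -> str:
    """Snapshot the instance to the image service and free the source host."""
    assert cell.instances[uuid] == "ACTIVE"
    image_id = f"snap-{uuid}"
    GLANCE[image_id] = b"disk-contents"  # snapshot upload
    del cell.instances[uuid]             # resources released in the source cell
    return image_id


def unshelve(cell: Cell, uuid: str, image_id: str) -> None:
    """Spawn the instance in the target cell from the shared snapshot."""
    assert image_id in GLANCE  # works only because glance spans cells
    cell.instances[uuid] = "ACTIVE"


def cross_cell_cold_migrate(src: Cell, dst: Cell, uuid: str) -> None:
    """Shelve offload in the source cell, then unshelve in the target cell."""
    unshelve(dst, uuid, shelve_offload(src, uuid))
```

The sketch deliberately ignores the hard parts Matt lists (scheduling into the target cell, the instance existing in two cell DBs mid-move, revert, and volume/port migration); it only shows why the image service being global makes the data path work without ssh'ing disks between cells.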
I can get feedback from my company's operators based on what their deployment will look like for this, but that does not mean it will work for others, so I need as much feedback from operators, especially those running with multiple cells today, as possible. Thanks in advance. [1] https://etherpad.openstack.org/p/nova-ptg-stein-cells -- Thanks, Matt From dangtrinhnt at gmail.com Thu Aug 23 01:28:59 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 23 Aug 2018 10:28:59 +0900 Subject: [openstack-dev] [Searchlight] Action plan for Searchlight in Stein In-Reply-To: References: Message-ID: Hi Kendall, Thanks much for the help. :) Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Thu, Aug 23, 2018 at 7:21 AM Kendall Nelson wrote: > It is done! I could only find two lp projects- searchlight and > python-searchlight-client. If I missed any let me know and I can run the > script on them. Otherwise you can view the results here[1]. > > Play around with it for a couple days and if it works for you we can > migrate you whenever. We usually do migrations on Fridays to minimize > impact on other work. > > For other info about the migration process, you can check that out here[2] > or ask in #storyboard or email me directly :) > > -Kendall (diablo_rojo) > > [1] https://storyboard-dev.openstack.org/#!/project_group/61 > [2] https://docs.openstack.org/infra/storyboard/migration.html > > On Wed, Aug 22, 2018 at 12:19 PM Kendall Nelson > wrote: > >> Hello Trinh, >> >> >> On Wed, Aug 22, 2018 at 1:57 AM Trinh Nguyen >> wrote: >> >>> Dear team, >>> >>> Here is my proposed action plan for Searchlight in Stein. The ultimate >>> goal is to revive Searchlight with a sustainable number of contributors and >>> can release as expected. >>> >>> 1. 
Migrate Searchlight to Storyboard with the help of Kendall >>> >> >> I will get Searchlight setup in our dev environment and run some test >> migrations today and let you know when they finish :) >> >> >>> 2. Attract more contributors (as well as cores) >>> 3. Clean up docs, notes >>> 4. Review and clean up patches [1] [2] [3] [4] >>> 5. Setting up goals/features for Stein. We will need to have a virtual >>> PTG (September 10-14, 2018, Denver) since I cannot attend it this time. >>> >>> This is our Etherpad for Stein, please feel free to contribute from now >>> on until the PTG: >>> https://review.openstack.org/#/q/project:openstack/searchlight+status:open >>> >>> [1] >>> https://review.openstack.org/#/q/project:openstack/searchlight+status:open >>> [2] >>> https://review.openstack.org/#/q/project:openstack/searchlight-ui+status:open >>> [3] >>> https://review.openstack.org/#/q/project:openstack/python-searchlightclient+status:open >>> [4] >>> https://review.openstack.org/#/q/project:openstack/searchlight-specs+status:open >>> >>> If you have any idea or want to contribute, please ping me on IRC: >>> >>> - IRC Channel: #openstack-searchlight >>> - My IRC handler: dangtrinhnt >>> >>> >>> Bests, >>> >>> *Trinh Nguyen *| Founder & Chief Architect >>> >>> >>> >>> *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz >>> * >>> >>> >> - Kendall (diablo_rojo) >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Thu Aug 23 01:41:20 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 23 Aug 2018 10:41:20 +0900 Subject: [openstack-dev] [Freezer] Reactivate the team In-Reply-To: References: Message-ID: Hi Kendall, I hope gengchc2 will have a decision on this since he is the new PTL of Freezer for Stein. Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Thu, Aug 23, 2018 at 8:26 AM Kendall Nelson wrote: > I finished the test migration. 
You can find the results here[1]. I only > found two lp projects- freezer and freezer-web-ui. If I missed any, please > let me know and I will run the migration script on them. > > Play around with it for a few days and let me know if you are interested > in moving forward with the real migration. > > -Kendall (diablo_rojo) > > [1] https://storyboard-dev.openstack.org/#!/project_group/62 > > On Tue, Aug 21, 2018 at 11:30 AM Kendall Nelson > wrote: > >> If you also wanted to add migrating from Launchpad to Storyboard to this >> list I am happy to help do the test migration and coordinate the real >> migration. >> >> -Kendall (diablo_rojo) >> >> On Fri, Aug 17, 2018 at 6:50 PM Trinh Nguyen >> wrote: >> >>> Dear Freezer team, >>> >>> Since we have appointed a new PTL for the Stein cycle (gengchc2), I >>> suggest that we should reactivate the team by following these actions: >>> >>> 1. Have a team meeting to formalize the new leader as well as >>> discuss the new direction. >>> 2. Grant PTL privileges for gengchc2 on Launchpad and Project Gerrit >>> repositories. >>> 3. Reorganize the core team to make sure we have enough active core >>> reviewers for new patches. >>> 4. Clean up bug reports, blueprints on Launchpad, as well as >>> unreviewed patches on Gerrit.
>>> >>> Best regards, >>> >>> *Trinh Nguyen *| Founder & Chief Architect >>> >>> >>> >>> *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz >>> * >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Thu Aug 23 02:14:28 2018 From: sorrison at gmail.com (Sam Morrison) Date: Thu, 23 Aug 2018 12:14:28 +1000 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: Message-ID: I think in our case we’d only migrate between cells if we know the network and storage is accessible and would never do it if not. Thinking moving from old to new hardware at a cell level. If storage and network isn’t available ideally it would fail at the api request. There is also ceph backed instances and so this is also something to take into account which nova would be responsible for. I’ll be in Denver so we can discuss more there too. Cheers, Sam > On 23 Aug 2018, at 11:23 am, Matt Riedemann wrote: > > Hi everyone, > > I have started an etherpad for cells topics at the Stein PTG [1]. The main issue in there right now is dealing with cross-cell cold migration in nova. > > At a high level, I am going off these requirements: > > * Cells can shard across flavors (and hardware type) so operators would like to move users off the old flavors/hardware (old cell) to new flavors in a new cell. 
> > * There is network isolation between compute hosts in different cells, so no ssh'ing the disk around like we do today. But the image service is global to all cells. > > Based on this, for the initial support for cross-cell cold migration, I am proposing that we leverage something like shelve offload/unshelve masquerading as resize. We shelve offload from the source cell and unshelve in the target cell. This should work for both volume-backed and non-volume-backed servers (we use snapshots for shelved offloaded non-volume-backed servers). > > There are, of course, some complications. The main ones that I need help with right now are what happens with volumes and ports attached to the server. Today we detach from the source and attach at the target, but that's assuming the storage backend and network are available to both hosts involved in the move of the server. Will that be the case across cells? I am assuming that depends on the network topology (are routed networks being used?) and storage backend (routed storage?). If the network and/or storage backend are not available across cells, how do we migrate volumes and ports? Cinder has a volume migrate API for admins but I do not know how nova would know the proper affinity per-cell to migrate the volume to the proper host (cinder does not have a routed storage concept like routed provider networks in neutron, correct?). And as far as I know, there is no such thing as port migration in Neutron. > > Could Placement help with the volume/port migration stuff? Neutron routed provider networks rely on placement aggregates to schedule the VM to a compute host in the same network segment as the port used to create the VM, however, if that segment does not span cells we are kind of stuck, correct? > > To summarize the issues as I see them (today): > > * How to deal with the targeted cell during scheduling? This is so we can even get out of the source cell in nova. 
> > * How does the API deal with the same instance being in two DBs at the same time during the move? > > * How to handle revert resize? > > * How are volumes and ports handled? > > I can get feedback from my company's operators based on what their deployment will look like for this, but that does not mean it will work for others, so I need as much feedback from operators, especially those running with multiple cells today, as possible. Thanks in advance. > > [1] https://etherpad.openstack.org/p/nova-ptg-stein-cells > > -- > > Thanks, > > Matt From alee at redhat.com Thu Aug 23 03:06:36 2018 From: alee at redhat.com (Ade Lee) Date: Wed, 22 Aug 2018 23:06:36 -0400 Subject: [openstack-dev] [barbican][oslo][release][requirements] FFE request for castellan In-Reply-To: <20180821191655.xw37baq4q6ikfqts@gentoo.org> References: <1533914109.23178.37.camel@redhat.com> <20180814185634.GA26658@sm-workstation> <1534352313.5705.35.camel@redhat.com> <8f8add49-cb63-3452-cc7c-c812bfab0877@nemebean.com> <20180821191655.xw37baq4q6ikfqts@gentoo.org> Message-ID: <1534993596.21877.71.camel@redhat.com> Thanks guys, Sorry - it was not clear to me if I was supposed to do anything further. It seems like the requirements team has approved the FFE and the release has merged. Is there anything further I need to do? Thanks, Ade On Tue, 2018-08-21 at 14:16 -0500, Matthew Thode wrote: > On 18-08-21 14:00:41, Ben Nemec wrote: > > Because castellan is in global-requirements, we need an FFE from > > requirements too. Can someone from the requirements team respond > > to the > > review? Thanks. > > > > On 08/16/2018 04:34 PM, Ben Nemec wrote: > > > The backport has merged and I've proposed the release here: > > > https://review.openstack.org/592746 > > > > > > On 08/15/2018 11:58 AM, Ade Lee wrote: > > > > Done. 
> > > > > > > > https://review.openstack.org/#/c/592154/ > > > > > > > > Thanks, > > > > Ade > > > > > > > > On Wed, 2018-08-15 at 09:20 -0500, Ben Nemec wrote: > > > > > > > > > > On 08/14/2018 01:56 PM, Sean McGinnis wrote: > > > > > > > On 08/10/2018 10:15 AM, Ade Lee wrote: > > > > > > > > Hi all, > > > > > > > > > > > > > > > > I'd like to request a feature freeze exception to get > > > > > > > > the > > > > > > > > following > > > > > > > > change in for castellan. > > > > > > > > > > > > > > > > https://review.openstack.org/#/c/575800/ > > > > > > > > > > > > > > > > This extends the functionality of the vault backend to > > > > > > > > provide > > > > > > > > previously uninmplemented functionality, so it should > > > > > > > > not break > > > > > > > > anyone. > > > > > > > > > > > > > > > > The castellan vault plugin is used behind barbican in > > > > > > > > the > > > > > > > > barbican- > > > > > > > > vault plugin. We'd like to get this change into Rocky > > > > > > > > so that > > > > > > > > we can > > > > > > > > release Barbican with complete functionality on this > > > > > > > > backend > > > > > > > > (along > > > > > > > > with a complete set of passing functional tests). > > > > > > > > > > > > > > This does seem fairly low risk since it's just > > > > > > > implementing a > > > > > > > function that > > > > > > > previously raised a NotImplemented exception. However, > > > > > > > with it > > > > > > > being so > > > > > > > late in the cycle I think we need the release team's > > > > > > > input on > > > > > > > whether this > > > > > > > is possible. Most of the release FFE's I've seen have > > > > > > > been for > > > > > > > critical > > > > > > > bugs, not actual new features. I've added that tag to > > > > > > > this > > > > > > > thread so > > > > > > > hopefully they can weigh in. > > > > > > > > > > > > > > > > > > > As far as releases go, this should be fine. 
If this doesn't > > > > > > affect > > > > > > any other > > > > > > projects and would just be a late merging feature, as long > > > > > > as the > > > > > > castellan > > > > > > team has considered the risk of adding code so late and is > > > > > > comfortable with > > > > > > that, this is OK. > > > > > > > > > > > > Castellan follows the cycle-with-intermediary release > > > > > > model, so the > > > > > > final Rocky > > > > > > release just needs to be done by next Thursday. I do see > > > > > > the > > > > > > stable/rocky > > > > > > branch has already been created for this repo, so it would > > > > > > need to > > > > > > merge to > > > > > > master first (technically stein), then get cherry-picked to > > > > > > stable/rocky. > > > > > > > > > > Okay, sounds good. It's already merged to master so we're > > > > > good > > > > > there. > > > > > > > > > > Ade, can you get the backport proposed? > > > > > > > I've approved it for a UC only bump > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Thu Aug 23 03:24:06 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 22 Aug 2018 23:24:06 -0400 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: References: Message-ID: On Wed, Aug 22, 2018, 10:13 AM Eric Fried wrote: > For some time, nova has been using uuidsentinel [1] which conveniently > allows you to get a random UUID in a single LOC with a readable name > that's the same every time you reference it within that process (but not > across processes). Example usage: [2]. > > We would like other projects (notably the soon-to-be-split-out placement > project) to be able to use uuidsentinel without duplicating the code. So > we would like to stuff it in an oslo lib. 
> > The question is whether it should live in oslotest [3] or in > oslo_utils.uuidutils [4]. The proposed patches are (almost) the same. > The issues we've thought of so far: > > - If this thing is used only for test, oslotest makes sense. We haven't > thought of a non-test use, but somebody surely will. > - Conversely, if we put it in oslo_utils, we're kinda saying we support > it for non-test too. (This is why the oslo_utils version does some extra > work for thread safety and collision avoidance.) > - In oslotest, awkwardness is necessary to avoid circular importing: > uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In > oslo_utils.uuidutils, everything is right there. > My preference is to put it in oslotest. Why does oslo_utils.uuidutils import oslotest? That makes zero sense to me... -jay - It's a... UUID util. If I didn't know anything and I was looking for a > UUID util like uuidsentinel, I would look in a module called uuidutils > first. > > We hereby solicit your opinions, either by further discussion here or as > votes on the respective patches. > > Thanks, > efried > > [1] > > https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py > [2] > > https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115 > [3] https://review.openstack.org/594068 > [4] https://review.openstack.org/594179 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
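For readers unfamiliar with the pattern being discussed, a minimal sketch of uuidsentinel's described behavior (illustrative only — not the nova module nor either of the proposed oslo patches, and without the thread-safety work mentioned for the oslo_utils variant) looks like this:

```python
import uuid


class UUIDSentinels:
    """Return a stable, per-process random UUID for any attribute name.

    First access of sentinel.<name> generates a random UUID; every later
    access of the same name in the same process returns the same value.
    Different processes see different values, as described in the thread.
    """

    def __init__(self):
        self._cache = {}

    def __getattr__(self, name):
        if name.startswith("_"):
            # Avoid surprising interactions with copy/pickle internals.
            raise AttributeError(name)
        # Generate on first use, then always return the cached UUID string.
        return self._cache.setdefault(name, str(uuid.uuid4()))


sentinel = UUIDSentinels()
```

A usage example mirroring the single-LOC style quoted above: `server["id"] = sentinel.instance_uuid` in one test helper and `assert server["id"] == sentinel.instance_uuid` in another, with no fixture plumbing between them.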
URL: From akalambu at cisco.com Thu Aug 23 05:34:40 2018 From: akalambu at cisco.com (Ajay Kalambur (akalambu)) Date: Thu, 23 Aug 2018 05:34:40 +0000 Subject: [openstack-dev] [neutron][ovs] OVS drop issue Message-ID: <55C0684E-4466-4AD5-B876-48AAFD9A9BBE@cisco.com> Hi We are seeing a very weird issue with OVS 2.9 which is not seen in OVS 2.6. Basically the end symptom is that from the neutron L3 agent router namespace we can't ping the gateway, in this case 10.86.67.1. Performing tcpdump tests, we see the gateway responding to ARP; the reply comes into the control/network node and is dropped by OVS on the qg- interface. We observed that when the ARP reply came back to the OVS port, it added the following entry recirc_id(0),in_port(2),eth(src=f0:25:72:ab:d4:c1,dst=fa:16:3e:65:85:ad),eth_type(0x8100),vlan(vid=0),encap(eth_type(0x0806)), packets:217, bytes:13888, used:0.329s, actions:drop That drop rule states: src mac = gateway mac (f0:25:72:ab:d4:c1), destination mac = qg-xxx interface, drop the packet. At first we were not sure why this was happening, but when we inspected the ARP response packet from the gateway we noticed that in this setup the packet from the gateway was sent with the cos/tos bit set to priority 5. When we rewrote the packet on the TOR to set cos/tos priority to 0 it worked fine. The question is why OVS 2.9 adds a drop rule when it sees an ARP response with cos/tos priority set to 5.
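For context on the dumped flow: eth_type(0x8100) together with vlan(vid=0) means the ARP reply arrives as a priority-tagged frame — an 802.1Q tag whose PCP bits carry the CoS priority but whose VLAN ID is 0, so the datapath keys on the tag even though the frame belongs to no VLAN. A small illustrative sketch (plain Python, not OVS code) of how that tag is laid out:

```python
import struct


def vlan_tci(pcp: int, dei: int, vid: int) -> int:
    """Build an 802.1Q Tag Control Information field.

    pcp: 3-bit priority code point (the CoS bits), dei: 1-bit drop
    eligible indicator, vid: 12-bit VLAN ID.  vid=0 is the
    'priority-tagged' case: the frame carries a priority but no VLAN
    membership, which is what the vlan(vid=0) match above is keying on.
    """
    assert 0 <= pcp < 8 and dei in (0, 1) and 0 <= vid < 4096
    return (pcp << 13) | (dei << 12) | vid


def tag_header(pcp: int, vid: int) -> bytes:
    """TPID 0x8100 followed by the 16-bit TCI, network byte order."""
    return struct.pack("!HH", 0x8100, vlan_tci(pcp, 0, vid))
```

So a CoS-5 priority-tagged frame carries TCI 0xA000 after the 0x8100 TPID, whereas an untagged ARP reply (what the TOR rewrite effectively produced) would have no 0x8100 tag at all and be matched by a different datapath flow.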
Has anyone seen this before 2.6 with newton works for this use case and 2.9 with queens fails Some info below L3 Namespace ip netns exec qrouter-eee032f4-670f-4e43-8e83-e04cb23f00ae ip addr 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 46: ha-df4ad8aa-18: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether fa:16:3e:eb:0c:21 brd ff:ff:ff:ff:ff:ff inet 169.254.192.5/18 brd 169.254.255.255 scope global ha-df4ad8aa-18 valid_lft forever preferred_lft forever inet 169.254.0.1/24 scope global ha-df4ad8aa-18 valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:feeb:c21/64 scope link valid_lft forever preferred_lft forever 47: qg-e5541f70-a5: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether fa:16:3e:65:85:ad brd ff:ff:ff:ff:ff:ff inet 10.86.67.78/24 scope global qg-e5541f70-a5 valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fe65:85ad/64 scope link nodad valid_lft forever preferred_lft forever Appctl Trace output ovs_vswitch_15520 [root at BXB-AIO-1 /]# ovs-appctl ofproto/trace br-int in_port=2,dl_src=f0:25:72:ab:d4:c1,dl_dst=fa:16:3e:65:85:ad Flow: in_port=2,vlan_tci=0x0000,dl_src=f0:25:72:ab:d4:c1,dl_dst=fa:16:3e:65:85:ad,dl_type=0x0000 bridge("br-int") ---------------- 0. in_port=2,vlan_tci=0x0000/0x1fff, priority 3, cookie 0x21589cb48848e7fb push_vlan:0x8100 set_field:4102->vlan_vid goto_table:60 60. priority 3, cookie 0x21589cb48848e7fb NORMAL -> no learned MAC for destination, flooding bridge("br-prov") ----------------- 0. in_port=2, priority 2, cookie 0x7faae4a30960716f drop bridge("br-inst") ----------------- 0. 
in_port=2, priority 2, cookie 0x8b9b0311aedc1b0c drop Final flow: in_port=2,dl_vlan=6,dl_vlan_pcp=0,vlan_tci1=0x0000,dl_src=f0:25:72:ab:d4:c1,dl_dst=fa:16:3e:65:85:ad,dl_type=0x0000 Megaflow: recirc_id=0,eth,in_port=2,vlan_tci=0x0000/0x1fff,dl_src=f0:25:72:ab:d4:c1,dl_dst=fa:16:3e:65:85:ad,dl_type=0x0000 Datapath actions: 9,push_vlan(vid=6,pcp=0),7 ovs_vswitch_15520 [root at BXB-AIO-1 /]# ovs-dpctl dump-flows ovs_vswitch_15520 [root at BXB-AIO-1 /]# ovs-dpctl dump-flows | grep f0:25:72:ab:d4:c1 recirc_id(0),in_port(2),eth(src=f0:25:72:ab:d4:c1,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=0),encap(eth_type(0x0806),arp(sip=10.86.67.1,tip=10.86.67.77,op=1/0xff)), packets:3, bytes:192, used:8.199s, actions:1 recirc_id(0),in_port(2),eth(src=f0:25:72:ab:d4:c1,dst=fa:16:3e:65:85:ad),eth_type(0x8100),vlan(vid=0),encap(eth_type(0x0806)), packets:217, bytes:13888, used:0.329s, actions:drop recirc_id(0),in_port(2),eth(src=f0:25:72:ab:d4:c1,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=0),encap(eth_type(0x0806),arp(sip=10.86.67.1,tip=10.86.67.45,op=1/0xff)), packets:3, bytes:192, used:9.778s, actions:1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Thu Aug 23 06:32:37 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 23 Aug 2018 01:32:37 -0500 Subject: [openstack-dev] [barbican][oslo][release][requirements] FFE request for castellan In-Reply-To: <1534993596.21877.71.camel@redhat.com> References: <1533914109.23178.37.camel@redhat.com> <20180814185634.GA26658@sm-workstation> <1534352313.5705.35.camel@redhat.com> <8f8add49-cb63-3452-cc7c-c812bfab0877@nemebean.com> <20180821191655.xw37baq4q6ikfqts@gentoo.org> <1534993596.21877.71.camel@redhat.com> Message-ID: <20180823063237.7bq3362aaefae4pa@gentoo.org> On 18-08-22 23:06:36, Ade Lee wrote: > Thanks guys, > > Sorry - it was not clear to me if I was supposed to do anything > further. 
It seems like the requirements team has approved the FFE and > the release has merged. Is there anything further I need to do? > > Thanks, > Ade > > On Tue, 2018-08-21 at 14:16 -0500, Matthew Thode wrote: > > On 18-08-21 14:00:41, Ben Nemec wrote: > > > Because castellan is in global-requirements, we need an FFE from > > > requirements too. Can someone from the requirements team respond > > > to the > > > review? Thanks. > > > > > > On 08/16/2018 04:34 PM, Ben Nemec wrote: > > > > The backport has merged and I've proposed the release here: > > > > https://review.openstack.org/592746 > > > > > > > > On 08/15/2018 11:58 AM, Ade Lee wrote: > > > > > Done. > > > > > > > > > > https://review.openstack.org/#/c/592154/ > > > > > > > > > > Thanks, > > > > > Ade > > > > > > > > > > On Wed, 2018-08-15 at 09:20 -0500, Ben Nemec wrote: > > > > > > > > > > > > On 08/14/2018 01:56 PM, Sean McGinnis wrote: > > > > > > > > On 08/10/2018 10:15 AM, Ade Lee wrote: > > > > > > > > > Hi all, > > > > > > > > > > > > > > > > > > I'd like to request a feature freeze exception to get > > > > > > > > > the > > > > > > > > > following > > > > > > > > > change in for castellan. > > > > > > > > > > > > > > > > > > https://review.openstack.org/#/c/575800/ > > > > > > > > > > > > > > > > > > This extends the functionality of the vault backend to > > > > > > > > > provide > > > > > > > > > previously unimplemented functionality, so it should > > > > > > > > > not break > > > > > > > > > anyone. > > > > > > > > > > > > > > > > > > The castellan vault plugin is used behind barbican in > > > > > > > > > the > > > > > > > > > barbican- > > > > > > > > > vault plugin. We'd like to get this change into Rocky > > > > > > > > > so that > > > > > > > > > we can > > > > > > > > > release Barbican with complete functionality on this > > > > > > > > > backend > > > > > > > > > (along > > > > > > > > > with a complete set of passing functional tests).
> > > > > > > > > > > > > > > > This does seem fairly low risk since it's just > > > > > > > > implementing a > > > > > > > > function that > > > > > > > > previously raised a NotImplemented exception. However, > > > > > > > > with it > > > > > > > > being so > > > > > > > > late in the cycle I think we need the release team's > > > > > > > > input on > > > > > > > > whether this > > > > > > > > is possible. Most of the release FFE's I've seen have > > > > > > > > been for > > > > > > > > critical > > > > > > > > bugs, not actual new features. I've added that tag to > > > > > > > > this > > > > > > > > thread so > > > > > > > > hopefully they can weigh in. > > > > > > > > > > > > > > > > > > > > > > As far as releases go, this should be fine. If this doesn't > > > > > > > affect > > > > > > > any other > > > > > > > projects and would just be a late merging feature, as long > > > > > > > as the > > > > > > > castellan > > > > > > > team has considered the risk of adding code so late and is > > > > > > > comfortable with > > > > > > > that, this is OK. > > > > > > > > > > > > > > Castellan follows the cycle-with-intermediary release > > > > > > > model, so the > > > > > > > final Rocky > > > > > > > release just needs to be done by next Thursday. I do see > > > > > > > the > > > > > > > stable/rocky > > > > > > > branch has already been created for this repo, so it would > > > > > > > need to > > > > > > > merge to > > > > > > > master first (technically stein), then get cherry-picked to > > > > > > > stable/rocky. > > > > > > > > > > > > Okay, sounds good. It's already merged to master so we're > > > > > > good > > > > > > there. > > > > > > > > > > > > Ade, can you get the backport proposed? > > > > > > > > > > I've approved it for a UC only bump > > We are still waiting on https://review.openstack.org/594541 to merge, but I already voted and noted that it was FFE approved. 
-- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From sundar.nadathur at intel.com Thu Aug 23 06:39:56 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 22 Aug 2018 23:39:56 -0700 Subject: [openstack-dev] [Cyborg] Zoom URL for Aug 29 meeting Message-ID: <45034d8e-22fe-6b6f-2542-1e53cd7b5b86@intel.com> For the August 29 weekly meeting [1], the main agenda is the discussion of Cyborg device/data models. We will use this meeting invite to present slides: Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/189707867 Or iPhone one-tap: US: +16465588665,,189707867# or +14086380986,,189707867# Or Telephone: Dial (for higher quality, dial a number based on your current location): US: +1 646 558 8665 or +1 408 638 0986 Meeting ID: 189 707 867 International numbers available: https://zoom.us/u/dnYoZcYYJ [1] https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Thu Aug 23 07:13:27 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Thu, 23 Aug 2018 09:13:27 +0200 Subject: [openstack-dev] [magnum] [magnum-ui] show certificate button bug requesting reviews Message-ID: <76cdfd58-fd34-fd2e-82a9-825b37108298@binero.se> Hello, Requesting reviews from the magnum-ui core team for https://review.openstack.org/#/c/595245/ I'm hoping that we could make quick work of this and be able to backport it to the stable/rocky release; it would be ideal to backport it for stable/queens as well.
Best regards Tobias From thierry at openstack.org Thu Aug 23 08:21:41 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 23 Aug 2018 10:21:41 +0200 Subject: [openstack-dev] Redis licensing terms changes In-Reply-To: <5B7D751D.5010800@openstack.org> References: <9182e211-b26e-ec0e-4b08-9bc53e0c82eb@openstack.org> <5B7D751D.5010800@openstack.org> Message-ID: <9475e530-2418-a415-e539-743bfacbfa77@openstack.org> Jimmy McArthur wrote: > Hmm... > > http://antirez.com/news/120 > > Today a page about the new Creative Common license in the Redis Labs web > site was interpreted as if Redis itself switched license. This is not > the case, Redis is, and will remain, BSD licensed. However in the fake > news era my attempts to provide the correct information failed, and I’m > still seeing everywhere “Redis is no longer open source”. The reality is > that Redis remains BSD, and actually Redis Labs did the right thing > supporting my effort to keep the Redis core open as usually. > > What is happening instead is that certain Redis modules, developed > inside Redis Labs, are now released under the Common Clause (using > Apache license as a base license). This means that basically certain > enterprise add-ons, instead of being completely closed source as they > could be, will be available with a more permissive license. Right, they switched to an open core model, with "enterprise" features moving from open source (AGPL) to proprietary (the so-called Commons clause). So we need to evaluate our use of Redis since: 1/ We generally prefer our default drivers to use truly open source backends (not open core nor proprietary) 2/ I have no idea how usable Redis core is in our use case without the now-proprietary modules (or how usable Redis core will stay in the future now that Redis labs has an incentive to land any "serious" features in the proprietary modules rather than in core). 
-- Thierry Carrez (ttx) From doug at stackhpc.com Thu Aug 23 08:53:35 2018 From: doug at stackhpc.com (Doug Szumski) Date: Thu, 23 Aug 2018 09:53:35 +0100 Subject: [openstack-dev] [monasca][goal][python3] monasca's zuul migration is only partially complete In-Reply-To: <1534980736-sup-1102@lrrr.local> References: <1534980736-sup-1102@lrrr.local> Message-ID: Reply in-line. On 23/08/18 00:32, Doug Hellmann wrote: > Monasca team, > > It looks like you have self-proposed some, but not all, of the > patches to import the zuul settings into monasca repositories. > > I found these: > > +-----------------------------------------------------+--------------------------------+--------------+------------+-------------------------------------+---------------+ > | Subject | Repo | Tests | Workflow | URL | Branch | > +-----------------------------------------------------+--------------------------------+--------------+------------+-------------------------------------+---------------+ > | Removed dependency on supervisor | openstack/monasca-agent | VERIFIED | MERGED | https://review.openstack.org/554304 | master | > | fix tox python3 overrides | openstack/monasca-agent | VERIFIED | MERGED | https://review.openstack.org/574693 | master | > | fix tox python3 overrides | openstack/monasca-api | VERIFIED | MERGED | https://review.openstack.org/572970 | master | > | import zuul job settings from project-config | openstack/monasca-api | VERIFIED | MERGED | https://review.openstack.org/590698 | stable/ocata | > | import zuul job settings from project-config | openstack/monasca-api | VERIFIED | MERGED | https://review.openstack.org/590355 | stable/pike | > | import zuul job settings from project-config | openstack/monasca-api | VERIFIED | MERGED | https://review.openstack.org/589928 | stable/queens | > | fix tox python3 overrides | openstack/monasca-common | VERIFIED | MERGED | https://review.openstack.org/572910 | master | > | ignore python2-specific code under python3 for pep8 | 
openstack/monasca-common | VERIFIED | MERGED | https://review.openstack.org/573002 | master | > | fix tox python3 overrides | openstack/monasca-log-api | VERIFIED | MERGED | https://review.openstack.org/572971 | master | > | replace use of 'unicode' builtin | openstack/monasca-log-api | VERIFIED | MERGED | https://review.openstack.org/573015 | master | > | fix tox python3 overrides | openstack/monasca-statsd | VERIFIED | MERGED | https://review.openstack.org/572911 | master | > | fix tox python3 overrides | openstack/python-monascaclient | VERIFIED | MERGED | https://review.openstack.org/573344 | master | > | replace unicode with six.text_type | openstack/python-monascaclient | VERIFIED | MERGED | https://review.openstack.org/575212 | master | > | | | | | | | > | | | VERIFIED: 13 | MERGED: 13 | | | > +-----------------------------------------------------+--------------------------------+--------------+------------+-------------------------------------+---------------+ > > They do not include the monasca-events-api, monasca-specs, > monasca-persister, monasca-tempest-plugin, monasca-thresh, monasca-ui, > monasca-ceilometer, monasca-transform, monasca-analytics, > monasca-grafana-datasource, and monasca-kibana-plugin repositories. > > It also looks like they don’t include some necessary changes for > some branches in some of the other repos, although I haven’t checked > if those branches actually exist so maybe they’re fine. > > We also need a patch to project-config to remove the settings for > all of the monasca team’s repositories. > > I can generate the missing patches, but doing that now is likely > to introduce some bad patches into the repositories that have had > some work done, so you’ll need to review everything carefully. > > In all, it looks like we’re missing around 80+ patches, although > some of the ones I have generated locally may be bogus because of > the existing changes.
> > I realize Witold is OOO for a while, so I'm emailing the list to > ask the team how you want to proceed. Should I go ahead and propose > the patches I have? Thanks Doug, we had a discussion and we agreed that the best way to proceed is for you to submit your patches and we will carefully review them. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From thierry at openstack.org Thu Aug 23 09:00:51 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 23 Aug 2018 11:00:51 +0200 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <775949fc-a058-a076-06a5-c42bb8d016ec@gmail.com> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local> <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> <1534945106-sup-4359@lrrr.local> <775949fc-a058-a076-06a5-c42bb8d016ec@gmail.com> Message-ID: <68248e7c-14f6-d6f2-d87f-8fceb1eed7d6@openstack.org> melanie witt wrote: > [...] > I have been trying to explain why over several replies to this thread. > Fracturing a group is not something anyone does to foster cooperation > and shared priorities and goals. > [...] I would argue that the group is already fractured, otherwise we would not even be having this discussion. In the OpenStack governance model, contributors to a given piece of code control its destiny. 
We have two safety valves: disagreements between contributors on a specific piece of code are escalated at the PTL level, and disagreements between teams handling different pieces of code that need to interoperate are escalated at the TC level. In reality, in OpenStack history most disagreements were discussed and solved directly between contributors or teams, since nobody likes to appeal to the safety valves. That model implies at the base that contributors to a given piece of code are in control: project team boundaries need to be aligned on those discrete groups. We dropped the concept of "Programs" a while ago specifically to avoid creating subgroups ruled by larger groups, or artificial domains of ownership. The key issue here is that there is a distinct subgroup within the group. It should be its own team, but it's not. You are saying that keeping the subgroup governed inside the larger group ensures that features that operators and users need get delivered to them. But having a group retaining control over other groups is not how we ensure that in OpenStack -- it's by using the model above. Are you saying that you don't think the OpenStack governance model, where each team talks to its peers in terms of requirements and conflicts between teams may be escalated to the TC if they ever arise, will ultimately ensure that features that operators and users need get delivered to them? That keeping placement inside Nova governance will yield better results?
-- Thierry Carrez (ttx) From glongwave at gmail.com Thu Aug 23 09:04:04 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Thu, 23 Aug 2018 17:04:04 +0800 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: References: Message-ID: +1 for oslotest Jay Pipes wrote on Thu, Aug 23, 2018 at 11:24 AM: > > On Wed, Aug 22, 2018, 10:13 AM Eric Fried wrote: > >> For some time, nova has been using uuidsentinel [1] which conveniently >> allows you to get a random UUID in a single LOC with a readable name >> that's the same every time you reference it within that process (but not >> across processes). Example usage: [2]. >> >> We would like other projects (notably the soon-to-be-split-out placement >> project) to be able to use uuidsentinel without duplicating the code. So >> we would like to stuff it in an oslo lib. >> >> The question is whether it should live in oslotest [3] or in >> oslo_utils.uuidutils [4]. The proposed patches are (almost) the same. >> The issues we've thought of so far: >> >> - If this thing is used only for test, oslotest makes sense. We haven't >> thought of a non-test use, but somebody surely will. >> - Conversely, if we put it in oslo_utils, we're kinda saying we support >> it for non-test too. (This is why the oslo_utils version does some extra >> work for thread safety and collision avoidance.) >> - In oslotest, awkwardness is necessary to avoid circular importing: >> uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In >> oslo_utils.uuidutils, everything is right there. >> > > My preference is to put it in oslotest. Why does oslo_utils.uuidutils > import oslotest? That makes zero sense to me... > > -jay > > - It's a... UUID util. If I didn't know anything and I was looking for a >> UUID util like uuidsentinel, I would look in a module called uuidutils >> first. >> >> We hereby solicit your opinions, either by further discussion here or as >> votes on the respective patches.
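[For anyone not familiar with the helper being discussed, the pattern is small enough to sketch. This is an illustrative reimplementation of the idea only, not nova's actual uuidsentinel module:]

```python
import uuid


class UUIDSentinels(object):
    """Attribute access returns a random UUID that is stable for the
    lifetime of the process: uuids.foo is always the same value, and
    uuids.foo differs from uuids.bar."""

    def __init__(self):
        self._sentinels = {}

    def __getattr__(self, name):
        if name.startswith('_'):
            raise AttributeError(name)
        if name not in self._sentinels:
            self._sentinels[name] = str(uuid.uuid4())
        return self._sentinels[name]


uuids = UUIDSentinels()

print(uuids.volume1 == uuids.volume1)  # True: stable within the process
print(uuids.volume1 == uuids.volume2)  # False: distinct per name
```

[The oslo_utils candidate patch additionally worries about thread safety and collision avoidance, which this sketch deliberately omits.]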
>> Thanks, >> efried >> >> [1] >> >> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py >> [2] >> >> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115 >> [3] https://review.openstack.org/594068 >> [4] https://review.openstack.org/594179 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Thu Aug 23 09:28:48 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 23 Aug 2018 11:28:48 +0200 Subject: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: <20180822135747.GA27570@sm-workstation> References: <20180822094620.kncry4ufbe6fwi5u@localhost> <20180822135747.GA27570@sm-workstation> Message-ID: <20180823092848.ceztdl4dxr4d332d@localhost> On 22/08, Sean McGinnis wrote: > > > > The solution is conceptually simple. We add a new API microversion in > > Cinder that adds an optional parameter called "generic_keep_source" > > (defaults to False) to both migrate and retype operations.
> > > > This means that if the driver optimized migration cannot do the > > migration and the generic migration code is the one doing the migration, > > then, instead of our final step being to swap the volume id's and > > deleting the source volume, what we would do is to swap the volume id's > > and move all the snapshots to reference the new volume. Then we would > > create a user message with the new ID of the volume. > > > > How would you propose to "move all the snapshots to reference the new volume"? > Most storage does not allow a snapshot to be moved from one volume to another. > really the only way a migration of a snapshot can work across all storage types > would be to incrementally copy the data from a source to a destination up to > the point of the oldest snapshot, create a new snapshot on the new volume, then > proceed through until all snapshots have been rebuilt on the new volume. > Hi Sean, Sorry, I phrased that wrong. When I say move the snapshots to the new volume I mean to the "New Volume DB entry", which is now pointing to the old volume. So we wouldn't really be moving the snapshots, we would just be leaving the old volume with its snapshots under a new UUID, and the old UUID that the user had attached to Nova will be referencing the new volume. Again, sorry for the confusion. Cheers, Gorka. 
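To make the bookkeeping in the clarification above concrete, here is a purely illustrative sketch of the final swap step, with plain dicts standing in for DB rows (all names are invented; this is not cinder code):

```python
def swap_volume_ids(source, dest, snapshots):
    """After a generic migration, give the migrated (dest) volume the
    user-facing UUID, and let the source volume, with every snapshot
    still referencing it, live on under the dest's old UUID."""
    old_id, new_id = source['id'], dest['id']
    source['id'], dest['id'] = new_id, old_id
    for snap in snapshots:
        # Snapshots stay with the physical source volume, which is now
        # addressed by what used to be the destination's UUID.
        if snap['volume_id'] == old_id:
            snap['volume_id'] = new_id
    return source, dest


src = {'id': 'vol-1111', 'backend': 'old-backend'}
dst = {'id': 'vol-2222', 'backend': 'new-backend'}
snaps = [{'id': 'snap-a', 'volume_id': 'vol-1111'}]
swap_volume_ids(src, dst, snaps)
print(dst['id'], src['id'], snaps[0]['volume_id'])  # vol-1111 vol-2222 vol-2222
```

The user keeps their volume ID (now pointing at the migrated, snapshot-less data), while the old volume plus its snapshots survive under a new ID, reported back via a user message as described above.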
From geguileo at redhat.com Thu Aug 23 09:31:43 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 23 Aug 2018 11:31:43 +0200 Subject: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: References: <20180822094620.kncry4ufbe6fwi5u@localhost> Message-ID: <20180823093143.7yyyplyglfee35wt@localhost> On 22/08, Matthew Booth wrote: > On Wed, 22 Aug 2018 at 10:47, Gorka Eguileor wrote: > > > > On 20/08, Matthew Booth wrote: > > > For those who aren't familiar with it, nova's volume-update (also > > > called swap volume by nova devs) is the nova part of the > > > implementation of cinder's live migration (also called retype). > > > Volume-update is essentially an internal cinder<->nova api, but as > > > that's not a thing it's also unfortunately exposed to users. Some > > > users have found it and are using it, but because it's essentially an > > > internal cinder<->nova api it breaks pretty easily if you don't treat > > > it like a special snowflake. It looks like we've finally found a way > > > it's broken for non-cinder callers that we can't fix, even with a > > > dirty hack. > > > > > > volume-update essentially does a live copy of the > > > data on volume <old> to volume <new>, then seamlessly swaps the > > > attachment to <instance> from <old> to <new>. The guest OS on <instance> > > > will not notice anything at all as the hypervisor swaps the storage > > > backing an attached volume underneath it. > > > > > > When called by cinder, as intended, cinder does some post-operation > > > cleanup such that <old> is deleted and <new> inherits the same > > > volume_id; that is, <new> effectively becomes <old>. When called any > > > other way, however, this cleanup doesn't happen, which breaks a bunch > > > of assumptions. One of these is that a disk's serial number is the > > > same as the attached volume_id. Disk serial number, in KVM at least, > > > is immutable, so can't be updated during volume-update.
This is fine > > > if we were called via cinder, because the cinder cleanup means the > > > volume_id stays the same. If called any other way, however, they no > > > longer match, at least until a hard reboot when it will be reset to > > > the new volume_id. It turns out this breaks live migration, but > > > probably other things too. We can't think of a workaround. > > > > > > I wondered why users would want to do this anyway. It turns out that > > > sometimes cinder won't let you migrate a volume, but nova > > > volume-update doesn't do those checks (as they're specific to cinder > > > internals, none of nova's business, and duplicating them would be > > > fragile, so we're not adding them!). Specifically we know that cinder > > > won't let you migrate a volume with snapshots. There may be other > > > reasons. If cinder won't let you migrate your volume, you can still > > > move your data by using nova's volume-update, even though you'll end > > > up with a new volume on the destination, and a slightly broken > > > instance. Apparently the former is a trade-off worth making, but the > > > latter has been reported as a bug. > > > > > > > Hi Matt, > > > > As you know, I'm in favor of making this REST API call only authorized > > for Cinder to avoid messing up the cloud. > > > > I know you wanted Cinder to have a solution to do live migrations of > > volumes with snapshots, and while this is not possible to do in a > > reasonable fashion, I kept thinking about it given your strong feelings > > to provide a solution for users that really need this, and I think we > > may have a "reasonable" compromise. > > > > The solution is conceptually simple. We add a new API microversion in > > Cinder that adds an optional parameter called "generic_keep_source" > > (defaults to False) to both migrate and retype operations.
> > This means that if the driver optimized migration cannot do the > > migration and the generic migration code is the one doing the migration, > > then, instead of our final step being to swap the volume id's and > > deleting the source volume, what we would do is to swap the volume id's > > and move all the snapshots to reference the new volume. Then we would > > create a user message with the new ID of the volume. > > > > This way we can preserve the old volume with all its snapshots and do > > the live migration. > > > > The implementation is a little bit tricky, as we'll have to add a new > > "update_migrated_volume" mechanism to support the renaming of both > > volumes, since the old one wouldn't work with this among other things, > > but it's doable. > > > > Unfortunately I don't have the time right now to work on this... > > Sounds promising, and honestly more than I'd have hoped for. > > Matt > Hi Matt, Reading Sean's reply I notice that I phrased that wrong. The volume on the new storage backend wouldn't have any snapshots. The result of the operation would be a new volume with the old ID and no snapshots (this would be the one in use by Nova), and the old volume with all the snapshots having a new ID on the DB. Due to Cinder's mechanism to create this new volume we wouldn't be returning it on the REST API call, but as a user message instead. Sorry for the confusion. Cheers, Gorka. From geguileo at redhat.com Thu Aug 23 10:42:10 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 23 Aug 2018 12:42:10 +0200 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: Message-ID: <20180823104210.kgctxfjiq47uru34@localhost> On 22/08, Matt Riedemann wrote: > Hi everyone, > > I have started an etherpad for cells topics at the Stein PTG [1]. The main > issue in there right now is dealing with cross-cell cold migration in nova.
> > At a high level, I am going off these requirements: > > * Cells can shard across flavors (and hardware type) so operators would like > to move users off the old flavors/hardware (old cell) to new flavors in a > new cell. > > * There is network isolation between compute hosts in different cells, so no > ssh'ing the disk around like we do today. But the image service is global to > all cells. > > Based on this, for the initial support for cross-cell cold migration, I am > proposing that we leverage something like shelve offload/unshelve > masquerading as resize. We shelve offload from the source cell and unshelve > in the target cell. This should work for both volume-backed and > non-volume-backed servers (we use snapshots for shelved offloaded > non-volume-backed servers). > > There are, of course, some complications. The main ones that I need help > with right now are what happens with volumes and ports attached to the > server. Today we detach from the source and attach at the target, but that's > assuming the storage backend and network are available to both hosts > involved in the move of the server. Will that be the case across cells? I am > assuming that depends on the network topology (are routed networks being > used?) and storage backend (routed storage?). If the network and/or storage > backend are not available across cells, how do we migrate volumes and ports? > Cinder has a volume migrate API for admins but I do not know how nova would > know the proper affinity per-cell to migrate the volume to the proper host > (cinder does not have a routed storage concept like routed provider networks > in neutron, correct?). And as far as I know, there is no such thing as port > migration in Neutron. Hi Matt, I think Nova should never have to rely on Cinder's hosts/backends information to do migrations or any other operation. In this case even if Nova had that info, it wouldn't be the solution. 
Cinder would reject migrations if there's an incompatibility on the Volume Type (AZ, referenced backend, capabilities...). I don't know anything about Nova cells, so I don't know the specifics of how we could do the mapping between them and Cinder backends, but considering the limited range of possibilities in Cinder I would say we only have Volume Types and AZs to work out a solution. > > Could Placement help with the volume/port migration stuff? Neutron routed > provider networks rely on placement aggregates to schedule the VM to a > compute host in the same network segment as the port used to create the VM, > however, if that segment does not span cells we are kind of stuck, correct? > I don't know how the Nova Placement works, but it could hold an equivalency mapping of volume types to cells, as in:

    Cell#1        Cell#2
    VolTypeA <--> VolTypeD
    VolTypeB <--> VolTypeE
    VolTypeC <--> VolTypeF

Then it could do volume retypes (allowing migration) and that would properly move the volumes from one backend to another. Cheers, Gorka. > To summarize the issues as I see them (today): > > * How to deal with the targeted cell during scheduling? This is so we can > even get out of the source cell in nova. > > * How does the API deal with the same instance being in two DBs at the same > time during the move? > > * How to handle revert resize? > > * How are volumes and ports handled? > > I can get feedback from my company's operators based on what their > deployment will look like for this, but that does not mean it will work for > others, so I need as much feedback from operators, especially those running > with multiple cells today, as possible. Thanks in advance.
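The volume-type equivalency mapping suggested above could be held as a simple per-cell-pair lookup table. Everything below (names, structure) is hypothetical, just to illustrate the idea:

```python
# Hypothetical operator-supplied config: which volume type in the
# destination cell corresponds to each type in the source cell.
VOLTYPE_EQUIV = {
    ('cell1', 'cell2'): {
        'VolTypeA': 'VolTypeD',
        'VolTypeB': 'VolTypeE',
        'VolTypeC': 'VolTypeF',
    },
}


def target_volume_type(src_cell, dst_cell, src_type):
    """Look up the volume type to retype a volume to when moving it
    between cells; fail loudly if no equivalency is configured."""
    try:
        return VOLTYPE_EQUIV[(src_cell, dst_cell)][src_type]
    except KeyError:
        raise LookupError('no volume-type mapping for %s from %s to %s'
                          % (src_type, src_cell, dst_cell))


print(target_volume_type('cell1', 'cell2', 'VolTypeA'))  # VolTypeD
```

With such a table, the cross-cell move could drive a normal Cinder retype (with migration allowed) instead of needing any knowledge of Cinder's backends.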
> > [1] https://etherpad.openstack.org/p/nova-ptg-stein-cells > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From davanum at gmail.com Thu Aug 23 10:46:38 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Thu, 23 Aug 2018 06:46:38 -0400 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: References: Message-ID: Where exactly Eric? I can't seem to find the import: http://codesearch.openstack.org/?q=(from%7Cimport).*oslotest&i=nope&files=&repos=oslo.utils -- dims On Wed, Aug 22, 2018 at 11:24 PM Jay Pipes wrote: > > On Wed, Aug 22, 2018, 10:13 AM Eric Fried wrote: > >> For some time, nova has been using uuidsentinel [1] which conveniently >> allows you to get a random UUID in a single LOC with a readable name >> that's the same every time you reference it within that process (but not >> across processes). Example usage: [2]. >> >> We would like other projects (notably the soon-to-be-split-out placement >> project) to be able to use uuidsentinel without duplicating the code. So >> we would like to stuff it in an oslo lib. >> >> The question is whether it should live in oslotest [3] or in >> oslo_utils.uuidutils [4]. The proposed patches are (almost) the same. >> The issues we've thought of so far: >> >> - If this thing is used only for test, oslotest makes sense. We haven't >> thought of a non-test use, but somebody surely will. >> - Conversely, if we put it in oslo_utils, we're kinda saying we support >> it for non-test too. (This is why the oslo_utils version does some extra >> work for thread safety and collision avoidance.) >> - In oslotest, awkwardness is necessary to avoid circular importing: >> uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. 
In >> oslo_utils.uuidutils, everything is right there. >> > > My preference is to put it in oslotest. Why does oslo_utils.uuidutils > import oslotest? That makes zero sense to me... > > -jay > > - It's a... UUID util. If I didn't know anything and I was looking for a >> UUID util like uuidsentinel, I would look in a module called uuidutils >> first. >> >> We hereby solicit your opinions, either by further discussion here or as >> votes on the respective patches. >> >> Thanks, >> efried >> >> [1] >> >> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py >> [2] >> >> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115 >> [3] https://review.openstack.org/594068 >> [4] https://review.openstack.org/594179 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From jankihc91 at gmail.com Thu Aug 23 11:02:16 2018 From: jankihc91 at gmail.com (Janki Chhatbar) Date: Thu, 23 Aug 2018 16:32:16 +0530 Subject: [openstack-dev] [neutron][ovs][TripleO] Enabling IPv6 address for tunnel endpoints Message-ID: Hi I understand that currently tunnel endpoints are supported to be on IPv4 address only. 
I have a requirement for them to be on IPv6 endpoints as well with OpenDaylight, so the deployment will have IPv6 addresses for the tenant network. I know OVS now supports IPv6 endpoints. I want to know if there are any gaps from Neutron, and whether it is safe to enable tenant endpoints on an IPv6 address in TripleO.

--
Thanking you
Janki Chhatbar
OpenStack | Docker | SDN
simplyexplainedblog.wordpress.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From moshele at mellanox.com Thu Aug 23 11:34:27 2018
From: moshele at mellanox.com (Moshe Levi)
Date: Thu, 23 Aug 2018 11:34:27 +0000
Subject: [openstack-dev] [tripleo][nova] nova rx_queue_size tx_queue_size config options breaks booting vm with SR-IOV
Message-ID: 

Hi all,

A recent change in tripleo [1] configures the nova rx_queue_size and tx_queue_size config options by default. It seems that this config option breaks booting a vm with SR-IOV. See [2].

The issue is because of this code [3], which configures the virtio queue size if the driver in the interface xml is vhost or None. In the case of SR-IOV the driver is also None, and that's why we get the error. A quick fix would be adding driver=vfio to [4].

I just wonder if there are other interfaces in the libvirt xml which can have the same issue.

[1] - https://github.com/openstack/tripleo-heat-templates/commit/444fc042dca3f9a85e8f7076ce68114ac45478c7#diff-99a22d37b829681d157f41d35c38e4c5
[2] - http://paste.openstack.org/show/728666/
[3] - https://review.openstack.org/#/c/595592/
[4] - https://github.com/openstack/nova/blob/34956bea4beb8e5ba474b42ba777eb88a5eadd76/nova/virt/libvirt/designer.py#L123

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doug at doughellmann.com Thu Aug 23 12:06:02 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 23 Aug 2018 08:06:02 -0400
Subject: [openstack-dev] [oslo] UUID sentinel needs a home
In-Reply-To: 
References: 
Message-ID: <1535025580-sup-8617@lrrr.local>

Excerpts from Davanum Srinivas (dims)'s message of 2018-08-23 06:46:38 -0400:
> Where exactly Eric? I can't seem to find the import:
>
> http://codesearch.openstack.org/?q=(from%7Cimport).*oslotest&i=nope&files=&repos=oslo.utils
>
> -- dims

oslo.utils depends on oslotest via test-requirements.txt and oslotest is used within the test modules in oslo.utils.

As I've said on both reviews, I think we do not want a global singleton instance of this sentinel class. We do want a formal test fixture. Either library can export a test fixture and oslo.utils already has oslo_utils.fixture.TimeFixture so there's precedent to adding it there, so I have a slight preference for just doing that.

That said, oslo_utils.uuidutils.generate_uuid() is simply returning str(uuid.uuid4()). We have it wrapped up as a function so we can mock it out in other tests, but we hardly need to rely on that if we're making a test fixture for oslotest.

My vote is to add a new fixture class to oslo_utils.fixture.

Doug

> > On Wed, Aug 22, 2018 at 11:24 PM Jay Pipes wrote:
> >
> > On Wed, Aug 22, 2018, 10:13 AM Eric Fried wrote:
> >
> >> For some time, nova has been using uuidsentinel [1] which conveniently
> >> allows you to get a random UUID in a single LOC with a readable name
> >> that's the same every time you reference it within that process (but not
> >> across processes). Example usage: [2].
> >>
> >> We would like other projects (notably the soon-to-be-split-out placement
> >> project) to be able to use uuidsentinel without duplicating the code. So
> >> we would like to stuff it in an oslo lib.
> >>
> >> The question is whether it should live in oslotest [3] or in
> >> oslo_utils.uuidutils [4].
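As a rough illustration of the uuidsentinel behavior being discussed in this thread (attribute access yields a random UUID string that stays stable for the same name within a process), a minimal sketch might look like this. It is only a sketch, not the actual nova.tests.uuidsentinel module nor the fixture being proposed for oslo:

```python
import uuid


class UUIDSentinels(object):
    """Sketch of the uuidsentinel idea: each attribute name maps to a
    random UUID string generated on first access and reused afterwards
    within the same process."""

    def __init__(self):
        self._sentinels = {}

    def __getattr__(self, name):
        # __getattr__ is only called when normal lookup fails, so the
        # underscore guard avoids recursing on internal attributes.
        if name.startswith('_'):
            raise AttributeError(name)
        return self._sentinels.setdefault(name, str(uuid.uuid4()))


uuids = UUIDSentinels()
```

With this, `uuids.instance1` returns the same readable-but-random UUID every time in one process, which is what makes it convenient for asserting identity across a test without hardcoding UUID literals.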
The proposed patches are (almost) the same. > >> The issues we've thought of so far: > >> > >> - If this thing is used only for test, oslotest makes sense. We haven't > >> thought of a non-test use, but somebody surely will. > >> - Conversely, if we put it in oslo_utils, we're kinda saying we support > >> it for non-test too. (This is why the oslo_utils version does some extra > >> work for thread safety and collision avoidance.) > >> - In oslotest, awkwardness is necessary to avoid circular importing: > >> uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In > >> oslo_utils.uuidutils, everything is right there. > >> > > > > My preference is to put it in oslotest. Why does oslo_utils.uuidutils > > import oslotest? That makes zero sense to me... > > > > -jay > > > > - It's a... UUID util. If I didn't know anything and I was looking for a > >> UUID util like uuidsentinel, I would look in a module called uuidutils > >> first. > >> > >> We hereby solicit your opinions, either by further discussion here or as > >> votes on the respective patches. 
> >> > >> Thanks, > >> efried > >> > >> [1] > >> > >> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py > >> [2] > >> > >> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115 > >> [3] https://review.openstack.org/594068 > >> [4] https://review.openstack.org/594179 > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From mriedemos at gmail.com Thu Aug 23 12:10:58 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 23 Aug 2018 07:10:58 -0500 Subject: [openstack-dev] [tripleo][nova] nova rx_queue_size tx_queue_size config options breaks booting vm with SR-IOV In-Reply-To: References: Message-ID: <5b7cbbee-668e-0175-773f-6570b1055cde@gmail.com> On 8/23/2018 6:34 AM, Moshe Levi wrote: > Recent change in tripleo [1] configure nova rx_queue_size tx_queue_size > config by default. > > It seem that this config option breaks booting vm with SR-IOV. See [2] > > The issues is because of this code [3] which configure virtio queue size > if the in the interface xml the driver is vhost or None. > > In case of SR-IOV the driver is also None and that why we get the error. > > A quick fix will be adding driver=vfio to [4] > > I just wonder if there are other interface in the libvirt xml which this > can have the same issue. 
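The condition Moshe describes above can be sketched as follows. The function names and the is_hostdev flag are illustrative assumptions for this note, not nova's actual designer.py API:

```python
# Sketch of the reported bug: the queue-size helper applied virtio
# rx/tx queue sizes whenever the interface driver was 'vhost' or None,
# but an SR-IOV (hostdev) interface also reports no driver name, so it
# was matched by mistake.

def wants_virtio_queue_sizes_buggy(driver_name):
    # Matches SR-IOV too, because its driver_name is also None.
    return driver_name in ('vhost', None)


def wants_virtio_queue_sizes_fixed(driver_name, is_hostdev):
    # Only virtio-backed interfaces support rx/tx queue sizes; skip
    # passthrough (hostdev/SR-IOV) interfaces even though their
    # driver_name is None.
    if is_hostdev:
        return False
    return driver_name in ('vhost', None)
```

The point of the sketch is that "driver is None" is not enough on its own to identify a virtio interface; something about the interface type itself has to be checked as well.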
> > [1] - > https://github.com/openstack/tripleo-heat-templates/commit/444fc042dca3f9a85e8f7076ce68114ac45478c7#diff-99a22d37b829681d157f41d35c38e4c5 > > > [2] - http://paste.openstack.org/show/728666/ > > [3] - https://review.openstack.org/#/c/595592/ > > [4] - > https://github.com/openstack/nova/blob/34956bea4beb8e5ba474b42ba777eb88a5eadd76/nova/virt/libvirt/designer.py#L123 > > Quick note, your [3] and [4] references are reversed. Nice find on this, it's a regression in Rocky. As such, please report a bug so we can track it as an RC3 potential issue. Note that RC3 is *today*. -- Thanks, Matt From doug at doughellmann.com Thu Aug 23 12:34:57 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 23 Aug 2018 08:34:57 -0400 Subject: [openstack-dev] [monasca][goal][python3] monasca's zuul migration is only partially complete In-Reply-To: References: <1534980736-sup-1102@lrrr.local> Message-ID: <1535027603-sup-4897@lrrr.local> Excerpts from Doug Szumski's message of 2018-08-23 09:53:35 +0100: > Thanks Doug, we had a discussion and we agreed that the best way to > proceed is for you to submit your patches and we will carefully review them. I proposed those patches this morning. With the aid of your exemplary repository naming conventions, you can find them all at: https://review.openstack.org/#/q/topic:python3-first+project:%255E.*monasca.*+is:open From jaypipes at gmail.com Thu Aug 23 12:40:21 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 23 Aug 2018 08:40:21 -0400 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: <1535025580-sup-8617@lrrr.local> References: <1535025580-sup-8617@lrrr.local> Message-ID: <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> On 08/23/2018 08:06 AM, Doug Hellmann wrote: > Excerpts from Davanum Srinivas (dims)'s message of 2018-08-23 06:46:38 -0400: >> Where exactly Eric? 
I can't seem to find the import:
>>
>> http://codesearch.openstack.org/?q=(from%7Cimport).*oslotest&i=nope&files=&repos=oslo.utils
>>
>> -- dims

> oslo.utils depends on oslotest via test-requirements.txt and oslotest is
> used within the test modules in oslo.utils.
>
> As I've said on both reviews, I think we do not want a global
> singleton instance of this sentinel class. We do want a formal test
> fixture. Either library can export a test fixture and oslo.utils
> already has oslo_utils.fixture.TimeFixture so there's precedent to
> adding it there, so I have a slight preference for just doing that.
>
> That said, oslo_utils.uuidutils.generate_uuid() is simply returning
> str(uuid.uuid4()). We have it wrapped up as a function so we can
> mock it out in other tests, but we hardly need to rely on that if
> we're making a test fixture for oslotest.
>
> My vote is to add a new fixture class to oslo_utils.fixture.

OK, thanks for the helpful explanation, Doug. Works for me.

-jay

From tobias.urdin at binero.se Thu Aug 23 12:46:43 2018
From: tobias.urdin at binero.se (Tobias Urdin)
Date: Thu, 23 Aug 2018 14:46:43 +0200
Subject: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes
In-Reply-To: 
References: <282a7bf1-ae3e-335a-e1a1-69996276f731@binero.se>
Message-ID: <22ea4089-d830-ce0e-fd0a-502d16075bda@binero.se>

Thanks for all of your help everyone,

I've been busy with other things but was able to pick up where I left off regarding Magnum. After fixing some issues I have been able to provision a working Kubernetes cluster.

I'm still having issues with getting Docker Swarm working. I've tried with both Docker and flannel as the networking layer, but none of these works. After investigating, the issue seems to be that etcd.service is not installed (unit file doesn't exist) so the master doesn't work; the minion swarm node is provisioned but cannot join the cluster because there is no etcd.

Anybody seen this issue before?
I've been digging through all cloud-init logs and cannot see anything that would cause this.

I also have another separate issue: when provisioning using the magnum-ui in Horizon and selecting Ubuntu with Mesos I get the error "The Parameter (nodes_affinity_policy) was not provided". The nodes_affinity_policy does have a default value in magnum.conf, so I'm starting to think this might be an issue with the magnum-ui dashboard?

Best regards
Tobias

On 08/04/2018 06:24 PM, Joe Topjian wrote:
> We recently deployed Magnum and I've been making my way through
> getting both Swarm and Kubernetes running. I also ran into some
> initial issues. These notes may or may not help, but thought I'd share
> them in case:
>
> * We're using Barbican for SSL. I have not tried with the internal
> x509keypair.
>
> * I was only able to get things running with Fedora Atomic 27,
> specifically the version used in the Magnum docs:
> https://docs.openstack.org/magnum/latest/install/launch-instance.html
>
> Anything beyond that wouldn't even boot in my cloud. I haven't dug
> into this.
>
> * Kubernetes requires a Cluster Template to have a label of
> cert_manager_api=true set in order for the cluster to fully come up
> (at least, it didn't work for me until I set this).
>
> As far as troubleshooting methods go, check the cloud-init logs on the
> individual instances to see if any of the "parts" have failed to run.
> Manually re-run the parts on the command-line to get a better idea of
> why they failed. Review the actual script, figure out the variable
> interpolation and how it relates to the Cluster Template being used.
>
> Eventually I was able to get clusters running with the stock
> driver/templates, but wanted to tune them in order to better fit in
> our cloud, so I've "forked" them. This is in no way a slight against
> the existing drivers/templates nor do I recommend doing this until you
> reach a point where the stock drivers won't meet your needs.
But I > mention it because it's possible to do and it's not terribly hard. > This is still a work-in-progress and a bit hacky: > > https://github.com/cybera/magnum-templates > > Hope that helps, > Joe > > On Fri, Aug 3, 2018 at 6:46 AM, Tobias Urdin > wrote: > > Hello, > > I'm testing around with Magnum and have so far only had issues. > I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora > Atomic 28) and Kubernetes (on Fedora Atomic 27) and haven't been > able to get it working. > > Running Queens, is there any information about supported images? > Is Magnum maintained to support Fedora Atomic still? > What is in charge of population the certificates inside the > instances, because this seems to be the root of all issues, I'm > not using Barbican but the x509keypair driver > is that the reason? > > Perhaps I missed some documentation that x509keypair does not > support what I'm trying to do? > > I've seen the following issues: > > Docker: > * Master does not start and listen on TCP because of certificate > issues > dockerd-current[1909]: Could not load X509 key pair (cert: > "/etc/docker/server.crt", key: "/etc/docker/server.key") > > * Node does not start with: > Dependency failed for Docker Application Container Engine. > docker.service: Job docker.service/start failed with result > 'dependency'. > > Kubernetes: > * Master etcd does not start because /run/etcd does not exist > ** When that is created it fails to start because of certificate > 2018-08-03 12:41:16.554257 C | etcdmain: open > /etc/etcd/certs/server.crt: no such file or directory > > * Master kube-apiserver does not start because of certificate > unable to load server certificate: open > /etc/kubernetes/certs/server.crt: no such file or directory > > * Master heat script just sleeps forever waiting for port 8080 to > become available (kube-apiserver) so it can never kubectl apply > the final steps. 
> > * Node does not even start and times out when Heat deploys it, > probably because master never finishes > > Any help is appreciated perhaps I've missed something crucial, > I've not tested Kubernetes on CoreOS yet. > > Best regards > Tobias > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Aug 23 13:12:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 23 Aug 2018 08:12:01 -0500 Subject: [openstack-dev] [tripleo][nova] nova rx_queue_size tx_queue_size config options breaks booting vm with SR-IOV In-Reply-To: <5b7cbbee-668e-0175-773f-6570b1055cde@gmail.com> References: <5b7cbbee-668e-0175-773f-6570b1055cde@gmail.com> Message-ID: <5268e70e-5347-4f64-9355-64b721fae280@gmail.com> On 8/23/2018 7:10 AM, Matt Riedemann wrote: > On 8/23/2018 6:34 AM, Moshe Levi wrote: >> Recent change in tripleo [1] configure nova rx_queue_size >> tx_queue_size config by default. >> >> It seem that this config option breaks booting vm with SR-IOV. See [2] >> >> The issues is because of this code [3] which configure virtio queue >> size if the in the interface xml the driver is vhost or None. >> >> In case of SR-IOV the driver is also None and that why we get the error. >> >> A quick fix will be adding driver=vfio to [4] >> >> I just wonder if there are other interface in the libvirt xml which >> this can have the same issue. 
>> [1] -
>> https://github.com/openstack/tripleo-heat-templates/commit/444fc042dca3f9a85e8f7076ce68114ac45478c7#diff-99a22d37b829681d157f41d35c38e4c5
>>
>> [2] - http://paste.openstack.org/show/728666/
>>
>> [3] - https://review.openstack.org/#/c/595592/
>>
>> [4] -
>> https://github.com/openstack/nova/blob/34956bea4beb8e5ba474b42ba777eb88a5eadd76/nova/virt/libvirt/designer.py#L123
>>

> Quick note, your [3] and [4] references are reversed.
>
> Nice find on this, it's a regression in Rocky. As such, please report a
> bug so we can track it as an RC3 potential issue. Note that RC3 is *today*.

Moshe had to leave for the day. The IRC conversation about this bug was confusing at best, and it sounds like we don't know what the correct solution is to make the rx/tx queues work with vnic_type direct interfaces. Given that, I would like to know:

* What do we know actually does work with rx/tx queue sizes? Is it just macvtap ports? Is that what the feature was tested with?

* If we have a known good tested vnic_type with the rx/tx queue config options in Rocky, let's put out a known limitations release note and update the help text for those config options to mention that only known types of interfaces work with them. Then people can work on fixing the configs to work with other types of vnics in Stein when there is actually time to test the changes other than unit tests.

--
Thanks,
Matt

From doug at doughellmann.com Thu Aug 23 13:19:18 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 23 Aug 2018 09:19:18 -0400
Subject: [openstack-dev] [tc] Technical Committee status for 23 August
Message-ID: <1535030276-sup-6993@lrrr.local>

This is the weekly summary of work being done by the Technical Committee members.
The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923

== Recent Activity ==

Project updates:

- The RefStack team was dissolved, and the repositories transferred to the interop working group.
- Added the qinling-dashboard repository to the Qinling project: https://review.openstack.org/#/c/591559/
- The rst2bash repository has been retired: https://review.openstack.org/#/c/592293/
- Added the os-ken repository to the Neutron project: https://review.openstack.org/#/c/588358/

== PTG Planning ==

The TC is soon going to finalize the topics for presentations to be given around lunch time at the PTG. If you have suggestions, please add them to the etherpad.

- https://etherpad.openstack.org/p/PTG4-postlunch

There will be 2 TC meetings during the PTG week. See http://lists.openstack.org/pipermail/openstack-tc/2018-August/001544.html for details.

== Leaderless teams after PTL elections ==

We approved all of the volunteers as appointed PTLs and rejected the proposals to drop Freezer and Searchlight from governance. Thank you to all of the folks who have stepped up to serve as PTL for Stein! We also formalized the process for appointing PTLs to avoid the confusion we had this time around.

- https://review.openstack.org/590790

== Ongoing Discussions ==

The draft technical vision has gathered a good bit of feedback. This will be a major topic of discussion for us before and during the PTG.

- https://review.openstack.org/#/c/592205/

We have spent a lot of time this week thinking about and discussing the nova/placement split. As things stand, it seems the nova team considers it too early to spin placement out of the team's purview, but it is likely that it will be moved to its own repository during Stein.
This leaves some of us concerned about issues like contributors' self-determination and trust between teams within the community, so I expect more discussion to occur before a conclusion is reached. - http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-20.log.html#t2018-08-20T15:27:57 - http://lists.openstack.org/pipermail/openstack-dev/2018-August/133445.html == TC member actions/focus/discussions for the coming week(s) == The PTG is approaching quickly. Please complete any remaining team health checks. == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: - 09:00 UTC on Tuesdays - 01:00 UTC on Wednesdays - 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. You will find channel logs with past conversations at http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. 
From dms at danplanet.com Thu Aug 23 13:29:23 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 23 Aug 2018 06:29:23 -0700 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <20180823104210.kgctxfjiq47uru34@localhost> (Gorka Eguileor's message of "Thu, 23 Aug 2018 12:42:10 +0200") References: <20180823104210.kgctxfjiq47uru34@localhost> Message-ID: > I think Nova should never have to rely on Cinder's hosts/backends > information to do migrations or any other operation. > > In this case even if Nova had that info, it wouldn't be the solution. > Cinder would reject migrations if there's an incompatibility on the > Volume Type (AZ, Referenced backend, capabilities...) I think I'm missing a bunch of cinder knowledge required to fully grok this situation and probably need to do some reading. Is there some reason that a volume type can't exist in multiple backends or something? I guess I think of volume type as flavor, and the same definition in two places would be interchangeable -- is that not the case? > I don't know anything about Nova cells, so I don't know the specifics of > how we could do the mapping between them and Cinder backends, but > considering the limited range of possibilities in Cinder I would say we > only have Volume Types and AZs to work a solution. I think the only mapping we need is affinity or distance. The point of needing to migrate the volume would purely be because moving cells likely means you moved physically farther away from where you were, potentially with different storage connections and networking. It doesn't *have* to mean that, but I think in reality it would. So the question I think Matt is looking to answer here is "how do we move an instance from a DC in building A to building C and make sure the volume gets moved to some storage local in the new building so we're not just transiting back to the original home for no reason?" 
Does that explanation help or are you saying that's fundamentally hard to do/orchestrate? Fundamentally, the cells thing doesn't even need to be part of the discussion, as the same rules would apply if we're just doing a normal migration but need to make sure that storage remains affined to compute. > I don't know how the Nova Placement works, but it could hold an > equivalency mapping of volume types to cells as in: > > Cell#1 Cell#2 > > VolTypeA <--> VolTypeD > VolTypeB <--> VolTypeE > VolTypeC <--> VolTypeF > > Then it could do volume retypes (allowing migration) and that would > properly move the volumes from one backend to another. The only way I can think that we could do this in placement would be if volume types were resource providers and we assigned them traits that had special meaning to nova indicating equivalence. Several of the words in that sentence are likely to freak out placement people, myself included :) So is the concern just that we need to know what volume types in one backend map to those in another so that when we do the migration we know what to ask for? Is "they are the same name" not enough? Going back to the flavor analogy, you could kinda compare two flavor definitions and have a good idea if they're equivalent or not... --Dan From tobias.urdin at binero.se Thu Aug 23 13:48:31 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Thu, 23 Aug 2018 15:48:31 +0200 Subject: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes In-Reply-To: <22ea4089-d830-ce0e-fd0a-502d16075bda@binero.se> References: <282a7bf1-ae3e-335a-e1a1-69996276f731@binero.se> <22ea4089-d830-ce0e-fd0a-502d16075bda@binero.se> Message-ID: <7af84e20-2b32-e341-e584-dc2fd9ea0943@binero.se> Found the issue, I assume I have to use Fedora Atomic 26 until Rocky where I can start using Fedora Atomic 27. Will Fedora Atomia 28 be supported for Rocky? 
https://bugs.launchpad.net/magnum/+bug/1735381 (Run etcd and flanneld in system containers, In Fedora Atomic 27 etcd and flanneld are removed from the base image.) https://review.openstack.org/#/c/524116/ (Run etcd and flanneld in a system container) Still wondering about the "The Parameter (nodes_affinity_policy) was not provided" when using Mesos + Ubuntu? Best regards Tobias On 08/23/2018 02:56 PM, Tobias Urdin wrote: > Thanks for all of your help everyone, > > I've been busy with other thing but was able to pick up where I left > regarding Magnum. > After fixing some issues I have been able to provision a working > Kubernetes cluster. > > I'm still having issues with getting Docker Swarm working, I've tried > with both Docker and flannel as the networking layer but > none of these works. After investigating the issue seems to be that > etcd.service is not installed (unit file doesn't exist) so the master > doesn't work, the minion swarm node is provisioned but cannot join the > cluster because there is no etcd. > > Anybody seen this issue before? I've been digging through all > cloud-init logs and cannot see anything that would cause this. > > I also have another separate issue, when provisioning using the > magnum-ui in Horizon and selecting ubuntu with Mesos I get the error > "The Parameter (nodes_affinity_policy) was not provided". The > nodes_affinity_policy do have a default value in magnum.conf so I'm > starting > to think this might be an issue with the magnum-ui dashboard? > > Best regards > Tobias > > On 08/04/2018 06:24 PM, Joe Topjian wrote: >> We recently deployed Magnum and I've been making my way through >> getting both Swarm and Kubernetes running. I also ran into some >> initial issues. These notes may or may not help, but thought I'd >> share them in case: >> >> * We're using Barbican for SSL. I have not tried with the internal >> x509keypair. 
>> >> * I was only able to get things running with Fedora Atomic 27, >> specifically the version used in the Magnum docs: >> https://docs.openstack.org/magnum/latest/install/launch-instance.html >> >> Anything beyond that wouldn't even boot in my cloud. I haven't dug >> into this. >> >> * Kubernetes requires a Cluster Template to have a label of >> cert_manager_api=true set in order for the cluster to fully come up >> (at least, it didn't work for me until I set this). >> >> As far as troubleshooting methods go, check the cloud-init logs on >> the individual instances to see if any of the "parts" have failed to >> run. Manually re-run the parts on the command-line to get a better >> idea of why they failed. Review the actual script, figure out the >> variable interpolation and how it relates to the Cluster Template >> being used. >> >> Eventually I was able to get clusters running with the stock >> driver/templates, but wanted to tune them in order to better fit in >> our cloud, so I've "forked" them. This is in no way a slight against >> the existing drivers/templates nor do I recommend doing this until >> you reach a point where the stock drivers won't meet your needs. But >> I mention it because it's possible to do and it's not terribly hard. >> This is still a work-in-progress and a bit hacky: >> >> https://github.com/cybera/magnum-templates >> >> Hope that helps, >> Joe >> >> On Fri, Aug 3, 2018 at 6:46 AM, Tobias Urdin > > wrote: >> >> Hello, >> >> I'm testing around with Magnum and have so far only had issues. >> I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora >> Atomic 28) and Kubernetes (on Fedora Atomic 27) and haven't been >> able to get it working. >> >> Running Queens, is there any information about supported images? >> Is Magnum maintained to support Fedora Atomic still? 
>> What is in charge of population the certificates inside the >> instances, because this seems to be the root of all issues, I'm >> not using Barbican but the x509keypair driver >> is that the reason? >> >> Perhaps I missed some documentation that x509keypair does not >> support what I'm trying to do? >> >> I've seen the following issues: >> >> Docker: >> * Master does not start and listen on TCP because of certificate >> issues >> dockerd-current[1909]: Could not load X509 key pair (cert: >> "/etc/docker/server.crt", key: "/etc/docker/server.key") >> >> * Node does not start with: >> Dependency failed for Docker Application Container Engine. >> docker.service: Job docker.service/start failed with result >> 'dependency'. >> >> Kubernetes: >> * Master etcd does not start because /run/etcd does not exist >> ** When that is created it fails to start because of certificate >> 2018-08-03 12:41:16.554257 C | etcdmain: open >> /etc/etcd/certs/server.crt: no such file or directory >> >> * Master kube-apiserver does not start because of certificate >> unable to load server certificate: open >> /etc/kubernetes/certs/server.crt: no such file or directory >> >> * Master heat script just sleeps forever waiting for port 8080 to >> become available (kube-apiserver) so it can never kubectl apply >> the final steps. >> >> * Node does not even start and times out when Heat deploys it, >> probably because master never finishes >> >> Any help is appreciated perhaps I've missed something crucial, >> I've not tested Kubernetes on CoreOS yet. >> >> Best regards >> Tobias >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From no-reply at openstack.org Thu Aug 23 14:02:04 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 23 Aug 2018 14:02:04 -0000 Subject: [openstack-dev] neutron 13.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for neutron for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/neutron/log/?h=stable/rocky Release notes for neutron can be found at: https://docs.openstack.org/releasenotes/neutron/ From nguyentrihai93 at gmail.com Thu Aug 23 14:07:10 2018 From: nguyentrihai93 at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gVHLDrSBI4bqjaQ==?=) Date: Thu, 23 Aug 2018 23:07:10 +0900 Subject: [openstack-dev] [goals][python3] please check with me before submitting any zuul migration patches In-Reply-To: References: <1534875835-sup-7809@lrrr.local> Message-ID: Hi, There is a conflict appearing on the karbor projects: https://review.openstack.org/#/q/project:%255E.*karbor.*+topic:python3-first+status:open Please check the storyboard to see who is working on the target project if you want to help. https://storyboard.openstack.org/#!/story/2002586 On Wed, Aug 22, 2018 at 2:42 PM Nguyễn Trí Hải wrote: > Please add yourself to the storyboard so everyone knows who is working on the > project. > > https://storyboard.openstack.org/#!/story/2002586 > > On Wed, Aug 22, 2018 at 3:31 AM Doug Hellmann > wrote: > >> We have a few folks eager to join in and contribute to the python3 goal >> by helping with the patches to migrate zuul settings. That's great!
>> However, many of the patches being proposed are incorrect, which means >> there is either something wrong with the tool or the way it is used. >> >> The intent was to have a very small group, 3-4 people, who knew how >> the tools worked to propose all of those patches. Having incorrect >> patches can break the CI for a project, so we need to be especially >> careful with them. We do not want every team writing the patches >> for themselves, and we do not want lots and lots of people who we >> have to train to use the tools. >> >> If you are not one of the people already listed as a goal champion >> on [1], please PLEASE stop writing patches and get in touch with >> me personally and directly (via IRC or email) BEFORE doing any more >> work on the goal. >> >> Thanks, >> Doug >> >> [1] https://governance.openstack.org/tc/goals/stein/python3-first.html >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > > Nguyen Tri Hai / Ph.D. Student > > ANDA Lab., Soongsil Univ., Seoul, South Korea > -- Nguyen Tri Hai / Ph.D. Student ANDA Lab., Soongsil Univ., Seoul, South Korea -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Thu Aug 23 14:10:13 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Thu, 23 Aug 2018 10:10:13 -0400 Subject: [openstack-dev] [glance][horizon] Issues we found when using Community Images In-Reply-To: References: Message-ID: Hi Andy, Can you comment more on what needs to be updated in Sahara?
Are they simply issues in the UI (sahara-dashboard) or is there a problem consuming community images on the server side? I'm happy to pitch in, just would like to do it efficiently. Thanks, Jeremy On Wed, Aug 22, 2018 at 5:31 PM, Andy Botting wrote: > Hi all, > > We've recently moved to using Glance's community visibility on the Nectar > Research Cloud. We had lots of public images (12255), and we found it was > becoming slow to list them all and the community image visibility seems to > fit our use-case nicely. > > We moved all of our user's images over to become community images, and left > our 'official' images as the only public ones. > > We found a few issues, which I wanted to document, if anyone else is looking > at doing the same thing. > > -> Glance API has no way of returning all images available to me in a single > API request (https://bugs.launchpad.net/glance/+bug/1779251) > The default list of images is perfect (all available to me, except > community), but there's a heap of cases where you need to fetch all images > including community. If we did have this, my next points would be a whole > lot easier to solve. > > -> Horizon's support for Community images is very lacking > (https://bugs.launchpad.net/horizon/+bug/1779250) > On the surface, it looks like Community images are supported in Horizon, but > it's only as far as listing images in the Images tab. Trying to boot a > Community image from the Launch Instance wizard is actually impossible, as > community images don't appear in that list at all. The images tab in Horizon > dynamically builds the list of images on the Images tab through new Glance > API calls when you use any filters (good). > In contrast, the source tab on the Launch Images wizard loads all images at > the start (slow with lots of images), then relies on javascript client-side > filtering of the list. 
I've got a dirty patch to fix this for us by > basically making two Glance API requests (one without specifying visibility, > and another with visibility=community), then merging the data. This would be > better handled the same way as the Images tab, with new Glance API requests > when filtering. > > -> Users can't set their own images as Community from the dashboard > Should be relatively easy to add this. I'm hoping to look into fixing this > soon. > > -> Murano / Sahara image discovery > These projects rely on images to be chosen when creating new environments, > and it looks like they use a glance list for their discovery. They both > suffer from the same issue and require their images to be non-community for > them to find their images. > > -> Openstack Client didn't support listing community images at all > (https://storyboard.openstack.org/#!/story/2001925) > It did support setting images to community, but support for actually listing > them was missing. Support has now been added, but not sure if it's made it > to a release yet. > > Apart from these issues, our migration was pretty successful with minimal > user complaints. 
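The two-request workaround Andy describes can be sketched as follows. This is a minimal illustration, not the actual patch: `list_all_images` and the fake client below are hypothetical stand-ins for a python-glanceclient-style `images.list()` call that yields image records with an `id` field.

```python
def list_all_images(list_images):
    """Merge a default listing with a community-visibility listing.

    list_images is any callable that accepts Glance v2 list filters and
    returns an iterable of image records (dicts with at least an 'id').
    This mirrors the workaround: one request without a visibility filter,
    one with visibility=community, deduplicated by image id.
    """
    seen = {}
    for image in list_images():
        seen[image['id']] = image
    for image in list_images(visibility='community'):
        seen.setdefault(image['id'], image)
    return list(seen.values())

# A fake "client" standing in for the real Glance API call:
def fake_list(visibility=None):
    public = [{'id': 'pub-1', 'visibility': 'public'}]
    community = [{'id': 'com-1', 'visibility': 'community'}]
    return community if visibility == 'community' else public

merged = list_all_images(fake_list)
```

As Andy notes, a single server-side request returning everything visible to the caller (the missing Glance feature in bug 1779251) would make this merging unnecessary.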
> > cheers, > Andy > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at stackhpc.com Thu Aug 23 14:37:29 2018 From: doug at stackhpc.com (Doug Szumski) Date: Thu, 23 Aug 2018 15:37:29 +0100 Subject: [openstack-dev] [monasca][goal][python3] monasca's zuul migration is only partially complete In-Reply-To: <1535027603-sup-4897@lrrr.local> References: <1534980736-sup-1102@lrrr.local> <1535027603-sup-4897@lrrr.local> Message-ID: On 23/08/18 13:34, Doug Hellmann wrote: > Excerpts from Doug Szumski's message of 2018-08-23 09:53:35 +0100: > >> Thanks Doug, we had a discussion and we agreed that the best way to >> proceed is for you to submit your patches and we will carefully review them. > I proposed those patches this morning. With the aid of your exemplary > repository naming conventions, you can find them all at: > > https://review.openstack.org/#/q/topic:python3-first+project:%255E.*monasca.*+is:open Thanks, we will start going through them. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dprince at redhat.com Thu Aug 23 14:42:29 2018 From: dprince at redhat.com (Dan Prince) Date: Thu, 23 Aug 2018 10:42:29 -0400 Subject: [openstack-dev] [tripleo] ansible roles in tripleo In-Reply-To: <1534269113.6400.11.camel@redhat.com> References: <1534269113.6400.11.camel@redhat.com> Message-ID: On Tue, Aug 14, 2018 at 1:53 PM Jill Rouleau wrote: > > Hey folks, > > Like Alex mentioned[0] earlier, we've created a bunch of ansible roles > for tripleo specific bits. 
The idea is to start putting some basic > cookiecutter type things in them to get things started, then move some > low-hanging fruit out of tripleo-heat-templates and into the appropriate > roles. For example, docker/services/keystone.yaml could have > upgrade_tasks and fast_forward_upgrade_tasks moved into ansible-role- > tripleo-keystone/tasks/(upgrade.yml|fast_forward_upgrade.yml), and the > t-h-t updated to > include_role: ansible-role-tripleo-keystone > tasks_from: upgrade.yml > without having to modify any puppet or heat directives. > > This would let us define some patterns for implementing these tripleo > roles during Stein while looking at how we can make use of ansible for > things like core config. I like the idea of consolidating the Ansible stuff and getting out of the practice of inlining it into t-h-t. Especially the "core config" which I take to mean moving away from Puppet and towards Ansible for service level configuration. But presumably we are going to rely on the upstream OpenStack ansible-os_* projects to do the heavy config lifting for us here though right? We won't have to do much on our side to leverage that, I hope, other than translating old hiera to equivalent settings for the config files to ensure some backwards compatibility. While I agree with the goals I do wonder if the sheer number of git repos we've created here is needed. Like with puppet-tripleo we were able to combine a set of "small lightweight" manifests in a way to wrap them around the upstream Puppet modules. Why not do the same with ansible-role-tripleo? My concern is that we've created so many cookie cutter repos with boilerplate code in them that ends up being much heavier than the files which will actually reside in many of these repos. This is in addition to the extra review work and RPM packages we need to constantly maintain.
Dan > > t-h-t and config-download will still drive the vast majority of playbook > creation for now, but for new playbooks (such as for operations tasks) > tripleo-ansible[1] would be our project directory. > > So in addition to the larger conversation about how deployers can start > to standardize how we're all using ansible, I'd like to also have a > tripleo-specific conversation at PTG on how we can break out some of our > ansible that's currently embedded in t-h-t into more modular and > flexible roles. > > Cheers, > Jill > > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-August/13311 > 9.html > [1] https://git.openstack.org/cgit/openstack/tripleo-ansible/tree/__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tobias.urdin at binero.se Thu Aug 23 14:46:07 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Thu, 23 Aug 2018 16:46:07 +0200 Subject: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes In-Reply-To: <7af84e20-2b32-e341-e584-dc2fd9ea0943@binero.se> References: <282a7bf1-ae3e-335a-e1a1-69996276f731@binero.se> <22ea4089-d830-ce0e-fd0a-502d16075bda@binero.se> <7af84e20-2b32-e341-e584-dc2fd9ea0943@binero.se> Message-ID: <9b79c6b9-5670-061c-bafc-973651fd8bc3@binero.se> Now with Fedora 26 I have etcd available but etcd fails. [root at swarm-u2rnie4d4ik6-master-0 ~]# /usr/bin/etcd --name="${ETCD_NAME}" --data-dir="${ETCD_DATA_DIR}" --listen-client-urls="${ETCD_LISTEN_CLIENT_URLS}" --debug 2018-08-23 14:34:15.596516 E | etcdmain: error verifying flags, --advertise-client-urls is required when --listen-client-urls is set explicitly. See 'etcd --help'. 
2018-08-23 14:34:15.596611 E | etcdmain: When listening on specific address(es), this etcd process must advertise accessible url(s) to each connected client. There is an issue where the --advertise-client-urls flag and the TLS --cert-file and --key-file flags are not passed in the systemd unit file. Changing this to: /usr/bin/etcd --name="${ETCD_NAME}" --data-dir="${ETCD_DATA_DIR}" --listen-client-urls="${ETCD_LISTEN_CLIENT_URLS}" --advertise-client-urls="${ETCD_ADVERTISE_CLIENT_URLS}" --cert-file="${ETCD_PEER_CERT_FILE}" --key-file="${ETCD_PEER_KEY_FILE}" makes it work. Any thoughts? Best regards Tobias On 08/23/2018 03:54 PM, Tobias Urdin wrote: > Found the issue, I assume I have to use Fedora 26 until Rocky > where I can start using Fedora Atomic 27. > Will Fedora Atomic 28 be supported for Rocky? > > https://bugs.launchpad.net/magnum/+bug/1735381 (Run etcd and flanneld > in system containers, In Fedora Atomic 27 etcd and flanneld are > removed from the base image.) > https://review.openstack.org/#/c/524116/ (Run etcd and flanneld in a > system container) > > Still wondering about the "The Parameter (nodes_affinity_policy) was > not provided" when using Mesos + Ubuntu? > > Best regards > Tobias > > On 08/23/2018 02:56 PM, Tobias Urdin wrote: >> Thanks for all of your help everyone, >> >> I've been busy with other things but was able to pick up where I left >> regarding Magnum. >> After fixing some issues I have been able to provision a working >> Kubernetes cluster. >> >> I'm still having issues with getting Docker Swarm working, I've tried >> with both Docker and flannel as the networking layer but >> neither of these works. After investigating, the issue seems to be that >> etcd.service is not installed (unit file doesn't exist) so the master >> doesn't work, the minion swarm node is provisioned but cannot join >> the cluster because there is no etcd. >> >> Anybody seen this issue before?
I've been digging through all >> cloud-init logs and cannot see anything that would cause this. >> >> I also have another separate issue, when provisioning using the >> magnum-ui in Horizon and selecting ubuntu with Mesos I get the error >> "The Parameter (nodes_affinity_policy) was not provided". The >> nodes_affinity_policy do have a default value in magnum.conf so I'm >> starting >> to think this might be an issue with the magnum-ui dashboard? >> >> Best regards >> Tobias >> >> On 08/04/2018 06:24 PM, Joe Topjian wrote: >>> We recently deployed Magnum and I've been making my way through >>> getting both Swarm and Kubernetes running. I also ran into some >>> initial issues. These notes may or may not help, but thought I'd >>> share them in case: >>> >>> * We're using Barbican for SSL. I have not tried with the internal >>> x509keypair. >>> >>> * I was only able to get things running with Fedora Atomic 27, >>> specifically the version used in the Magnum docs: >>> https://docs.openstack.org/magnum/latest/install/launch-instance.html >>> >>> Anything beyond that wouldn't even boot in my cloud. I haven't dug >>> into this. >>> >>> * Kubernetes requires a Cluster Template to have a label of >>> cert_manager_api=true set in order for the cluster to fully come up >>> (at least, it didn't work for me until I set this). >>> >>> As far as troubleshooting methods go, check the cloud-init logs on >>> the individual instances to see if any of the "parts" have failed to >>> run. Manually re-run the parts on the command-line to get a better >>> idea of why they failed. Review the actual script, figure out the >>> variable interpolation and how it relates to the Cluster Template >>> being used. >>> >>> Eventually I was able to get clusters running with the stock >>> driver/templates, but wanted to tune them in order to better fit in >>> our cloud, so I've "forked" them. 
This is in no way a slight against >>> the existing drivers/templates nor do I recommend doing this until >>> you reach a point where the stock drivers won't meet your needs. But >>> I mention it because it's possible to do and it's not terribly hard. >>> This is still a work-in-progress and a bit hacky: >>> >>> https://github.com/cybera/magnum-templates >>> >>> Hope that helps, >>> Joe >>> >>> On Fri, Aug 3, 2018 at 6:46 AM, Tobias Urdin >> > wrote: >>> >>> Hello, >>> >>> I'm testing around with Magnum and have so far only had issues. >>> I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora >>> Atomic 28) and Kubernetes (on Fedora Atomic 27) and haven't been >>> able to get it working. >>> >>> Running Queens, is there any information about supported images? >>> Is Magnum maintained to support Fedora Atomic still? >>> What is in charge of population the certificates inside the >>> instances, because this seems to be the root of all issues, I'm >>> not using Barbican but the x509keypair driver >>> is that the reason? >>> >>> Perhaps I missed some documentation that x509keypair does not >>> support what I'm trying to do? >>> >>> I've seen the following issues: >>> >>> Docker: >>> * Master does not start and listen on TCP because of certificate >>> issues >>> dockerd-current[1909]: Could not load X509 key pair (cert: >>> "/etc/docker/server.crt", key: "/etc/docker/server.key") >>> >>> * Node does not start with: >>> Dependency failed for Docker Application Container Engine. >>> docker.service: Job docker.service/start failed with result >>> 'dependency'. 
>>> >>> Kubernetes: >>> * Master etcd does not start because /run/etcd does not exist >>> ** When that is created it fails to start because of certificate >>> 2018-08-03 12:41:16.554257 C | etcdmain: open >>> /etc/etcd/certs/server.crt: no such file or directory >>> >>> * Master kube-apiserver does not start because of certificate >>> unable to load server certificate: open >>> /etc/kubernetes/certs/server.crt: no such file or directory >>> >>> * Master heat script just sleeps forever waiting for port 8080 >>> to become available (kube-apiserver) so it can never kubectl >>> apply the final steps. >>> >>> * Node does not even start and times out when Heat deploys it, >>> probably because master never finishes >>> >>> Any help is appreciated perhaps I've missed something crucial, >>> I've not tested Kubernetes on CoreOS yet. >>> >>> Best regards >>> Tobias >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Thu Aug 23 14:50:13 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 23 Aug 2018 09:50:13 -0500 Subject: [openstack-dev] Bumping eventlet to 0.24.1 Message-ID: <20180823145013.vzt46kgd7d7lkmkj@gentoo.org> This is your warning, if you have concerns please comment in https://review.openstack.org/589382 . cross tests pass, so that's a good sign... atm this is only for stein. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From dprince at redhat.com Thu Aug 23 14:50:35 2018 From: dprince at redhat.com (Dan Prince) Date: Thu, 23 Aug 2018 10:50:35 -0400 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> Message-ID: On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes wrote: > > On 08/15/2018 04:01 PM, Emilien Macchi wrote: > > On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi > > wrote: > > > > More seriously here: there is an ongoing effort to converge the > > tools around containerization within Red Hat, and we, TripleO are > > interested to continue the containerization of our services (which > > was initially done with Docker & Docker-Distribution). > > We're looking at how these containers could be managed by k8s one > > day but way before that we plan to swap out Docker and join CRI-O > > efforts, which seem to be using Podman + Buildah (among other things). > > > > I guess my wording wasn't the best but Alex explained way better here: > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 > > > > If I may have a chance to rephrase, I guess our current intention is to > > continue our containerization and investigate how we can improve our > > tooling to better orchestrate the containers. > > We have a nice interface (openstack/paunch) that allows us to run > > multiple container backends, and we're currently looking outside of > > Docker to see how we could solve our current challenges with the new tools. > > We're looking at CRI-O because it happens to be a project with a great > > community, focusing on some problems that we, TripleO have been facing > > since we containerized our services. 
> > > > We're doing all of this in the open, so feel free to ask any question. > > I appreciate your response, Emilien, thank you. Alex' responses to > Jeremy on the #openstack-tc channel were informative, thank you Alex. > > For now, it *seems* to me that all of the chosen tooling is very Red Hat > centric. Which makes sense to me, considering Triple-O is a Red Hat product. Perhaps a slight clarification here is needed. "Director" is a Red Hat product. TripleO is an upstream project that is now largely driven by Red Hat and is today marked as single vendor. We welcome others to contribute to the project upstream just like anybody else. And for those who don't know the history the TripleO project was once multi-vendor as well. So a lot of the abstractions we have in place could easily be extended to support distro specific implementation details. (Kind of what I view podman as in the scope of this thread). > > I don't know how much of the current reinvention of container runtimes > and various tooling around containers is the result of politics. I don't > know how much is the result of certain companies wanting to "own" the > container stack from top to bottom. Or how much is a result of technical > disagreements that simply cannot (or will not) be resolved among > contributors in the container development ecosystem. > > Or is it some combination of the above? I don't know. > > What I *do* know is that the current "NIH du jour" mentality currently > playing itself out in the container ecosystem -- reminding me very much > of the Javascript ecosystem -- makes it difficult for any potential > *consumers* of container libraries, runtimes or applications to be > confident that any choice they make towards one of the other will be the > *right* choice or even a *possible* choice next year -- or next week. > Perhaps this is why things like openstack/paunch exist -- to give you > options if something doesn't pan out. This is exactly why paunch exists. 
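Dan's point about paunch as an abstraction layer can be illustrated with a toy dispatcher. This is not paunch's real API, only a sketch of the idea that callers stay unchanged while the runtime CLI ('docker' vs 'podman') is swapped underneath:

```python
class ContainerBackend:
    """Toy stand-in for a container runtime wrapper (not paunch's real API)."""

    def __init__(self, cli):
        self.cli = cli  # e.g. 'docker' or 'podman'

    def run(self, name, image):
        # A real backend would shell out; here we just build the command line.
        return [self.cli, 'run', '--name', name, image]


def apply_config(containers, backend):
    """Launch every container in a config dict via the chosen backend."""
    return [backend.run(name, image)
            for name, image in sorted(containers.items())]


# The caller's config never mentions a specific runtime:
config = {'keystone': 'tripleomaster/centos-binary-keystone'}
docker_cmds = apply_config(config, ContainerBackend('docker'))
podman_cmds = apply_config(config, ContainerBackend('podman'))
```

Only the backend object differs between the two calls; everything above it, which is where the deployment logic lives, is identical.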
Re, the podman thing I look at it as an implementation detail. The good news is that given it is almost a parity replacement for what we already use we'll still contribute to the OpenStack community in similar ways. Ultimately whether you run 'docker run' or 'podman run' you end up with the same thing as far as the existing TripleO architecture goes. Dan > > You have a tough job. I wish you all the luck in the world in making > these decisions and hope politics and internal corporate management > decisions play as little a role in them as possible. > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at fried.cc Thu Aug 23 14:51:21 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 23 Aug 2018 09:51:21 -0500 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> Message-ID: <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> Do you mean an actual fixture, that would be used like: class MyTestCase(testtools.TestCase): def setUp(self): self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids def test_foo(self): do_a_thing_with(self.uuids.foo) ? That's... okay I guess, but the refactoring necessary to cut over to it will now entail adding 'self.' to every reference. Is there any way around that? efried On 08/23/2018 07:40 AM, Jay Pipes wrote: > On 08/23/2018 08:06 AM, Doug Hellmann wrote: >> Excerpts from Davanum Srinivas (dims)'s message of 2018-08-23 06:46:38 >> -0400: >>> Where exactly Eric? 
I can't seem to find the import: >>> >>> http://codesearch.openstack.org/?q=(from%7Cimport).*oslotest&i=nope&files=&repos=oslo.utils >>> >>> >>> -- dims >> >> oslo.utils depends on oslotest via test-requirements.txt and oslotest is >> used within the test modules in oslo.utils. >> >> As I've said on both reviews, I think we do not want a global >> singleton instance of this sentinal class. We do want a formal test >> fixture.  Either library can export a test fixture and olso.utils >> already has oslo_utils.fixture.TimeFixture so there's precedent to >> adding it there, so I have a slight preference for just doing that. >> >> That said, oslo_utils.uuidutils.generate_uuid() is simply returning >> str(uuid.uuid4()). We have it wrapped up as a function so we can >> mock it out in other tests, but we hardly need to rely on that if >> we're making a test fixture for oslotest. >> >> My vote is to add a new fixture class to oslo_utils.fixture. > > OK, thanks for the helpful explanation, Doug. Works for me. > > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Thu Aug 23 15:22:43 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 23 Aug 2018 10:22:43 -0500 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: Message-ID: <20180823152242.GB23060@sm-workstation> On Wed, Aug 22, 2018 at 08:23:41PM -0500, Matt Riedemann wrote: > Hi everyone, > > I have started an etherpad for cells topics at the Stein PTG [1]. The main > issue in there right now is dealing with cross-cell cold migration in nova. 
> > At a high level, I am going off these requirements: > > * Cells can shard across flavors (and hardware type) so operators would like > to move users off the old flavors/hardware (old cell) to new flavors in a > new cell. > > * There is network isolation between compute hosts in different cells, so no > ssh'ing the disk around like we do today. But the image service is global to > all cells. > > Based on this, for the initial support for cross-cell cold migration, I am > proposing that we leverage something like shelve offload/unshelve > masquerading as resize. We shelve offload from the source cell and unshelve > in the target cell. This should work for both volume-backed and > non-volume-backed servers (we use snapshots for shelved offloaded > non-volume-backed servers). > > There are, of course, some complications. The main ones that I need help > with right now are what happens with volumes and ports attached to the > server. Today we detach from the source and attach at the target, but that's > assuming the storage backend and network are available to both hosts > involved in the move of the server. Will that be the case across cells? I am > assuming that depends on the network topology (are routed networks being > used?) and storage backend (routed storage?). If the network and/or storage > backend are not available across cells, how do we migrate volumes and ports? > Cinder has a volume migrate API for admins but I do not know how nova would > know the proper affinity per-cell to migrate the volume to the proper host > (cinder does not have a routed storage concept like routed provider networks > in neutron, correct?). And as far as I know, there is no such thing as port > migration in Neutron. > Just speaking to iSCSI storage, I know some deployments do not route their storage traffic. If this is the case, then both cells would need to have access to the same subnet to still access the volume. 
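Matt's proposed flow (shelve offload in the source cell, unshelve in the target cell, with the image service as the only globally shared piece) can be sketched with toy cell objects. All names here are hypothetical; this only illustrates the data flow, not nova's actual RPC plumbing or the volume/port complications discussed above:

```python
class FakeImageService:
    """Stands in for Glance, which is global to all cells."""

    def __init__(self):
        self._images = {}

    def snapshot(self, server):
        image_id = 'snap-%s' % server['id']
        self._images[image_id] = dict(server)
        return image_id

    def get(self, image_id):
        return self._images[image_id]


class FakeCell:
    """Stands in for a cell database of servers."""

    def __init__(self, name):
        self.name = name
        self.servers = {}

    def add(self, server):
        self.servers[server['id']] = server

    def remove(self, server_id):
        del self.servers[server_id]


def cross_cell_cold_migrate(server, source_cell, target_cell, glance):
    """Shelve offload from the source cell, unshelve in the target cell."""
    # Shelve offload: snapshot the server and free its source-cell resources.
    image_id = glance.snapshot(server)
    source_cell.remove(server['id'])
    # Unshelve: rebuild the server in the target cell from the snapshot,
    # which works because the image service spans both cells.
    data = glance.get(image_id)
    new_server = {'id': data['id'], 'cell': target_cell.name}
    target_cell.add(new_server)
    return new_server


glance = FakeImageService()
cell1, cell2 = FakeCell('cell1'), FakeCell('cell2')
cell1.add({'id': 'abc', 'cell': 'cell1'})
migrated = cross_cell_cold_migrate(cell1.servers['abc'], cell1, cell2, glance)
```

The hard parts raised in the thread (volumes, ports, storage routing) sit precisely in the step this sketch glosses over: re-attaching resources in the target cell.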
I'm also referring to the case where the migration is from one compute host to another compute host, and not from one storage backend to another storage backend. I haven't gone through the workflow, but I thought shelve/unshelve could detach the volume on shelving and reattach it on unshelve. In that workflow, assuming the networking is in place to provide the connectivity, the nova compute host would be connecting to the volume just like any other attach and should work fine. The unknown or tricky part is making sure that there is the network connectivity or routing in place for the compute host to be able to log in to the storage target. If it's the other scenario mentioned where the volume needs to be migrated from one storage backend to another storage backend, then that may require a little more work. The volume would need to be retype'd or migrated (storage migration) from the original backend to the new backend. Again, in this scenario at some point there needs to be network connectivity between cells to copy over that data. There is no storage-offloaded migration in this situation, so Cinder can't currently optimize how that data gets from the original volume backend to the new one. It would require a host copy of all the data on the volume (an often slow and expensive operation) and it would require that the host doing the data copy has access to both the original backend and then new backend. From jaypipes at gmail.com Thu Aug 23 15:36:29 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 23 Aug 2018 11:36:29 -0400 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> Message-ID: <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> Dan, thanks for the details and answers. Appreciated. 
Best, -jay On 08/23/2018 10:50 AM, Dan Prince wrote: > On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes wrote: >> >> On 08/15/2018 04:01 PM, Emilien Macchi wrote: >>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi >> > wrote: >>> >>> More seriously here: there is an ongoing effort to converge the >>> tools around containerization within Red Hat, and we, TripleO are >>> interested to continue the containerization of our services (which >>> was initially done with Docker & Docker-Distribution). >>> We're looking at how these containers could be managed by k8s one >>> day but way before that we plan to swap out Docker and join CRI-O >>> efforts, which seem to be using Podman + Buildah (among other things). >>> >>> I guess my wording wasn't the best but Alex explained way better here: >>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >>> >>> If I may have a chance to rephrase, I guess our current intention is to >>> continue our containerization and investigate how we can improve our >>> tooling to better orchestrate the containers. >>> We have a nice interface (openstack/paunch) that allows us to run >>> multiple container backends, and we're currently looking outside of >>> Docker to see how we could solve our current challenges with the new tools. >>> We're looking at CRI-O because it happens to be a project with a great >>> community, focusing on some problems that we, TripleO have been facing >>> since we containerized our services. >>> >>> We're doing all of this in the open, so feel free to ask any question. >> >> I appreciate your response, Emilien, thank you. Alex' responses to >> Jeremy on the #openstack-tc channel were informative, thank you Alex. >> >> For now, it *seems* to me that all of the chosen tooling is very Red Hat >> centric. Which makes sense to me, considering Triple-O is a Red Hat product. > > Perhaps a slight clarification here is needed. "Director" is a Red Hat > product. 
TripleO is an upstream project that is now largely driven by > Red Hat and is today marked as single vendor. We welcome others to > contribute to the project upstream just like anybody else. > > And for those who don't know the history the TripleO project was once > multi-vendor as well. So a lot of the abstractions we have in place > could easily be extended to support distro specific implementation > details. (Kind of what I view podman as in the scope of this thread). > >> >> I don't know how much of the current reinvention of container runtimes >> and various tooling around containers is the result of politics. I don't >> know how much is the result of certain companies wanting to "own" the >> container stack from top to bottom. Or how much is a result of technical >> disagreements that simply cannot (or will not) be resolved among >> contributors in the container development ecosystem. >> >> Or is it some combination of the above? I don't know. >> >> What I *do* know is that the current "NIH du jour" mentality currently >> playing itself out in the container ecosystem -- reminding me very much >> of the Javascript ecosystem -- makes it difficult for any potential >> *consumers* of container libraries, runtimes or applications to be >> confident that any choice they make towards one of the other will be the >> *right* choice or even a *possible* choice next year -- or next week. >> Perhaps this is why things like openstack/paunch exist -- to give you >> options if something doesn't pan out. > > This is exactly why paunch exists. > > Re, the podman thing I look at it as an implementation detail. The > good news is that given it is almost a parity replacement for what we > already use we'll still contribute to the OpenStack community in > similar ways. Ultimately whether you run 'docker run' or 'podman run' > you end up with the same thing as far as the existing TripleO > architecture goes. > > Dan > >> >> You have a tough job. 
I wish you all the luck in the world in making >> these decisions and hope politics and internal corporate management >> decisions play as little a role in them as possible. >> >> Best, >> -jay >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Thu Aug 23 15:49:01 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 23 Aug 2018 10:49:01 -0500 Subject: [openstack-dev] [barbican][oslo][release][requirements] FFE request for castellan In-Reply-To: <20180823063237.7bq3362aaefae4pa@gentoo.org> References: <1533914109.23178.37.camel@redhat.com> <20180814185634.GA26658@sm-workstation> <1534352313.5705.35.camel@redhat.com> <8f8add49-cb63-3452-cc7c-c812bfab0877@nemebean.com> <20180821191655.xw37baq4q6ikfqts@gentoo.org> <1534993596.21877.71.camel@redhat.com> <20180823063237.7bq3362aaefae4pa@gentoo.org> Message-ID: <20180823154901.GC23060@sm-workstation> > > > > > > I've approved it for a UC only bump > > > > > We are still waiting on https://review.openstack.org/594541 to merge, > but I already voted and noted that it was FFE approved. > > -- > Matthew Thode (prometheanfire) And I have now approved the u-c update. We should be all set now. 
From sean.mcginnis at gmx.com Thu Aug 23 16:12:29 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 23 Aug 2018 11:12:29 -0500 Subject: [openstack-dev] [release] Release countdown for week R-0, August 27 - 31 Message-ID: <20180823161228.GA27186@sm-workstation> This is the final countdown email for the Rocky development cycle. Thanks to everyone involved in the Rocky release! Development Focus ----------------- Teams attending the PTG should be preparing for those discussions and capturing information in the etherpads: https://wiki.openstack.org/wiki/PTG/Stein/Etherpads General Information ------------------- The release team plans on doing the final Rocky release on 29 August. We will re-tag the last commit used for the final RC using the final version number. If you have not already done so, now would be a good time to take a look at the Stein schedule and start planning team activities: https://releases.openstack.org/stein/schedule.html Actions --------- PTLs and release liaisons should watch for the final release patch from the release team. While not required, we would appreciate having an ack from each team before we approve it on the 29th. We are still missing releases for the following tempest plugins. Some are pending getting pypi and release jobs set up, but please try to prioritize getting these done as soon as possible. 
barbican-tempest-plugin blazar-tempest-plugin cloudkitty-tempest-plugin congress-tempest-plugin ec2api-tempest-plugin magnum-tempest-plugin mistral-tempest-plugin monasca-kibana-plugin monasca-tempest-plugin murano-tempest-plugin networking-generic-switch-tempest-plugin oswin-tempest-plugin senlin-tempest-plugin telemetry-tempest-plugin tripleo-common-tempest-plugin trove-tempest-plugin watcher-tempest-plugin zaqar-tempest-plugin Upcoming Deadlines & Dates -------------------------- Final RC deadline: August 23 Rocky Release: August 29 Cycle trailing RC deadline: August 30 Stein PTG: September 10-14 Cycle trailing Rocky release: November 28 -- Sean McGinnis (smcginnis) From Kevin.Fox at pnnl.gov Thu Aug 23 16:22:50 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 23 Aug 2018 16:22:50 +0000 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> , <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> Question. Rather than writing a middle layer to abstract both container engines, couldn't you just use CRI? CRI is CRI-O's native language, and there is support already for Docker as well. Thanks, Kevin ________________________________________ From: Jay Pipes [jaypipes at gmail.com] Sent: Thursday, August 23, 2018 8:36 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls Dan, thanks for the details and answers. Appreciated. 
Best, -jay On 08/23/2018 10:50 AM, Dan Prince wrote: > On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes wrote: >> >> On 08/15/2018 04:01 PM, Emilien Macchi wrote: >>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi >> > wrote: >>> >>> More seriously here: there is an ongoing effort to converge the >>> tools around containerization within Red Hat, and we, TripleO are >>> interested to continue the containerization of our services (which >>> was initially done with Docker & Docker-Distribution). >>> We're looking at how these containers could be managed by k8s one >>> day but way before that we plan to swap out Docker and join CRI-O >>> efforts, which seem to be using Podman + Buildah (among other things). >>> >>> I guess my wording wasn't the best but Alex explained way better here: >>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >>> >>> If I may have a chance to rephrase, I guess our current intention is to >>> continue our containerization and investigate how we can improve our >>> tooling to better orchestrate the containers. >>> We have a nice interface (openstack/paunch) that allows us to run >>> multiple container backends, and we're currently looking outside of >>> Docker to see how we could solve our current challenges with the new tools. >>> We're looking at CRI-O because it happens to be a project with a great >>> community, focusing on some problems that we, TripleO have been facing >>> since we containerized our services. >>> >>> We're doing all of this in the open, so feel free to ask any question. >> >> I appreciate your response, Emilien, thank you. Alex' responses to >> Jeremy on the #openstack-tc channel were informative, thank you Alex. >> >> For now, it *seems* to me that all of the chosen tooling is very Red Hat >> centric. Which makes sense to me, considering Triple-O is a Red Hat product. > > Perhaps a slight clarification here is needed. "Director" is a Red Hat > product. 
TripleO is an upstream project that is now largely driven by > Red Hat and is today marked as single vendor. We welcome others to > contribute to the project upstream just like anybody else. > > And for those who don't know the history the TripleO project was once > multi-vendor as well. So a lot of the abstractions we have in place > could easily be extended to support distro specific implementation > details. (Kind of what I view podman as in the scope of this thread). > >> >> I don't know how much of the current reinvention of container runtimes >> and various tooling around containers is the result of politics. I don't >> know how much is the result of certain companies wanting to "own" the >> container stack from top to bottom. Or how much is a result of technical >> disagreements that simply cannot (or will not) be resolved among >> contributors in the container development ecosystem. >> >> Or is it some combination of the above? I don't know. >> >> What I *do* know is that the current "NIH du jour" mentality currently >> playing itself out in the container ecosystem -- reminding me very much >> of the Javascript ecosystem -- makes it difficult for any potential >> *consumers* of container libraries, runtimes or applications to be >> confident that any choice they make towards one of the other will be the >> *right* choice or even a *possible* choice next year -- or next week. >> Perhaps this is why things like openstack/paunch exist -- to give you >> options if something doesn't pan out. > > This is exactly why paunch exists. > > Re, the podman thing I look at it as an implementation detail. The > good news is that given it is almost a parity replacement for what we > already use we'll still contribute to the OpenStack community in > similar ways. Ultimately whether you run 'docker run' or 'podman run' > you end up with the same thing as far as the existing TripleO > architecture goes. > > Dan > >> >> You have a tough job. 
I wish you all the luck in the world in making >> these decisions and hope politics and internal corporate management >> decisions play as little a role in them as possible. >> >> Best, >> -jay >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From bdobreli at redhat.com Thu Aug 23 16:30:24 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 23 Aug 2018 18:30:24 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> Message-ID: On 8/23/18 6:22 PM, Fox, Kevin M wrote: > Question. Rather then writing a middle layer to abstract both container engines, couldn't you just use CRI? CRI is CRI-O's native language, and there is support already for Docker as well. I may be messing up abstraction levels, but IMO when it's time to support CRI-O as well, paunch should handle that just like docker or podman. 
So nothing changes in the moving layers of tripleo components. It's nice that CRI-O also supports docker and other runtimes, but not sure we want something in tripleo moving parts to become neither docker nor podman nor CRI-O bound. > > Thanks, > Kevin > ________________________________________ > From: Jay Pipes [jaypipes at gmail.com] > Sent: Thursday, August 23, 2018 8:36 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls > > Dan, thanks for the details and answers. Appreciated. > > Best, > -jay > > On 08/23/2018 10:50 AM, Dan Prince wrote: >> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes wrote: >>> >>> On 08/15/2018 04:01 PM, Emilien Macchi wrote: >>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi >>> > wrote: >>>> >>>> More seriously here: there is an ongoing effort to converge the >>>> tools around containerization within Red Hat, and we, TripleO are >>>> interested to continue the containerization of our services (which >>>> was initially done with Docker & Docker-Distribution). >>>> We're looking at how these containers could be managed by k8s one >>>> day but way before that we plan to swap out Docker and join CRI-O >>>> efforts, which seem to be using Podman + Buildah (among other things). >>>> >>>> I guess my wording wasn't the best but Alex explained way better here: >>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >>>> >>>> If I may have a chance to rephrase, I guess our current intention is to >>>> continue our containerization and investigate how we can improve our >>>> tooling to better orchestrate the containers. >>>> We have a nice interface (openstack/paunch) that allows us to run >>>> multiple container backends, and we're currently looking outside of >>>> Docker to see how we could solve our current challenges with the new tools. 
>>>> We're looking at CRI-O because it happens to be a project with a great >>>> community, focusing on some problems that we, TripleO have been facing >>>> since we containerized our services. >>>> >>>> We're doing all of this in the open, so feel free to ask any question. >>> >>> I appreciate your response, Emilien, thank you. Alex' responses to >>> Jeremy on the #openstack-tc channel were informative, thank you Alex. >>> >>> For now, it *seems* to me that all of the chosen tooling is very Red Hat >>> centric. Which makes sense to me, considering Triple-O is a Red Hat product. >> >> Perhaps a slight clarification here is needed. "Director" is a Red Hat >> product. TripleO is an upstream project that is now largely driven by >> Red Hat and is today marked as single vendor. We welcome others to >> contribute to the project upstream just like anybody else. >> >> And for those who don't know the history the TripleO project was once >> multi-vendor as well. So a lot of the abstractions we have in place >> could easily be extended to support distro specific implementation >> details. (Kind of what I view podman as in the scope of this thread). >> >>> >>> I don't know how much of the current reinvention of container runtimes >>> and various tooling around containers is the result of politics. I don't >>> know how much is the result of certain companies wanting to "own" the >>> container stack from top to bottom. Or how much is a result of technical >>> disagreements that simply cannot (or will not) be resolved among >>> contributors in the container development ecosystem. >>> >>> Or is it some combination of the above? I don't know. 
>>> >>> What I *do* know is that the current "NIH du jour" mentality currently >>> playing itself out in the container ecosystem -- reminding me very much >>> of the Javascript ecosystem -- makes it difficult for any potential >>> *consumers* of container libraries, runtimes or applications to be >>> confident that any choice they make towards one of the other will be the >>> *right* choice or even a *possible* choice next year -- or next week. >>> Perhaps this is why things like openstack/paunch exist -- to give you >>> options if something doesn't pan out. >> >> This is exactly why paunch exists. >> >> Re, the podman thing I look at it as an implementation detail. The >> good news is that given it is almost a parity replacement for what we >> already use we'll still contribute to the OpenStack community in >> similar ways. Ultimately whether you run 'docker run' or 'podman run' >> you end up with the same thing as far as the existing TripleO >> architecture goes. >> >> Dan >> >>> >>> You have a tough job. I wish you all the luck in the world in making >>> these decisions and hope politics and internal corporate management >>> decisions play as little a role in them as possible. 
>>> >>> Best, >>> -jay >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From amotoki at gmail.com Thu Aug 23 16:35:41 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Fri, 24 Aug 2018 01:35:41 +0900 Subject: [openstack-dev] [release] Release countdown for week R-0, August 27 - 31 In-Reply-To: <20180823161228.GA27186@sm-workstation> References: <20180823161228.GA27186@sm-workstation> Message-ID: 2018年8月24日(金) 1:12 Sean McGinnis : > > This is the final countdown email for the Rocky development cycle. Thanks to > everyone involved in the Rocky release! 
> > Development Focus > ----------------- > > Teams attending the PTG should be preparing for those discussions and capturing > information in the etherpads: > > https://wiki.openstack.org/wiki/PTG/Stein/Etherpads > > General Information > ------------------- > > The release team plans on doing the final Rocky release on 29 August. We will > re-tag the last commit used for the final RC using the final version number. > > If you have not already done so, now would be a good time to take a look at the > Stein schedule and start planning team activities: > > https://releases.openstack.org/stein/schedule.html > > Actions > --------- > > PTLs and release liaisons should watch for the final release patch from the > release team. While not required, we would appreciate having an ack from each > team before we approve it on the 29th. > > We are still missing releases for the following tempest plugins. Some are > pending getting pypi and release jobs set up, but please try to prioritize > getting these done as soon as possible. > > barbican-tempest-plugin > blazar-tempest-plugin > cloudkitty-tempest-plugin > congress-tempest-plugin > ec2api-tempest-plugin > magnum-tempest-plugin > mistral-tempest-plugin > monasca-kibana-plugin > monasca-tempest-plugin > murano-tempest-plugin > networking-generic-switch-tempest-plugin > oswin-tempest-plugin > senlin-tempest-plugin > telemetry-tempest-plugin > tripleo-common-tempest-plugin > trove-tempest-plugin > watcher-tempest-plugin > zaqar-tempest-plugin tempest-horizon is missing from the list. horizon team needs to release tempest-horizon. It does not follow the naming convention so it seems to be missed from the list. 
Thanks, Akihiro Motoki (amotoki) > > Upcoming Deadlines & Dates > -------------------------- > > Final RC deadline: August 23 > Rocky Release: August 29 > Cycle trailing RC deadline: August 30 > Stein PTG: September 10-14 > Cycle trailing Rocky release: November 28 > > -- > Sean McGinnis (smcginnis) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Kevin.Fox at pnnl.gov Thu Aug 23 16:36:34 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 23 Aug 2018 16:36:34 +0000 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> , <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com>, <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> Message-ID: <1A3C52DFCD06494D8528644858247BF01C183A4A@EX10MBOX03.pnnl.gov> Or use kubelet in standalone mode. It can be configured for either Cri-o or Docker. You can drive the static manifests from heat/ansible per host as normal and it would be a step in the greater direction of getting to Kubernetes without needing the whole thing at once, if that is the goal. Thanks, Kevin ________________________________________ From: Fox, Kevin M [Kevin.Fox at pnnl.gov] Sent: Thursday, August 23, 2018 9:22 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls Question. Rather then writing a middle layer to abstract both container engines, couldn't you just use CRI? CRI is CRI-O's native language, and there is support already for Docker as well. 
Thanks, Kevin ________________________________________ From: Jay Pipes [jaypipes at gmail.com] Sent: Thursday, August 23, 2018 8:36 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls Dan, thanks for the details and answers. Appreciated. Best, -jay On 08/23/2018 10:50 AM, Dan Prince wrote: > On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes wrote: >> >> On 08/15/2018 04:01 PM, Emilien Macchi wrote: >>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi >> > wrote: >>> >>> More seriously here: there is an ongoing effort to converge the >>> tools around containerization within Red Hat, and we, TripleO are >>> interested to continue the containerization of our services (which >>> was initially done with Docker & Docker-Distribution). >>> We're looking at how these containers could be managed by k8s one >>> day but way before that we plan to swap out Docker and join CRI-O >>> efforts, which seem to be using Podman + Buildah (among other things). >>> >>> I guess my wording wasn't the best but Alex explained way better here: >>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >>> >>> If I may have a chance to rephrase, I guess our current intention is to >>> continue our containerization and investigate how we can improve our >>> tooling to better orchestrate the containers. >>> We have a nice interface (openstack/paunch) that allows us to run >>> multiple container backends, and we're currently looking outside of >>> Docker to see how we could solve our current challenges with the new tools. >>> We're looking at CRI-O because it happens to be a project with a great >>> community, focusing on some problems that we, TripleO have been facing >>> since we containerized our services. >>> >>> We're doing all of this in the open, so feel free to ask any question. >> >> I appreciate your response, Emilien, thank you. 
Alex' responses to >> Jeremy on the #openstack-tc channel were informative, thank you Alex. >> >> For now, it *seems* to me that all of the chosen tooling is very Red Hat >> centric. Which makes sense to me, considering Triple-O is a Red Hat product. > > Perhaps a slight clarification here is needed. "Director" is a Red Hat > product. TripleO is an upstream project that is now largely driven by > Red Hat and is today marked as single vendor. We welcome others to > contribute to the project upstream just like anybody else. > > And for those who don't know the history the TripleO project was once > multi-vendor as well. So a lot of the abstractions we have in place > could easily be extended to support distro specific implementation > details. (Kind of what I view podman as in the scope of this thread). > >> >> I don't know how much of the current reinvention of container runtimes >> and various tooling around containers is the result of politics. I don't >> know how much is the result of certain companies wanting to "own" the >> container stack from top to bottom. Or how much is a result of technical >> disagreements that simply cannot (or will not) be resolved among >> contributors in the container development ecosystem. >> >> Or is it some combination of the above? I don't know. >> >> What I *do* know is that the current "NIH du jour" mentality currently >> playing itself out in the container ecosystem -- reminding me very much >> of the Javascript ecosystem -- makes it difficult for any potential >> *consumers* of container libraries, runtimes or applications to be >> confident that any choice they make towards one of the other will be the >> *right* choice or even a *possible* choice next year -- or next week. >> Perhaps this is why things like openstack/paunch exist -- to give you >> options if something doesn't pan out. > > This is exactly why paunch exists. > > Re, the podman thing I look at it as an implementation detail. 
The > good news is that given it is almost a parity replacement for what we > already use we'll still contribute to the OpenStack community in > similar ways. Ultimately whether you run 'docker run' or 'podman run' > you end up with the same thing as far as the existing TripleO > architecture goes. > > Dan > >> >> You have a tough job. I wish you all the luck in the world in making >> these decisions and hope politics and internal corporate management >> decisions play as little a role in them as possible. >> >> Best, >> -jay >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From bdobreli at redhat.com Thu Aug 23 16:40:14 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 23 Aug 2018 18:40:14 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C183A4A@EX10MBOX03.pnnl.gov> References: 
<8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C183A4A@EX10MBOX03.pnnl.gov> Message-ID: <291ec409-4c32-038e-51a4-e87e4273d3b0@redhat.com> On 8/23/18 6:36 PM, Fox, Kevin M wrote: > Or use kubelet in standalone mode. It can be configured for either Cri-o or Docker. You can drive the static manifests from heat/ansible per host as normal and it would be a step in the greater direction of getting to Kubernetes without needing the whole thing at once, if that is the goal. I like the idea of adopting k8s components early and deprecating paunch! It's just that time has shown the plans for k8s integration in tripleo to be too distant for now, and we need a solution today... > > Thanks, > Kevin > ________________________________________ > From: Fox, Kevin M [Kevin.Fox at pnnl.gov] > Sent: Thursday, August 23, 2018 9:22 AM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls > > Question. Rather then writing a middle layer to abstract both container engines, couldn't you just use CRI? CRI is CRI-O's native language, and there is support already for Docker as well. > > Thanks, > Kevin > ________________________________________ > From: Jay Pipes [jaypipes at gmail.com] > Sent: Thursday, August 23, 2018 8:36 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls > > Dan, thanks for the details and answers. Appreciated. 
> > Best, > -jay > > On 08/23/2018 10:50 AM, Dan Prince wrote: >> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes wrote: >>> >>> On 08/15/2018 04:01 PM, Emilien Macchi wrote: >>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi >>> > wrote: >>>> >>>> More seriously here: there is an ongoing effort to converge the >>>> tools around containerization within Red Hat, and we, TripleO are >>>> interested to continue the containerization of our services (which >>>> was initially done with Docker & Docker-Distribution). >>>> We're looking at how these containers could be managed by k8s one >>>> day but way before that we plan to swap out Docker and join CRI-O >>>> efforts, which seem to be using Podman + Buildah (among other things). >>>> >>>> I guess my wording wasn't the best but Alex explained way better here: >>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >>>> >>>> If I may have a chance to rephrase, I guess our current intention is to >>>> continue our containerization and investigate how we can improve our >>>> tooling to better orchestrate the containers. >>>> We have a nice interface (openstack/paunch) that allows us to run >>>> multiple container backends, and we're currently looking outside of >>>> Docker to see how we could solve our current challenges with the new tools. >>>> We're looking at CRI-O because it happens to be a project with a great >>>> community, focusing on some problems that we, TripleO have been facing >>>> since we containerized our services. >>>> >>>> We're doing all of this in the open, so feel free to ask any question. >>> >>> I appreciate your response, Emilien, thank you. Alex' responses to >>> Jeremy on the #openstack-tc channel were informative, thank you Alex. >>> >>> For now, it *seems* to me that all of the chosen tooling is very Red Hat >>> centric. Which makes sense to me, considering Triple-O is a Red Hat product. >> >> Perhaps a slight clarification here is needed. 
"Director" is a Red Hat >> product. TripleO is an upstream project that is now largely driven by >> Red Hat and is today marked as single vendor. We welcome others to >> contribute to the project upstream just like anybody else. >> >> And for those who don't know the history the TripleO project was once >> multi-vendor as well. So a lot of the abstractions we have in place >> could easily be extended to support distro specific implementation >> details. (Kind of what I view podman as in the scope of this thread). >> >>> >>> I don't know how much of the current reinvention of container runtimes >>> and various tooling around containers is the result of politics. I don't >>> know how much is the result of certain companies wanting to "own" the >>> container stack from top to bottom. Or how much is a result of technical >>> disagreements that simply cannot (or will not) be resolved among >>> contributors in the container development ecosystem. >>> >>> Or is it some combination of the above? I don't know. >>> >>> What I *do* know is that the current "NIH du jour" mentality currently >>> playing itself out in the container ecosystem -- reminding me very much >>> of the Javascript ecosystem -- makes it difficult for any potential >>> *consumers* of container libraries, runtimes or applications to be >>> confident that any choice they make towards one of the other will be the >>> *right* choice or even a *possible* choice next year -- or next week. >>> Perhaps this is why things like openstack/paunch exist -- to give you >>> options if something doesn't pan out. >> >> This is exactly why paunch exists. >> >> Re, the podman thing I look at it as an implementation detail. The >> good news is that given it is almost a parity replacement for what we >> already use we'll still contribute to the OpenStack community in >> similar ways. Ultimately whether you run 'docker run' or 'podman run' >> you end up with the same thing as far as the existing TripleO >> architecture goes. 
>> >> Dan >> >>> >>> You have a tough job. I wish you all the luck in the world in making >>> these decisions and hope politics and internal corporate management >>> decisions play as little a role in them as possible. >>> >>> Best, >>> -jay >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Bogdan Dobrelya, Irc #bogdando From duc.openstack at gmail.com Thu Aug 23 16:52:03 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Thu, 23 Aug 2018 09:52:03 -0700 Subject: [openstack-dev] [senlin] Senlin Weekly Meeting Time Change In-Reply-To: 
<20180821091212.GA13959@rcp.sl.cloud9.ibm.com> References: <20180821091212.GA13959@rcp.sl.cloud9.ibm.com> Message-ID: Thanks everyone for replying. Since there were no objections, we will move to the new meeting time. Our first meeting will be this week on Friday August 24 at 5:30 UTC. The meeting agenda has been posted: https://wiki.openstack.org/wiki/Meetings/SenlinAgenda#Agenda_.282018-08-24_0530_UTC.29 Feel free to add any items you want to discuss. Looking forward to seeing everyone at the meeting. Duc From miguel at mlavalle.com Thu Aug 23 16:57:24 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 23 Aug 2018 11:57:24 -0500 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: References: Message-ID: Hi Gilles, Ed pinged me earlier today in IRC in regards to this topic. After reading your message, I assumed that you had patches up for review in Gerrit. I looked for them, with the intent to list them in the agenda of the next Neutron team meeting, to draw attention to them. I couldn't find any, though: https://review.openstack.org/#/q/owner:%22Gilles+Dubreuil+%253Cgdubreui%2540redhat.com%253E%22 So, how can we help? This is our meetings schedule: http://eavesdrop.openstack.org/#Neutron_Team_Meeting. Given that you are Down Under at UTC+10, the most convenient meeting for you is the one on Monday (even weeks), which would be Tuesday at 7am for you. Please note that we have an on demand section in our agenda: https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda. Feel free to add topics in that section when you have something to discuss with the Neutron team. Best regards Miguel On Sun, Aug 19, 2018 at 10:57 PM, Gilles Dubreuil wrote: > > > On 25/07/18 23:48, Ed Leafe wrote: > >> On Jun 6, 2018, at 7:35 PM, Gilles Dubreuil wrote: >> >>> The branch is now available under feature/graphql on the neutron core >>> repository [1]. >>> >> I wanted to follow up with you on this effort. 
I haven’t seen any >> activity on StoryBoard for several weeks now, and wanted to be sure that >> there was nothing blocking you that we could help with. >> >> >> -- Ed Leafe >> >> >> >> Hi Ed, > > Thanks for following up. > > There have been two essential counterproductive factors in this effort. > > The first is that I've been busy attending to issues in other parts of my job. > The second one is the lack of response/follow-up from the Neutron core > team. > > We have all the plumbing in place but we need to layer the data through > oslo policies. > > Cheers, > Gilles From msm at redhat.com Thu Aug 23 17:02:29 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 23 Aug 2018 13:02:29 -0400 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, This week's meeting brings the return of the full SIG core-quartet as all core members were in attendance. The main topics were the agenda [7] for the upcoming Denver PTG [8], and the API-SIG still being listed as a TC working group in the governance repository reference files. We also pushed a minor technical change related to the reorganization of the project-config for the upcoming Python 3 transition [9]. On the topic of the PTG, there were no new items added or comments about the current list [7]. There was brief talk about who will be attending the gathering, but the details have not been finalized yet. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. 
* The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * None # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community. * None # Guidelines Currently Under Review [3] * Add an api-design doc with design advice https://review.openstack.org/592003 * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://etherpad.openstack.org/p/api-sig-stein-ptg [8] https://www.openstack.org/ptg/ [9] https://review.openstack.org/#/c/593943/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From dtantsur at redhat.com Thu Aug 23 17:05:31 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 23 Aug 2018 19:05:31 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <69cb3ecc-0fa9-d43b-7f44-e01bed9fd240@redhat.com> References: <69cb3ecc-0fa9-d43b-7f44-e01bed9fd240@redhat.com> Message-ID: <6ec18970-b835-4604-6f55-541aca5afbe4@redhat.com> On 08/17/2018 07:45 AM, Cédric Jeanneret wrote: > > > On 08/17/2018 12:25 AM, Steve Baker wrote: >> >> >> On 15/08/18 21:32, Cédric Jeanneret wrote: >>> Dear Community, >>> >>> As you may know, a move toward Podman as replacement of Docker is starting. >>> >>> One of the issues with podman is the lack of daemon, precisely the lack >>> of a socket allowing to send commands and get a "computer formatted >>> output" (like JSON or YAML or...). >>> >>> In order to work that out, Podman has added support for varlink¹, using >>> the "socket activation" feature in Systemd. >>> >>> On my side, I would like to push forward the integration of varlink in >>> TripleO deployed containers, especially since it will allow the following: >>> # proper interface with Paunch (via python link) >> I'm not sure this would be desirable. 
If we're going to do all container >> management via a socket I think we'd be better supported by using CRI-O. >> One of the advantages I see of podman is being able to manage services >> with systemd again. > > Using the socket wouldn't prevent a "per service" systemd unit. Varlink > would just provide another way to manage the containers. > It's NOT like the docker daemon - it will not manage the containers on > startup for example. It's just an API endpoint, without any "automated > powers". > > See it as an interesting complement to the CLI, allowing easy access to > container data with a computer-oriented language like python3. > >>> # a way to manage containers from within specific containers (think >>> "healthcheck", "monitoring") by mounting the socket as a shared volume >>> >>> # a way to get container statistics (think "metrics") >>> >>> # a way, if needed, to get an ansible module being able to talk to >>> podman (JSON is always better than plain text) >>> >>> # a way to secure the accesses to Podman management (we have to define >>> how varlink talks to Podman, maybe providing a dedicated socket with >>> dedicated rights so that we can have dedicated users for specific tasks) >> Some of these cases might prove to be useful, but I do wonder if just >> making podman calls would be just as simple without the complexity of >> having another host-level service to manage. We can still do podman >> operations inside containers by bind-mounting in the container state. > > I wouldn't mount the container state as-is for mainly security reasons. > I'd rather get the varlink abstraction rather than the plain `podman' > CLI - in addition, it is far, far easier for applications to get a > proper JSON instead of some random plain text - even if `podman' seems > to get a "--format" option. I really dislike calling "subprocess" things > when there is a nice API interface - maybe that's just me ;). 
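To make the JSON point above concrete, here is a minimal sketch of what consuming structured container-engine output looks like. The payload shape and field names below are assumptions for illustration only, loosely modeled on `podman ps --format json`; they are not podman's exact schema, which varies between versions.

```python
import json

# Illustrative sample shaped like `podman ps --format json` output.
# NOTE: the field names here are assumptions for this sketch, not
# podman's actual schema.
SAMPLE_PS_JSON = """
[
  {"names": ["nova_compute"], "status": "Up 2 hours"},
  {"names": ["neutron_api"], "status": "Exited (1) 5 minutes ago"}
]
"""

def unhealthy_containers(ps_json):
    """Return the names of containers that are not currently up.

    With JSON, every field is unambiguous -- no fragile column or
    regex scraping of plain-text CLI output is required.
    """
    return [
        name
        for entry in json.loads(ps_json)
        for name in entry["names"]
        if not entry["status"].startswith("Up")
    ]

print(unhealthy_containers(SAMPLE_PS_JSON))  # -> ['neutron_api']
```

The same parsing code would apply whether the JSON arrives from a CLI `--format json` call or over a varlink socket; only the transport changes, not the consumer.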
> > In addition, apparently the state is managed by some sqlite DB - > concurrent accesses to that DB isn't really a good idea, we really don't > want a corruption, do we? IIRC sqlite handles concurrent accesses, it just does them slowly. > >> >>> That said, I have some questions: >>> ° Does any of you have some experience with varlink and podman interface? >>> ° What do you think about that integration wish? >>> ° Does any of you have concern with this possible addition? >> I do worry a bit that it is advocating for a solution before we really >> understand the problems. The biggest unknown for me is what we do about >> healthchecks. Maybe varlink is part of the solution here, or maybe its a >> systemd timer which executes the healthcheck and restarts the service >> when required. > > Maybe. My main concern is: would it be interesting to compare both > solutions? > The Healthchecks are clearly docker-specific, no interface exists atm in > the libpod for that. So we have to mimic it in the best way. > Maybe the healthchecks place is in systemd, and varlink would be used > only for external monitoring and metrics. That would also be a nice way > to explore. > > I would not focus on only one of the possibilities I've listed. There > are probably even more possibilities I didn't see - once we get a proper > socket, anything is possible, the good and the bad ;). > >>> Thank you for your feedback and ideas. >>> >>> Have a great day (or evening, or whatever suits the time you're reading >>> this ;))! >>> >>> C. 
>>> ¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/ From dms at danplanet.com Thu Aug 23 17:06:51 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 23 Aug 2018 10:06:51 -0700 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> (Eric Fried's message of "Thu, 23 Aug 2018 09:51:21 -0500") References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> Message-ID: > Do you mean an actual fixture, that would be used like: > > class MyTestCase(testtools.TestCase): > def setUp(self): > self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids > > def test_foo(self): > do_a_thing_with(self.uuids.foo) > > ? > > That's... okay I guess, but the refactoring necessary to cut over to it > will now entail adding 'self.' to every reference. Is there any way > around that? I don't think it's okay. 
It makes it a lot more work to use it, where merely importing it (exactly like mock.sentinel) is a large factor in how incredibly convenient it is. --Dan From geguileo at redhat.com Thu Aug 23 17:07:56 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 23 Aug 2018 19:07:56 +0200 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: <20180823104210.kgctxfjiq47uru34@localhost> Message-ID: <20180823170756.sz5qj2lxdy4i4od2@localhost> On 23/08, Dan Smith wrote: > > I think Nova should never have to rely on Cinder's hosts/backends > > information to do migrations or any other operation. > > > > In this case even if Nova had that info, it wouldn't be the solution. > > Cinder would reject migrations if there's an incompatibility on the > > Volume Type (AZ, Referenced backend, capabilities...) > > I think I'm missing a bunch of cinder knowledge required to fully grok > this situation and probably need to do some reading. Is there some > reason that a volume type can't exist in multiple backends or something? > I guess I think of volume type as flavor, and the same definition in two > places would be interchangeable -- is that not the case? > Hi, I just know the basics of flavors, and they are kind of similar, though I'm sure there are quite a few differences. Sure, multiple storage arrays can meet the requirements of a Volume Type, but then when you create the volume you don't know where it's going to land. If your volume type is too generic, your volume could land somewhere your cell cannot reach. > > I don't know anything about Nova cells, so I don't know the specifics of > > how we could do the mapping between them and Cinder backends, but > > considering the limited range of possibilities in Cinder I would say we > > only have Volume Types and AZs to work a solution. > > I think the only mapping we need is affinity or distance. 
The point of > needing to migrate the volume would purely be because moving cells > likely means you moved physically farther away from where you were, > potentially with different storage connections and networking. It > doesn't *have* to mean that, but I think in reality it would. So the > question I think Matt is looking to answer here is "how do we move an > instance from a DC in building A to building C and make sure the > volume gets moved to some storage local in the new building so we're > not just transiting back to the original home for no reason?" > > Does that explanation help or are you saying that's fundamentally hard > to do/orchestrate? > > Fundamentally, the cells thing doesn't even need to be part of the > discussion, as the same rules would apply if we're just doing a normal > migration but need to make sure that storage remains affined to compute. > We could probably work something out using the affinity filter, but right now we don't have a way of doing what you need. We could probably rework the migration to accept scheduler hints to be used with the affinity filter and to accept calls with the host or the hints, that way it could migrate a volume without knowing the destination host and decide it based on affinity. We may have to do more modifications, but it could be a way to do it. > > I don't know how the Nova Placement works, but it could hold an > > equivalency mapping of volume types to cells as in: > > > > Cell#1 Cell#2 > > > > VolTypeA <--> VolTypeD > > VolTypeB <--> VolTypeE > > VolTypeC <--> VolTypeF > > > > Then it could do volume retypes (allowing migration) and that would > > properly move the volumes from one backend to another. > > The only way I can think that we could do this in placement would be if > volume types were resource providers and we assigned them traits that > had special meaning to nova indicating equivalence. 
Several of the words > in that sentence are likely to freak out placement people, myself > included :) > > So is the concern just that we need to know what volume types in one > backend map to those in another so that when we do the migration we know > what to ask for? Is "they are the same name" not enough? Going back to > the flavor analogy, you could kinda compare two flavor definitions and > have a good idea if they're equivalent or not... > > --Dan In Cinder you don't get that from Volume Types, unless all your backends have the same hardware and are configured exactly the same. There can be some storage specific information there, which doesn't correlate to anything on other hardware. Volume types may refer to a specific pool that has been configured in the array to use a specific type of disks. But even the info on the type of disks is unknown to the volume type. I haven't checked the PTG agenda yet, but is there a meeting on this? Because we may want to have one to try to understand the requirements and figure out if there's a way to do it with current Cinder functionality or if we'd need something new. Cheers, Gorka. From mriedemos at gmail.com Thu Aug 23 17:13:24 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 23 Aug 2018 12:13:24 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <20180822182544.iuxhmrugmclc42wh@yuggoth.org> References: <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local> <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> <1534945106-sup-4359@lrrr.local> <775949fc-a058-a076-06a5-c42bb8d016ec@gmail.com> <20180822182544.iuxhmrugmclc42wh@yuggoth.org> Message-ID: <5cd8ad14-9f08-3f51-c60e-0204bd47b518@gmail.com> On 8/22/2018 1:25 PM, Jeremy Stanley wrote: > On 2018-08-22 11:03:43 -0700 (-0700), melanie witt wrote: > [...] 
>> I think it's about context. If two separate projects do their own priority >> and goal setting, separately, I think they will naturally be more different >> than they would be if they were one project. Currently, we agree on goals >> and priorities together, in the compute context. If placement has its own >> separate context, the priority setting and goal planning will be done in the >> context of placement. In two separate groups, someone who is a member of >> both the Nova and Placement teams would have to persuade Placement-only >> members to agree to prioritize a particular item. This may sound subtle, but >> it's a notable difference in how things work when it's one team vs two >> separate teams. I think having shared context and alignment, at this point >> in time, when we have outstanding closely coupled nova/placement work to do, >> is critical in delivering for operators and users who are depending on us. > [...] > > I'm clearly missing some critical detail about the relationships in > the Nova team. Don't the Nova+Placement contributors already have to > convince the Placement-only contributors what to prioritize working > on? Yes. But it's not a huge gun to the head kind of situation. It's more like, "We (nova) need X (in Placement) otherwise we can't get to Y." There are people that clearly work more on placement than the rest of nova (Chris and Tetsuro come to mind). So what normally happens is Chris, or Eric, or Jay, or someone will work on the Placement side stuff and we'll be stacking the nova-side client bits on top. That's exactly how [1] worked. Chris did the placement stuff that Dan needed to do the nova stuff. For [2] Chris and Eric are both working on the placement stuff and Eric has done the framework stuff in nova for the virt drivers to interface with. 
Despite what is coming up in the ML thread and the tc channel, I myself am not seeing a horde of feature requests breaking down the door and being ignored/rejected because they are placement-only things that nova doesn't itself need. Cyborg is probably as close to consuming/using placement as we have outside of nova. Apparently blazar and zun have thought about using placement, but I'm not aware of anything more than talk so far. If those projects (or other people) "feel" like their requests will be rejected because the mean old nova monsters don't like non-nova things, then I would say that feeling is unjustified until the specific technical feature requests are brought up. > Or are you saying that if they disagree that's fine because the > Nova+Placement contributors will get along just fine without the > Placement-only contributors helping them get it done? It's a mixed team for the most part. As I said, Jay and Eric work on both nova and placement. Chris and Tetsuro are mostly Placement but the work they are doing is to enable things that nova needs. I would not say "get along just fine". The technical/talent gap would be felt, which is true of losing any strong contributors to a piece of a project - that's true of any time someone leaves the community, whether on their own choosing (e.g. danpb/sdague) or not (e.g. alaski/johnthetubaguy). [1] https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/migration-allocations.html [2] https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/reshape-provider-tree.html -- Thanks, Matt From mriedemos at gmail.com Thu Aug 23 17:24:02 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 23 Aug 2018 12:24:02 -0500 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: <68248e7c-14f6-d6f2-d87f-8fceb1eed7d6@openstack.org> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local> <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> <1534945106-sup-4359@lrrr.local> <775949fc-a058-a076-06a5-c42bb8d016ec@gmail.com> <68248e7c-14f6-d6f2-d87f-8fceb1eed7d6@openstack.org> Message-ID: <6b52fa7b-4b13-3663-6a65-fdfa0ed1b425@gmail.com> On 8/23/2018 4:00 AM, Thierry Carrez wrote: > In the OpenStack governance model, contributors to a given piece of code > control its destiny. This is pretty damn fuzzy. So if someone wants to split out nova-compute into a new repo/project/governance with a REST API and all that, nova-core has no say in the matter? -- Thanks, Matt From doug at doughellmann.com Thu Aug 23 17:25:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 23 Aug 2018 13:25:59 -0400 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> Message-ID: <1535045097-sup-986@lrrr.local> Excerpts from Eric Fried's message of 2018-08-23 09:51:21 -0500: > Do you mean an actual fixture, that would be used like: > > class MyTestCase(testtools.TestCase): > def setUp(self): > self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids > > def test_foo(self): > do_a_thing_with(self.uuids.foo) > > ? > > That's... okay I guess, but the refactoring necessary to cut over to it > will now entail adding 'self.' to every reference. Is there any way > around that? That is what I had envisioned, yes. In the absence of a global, which we do not want, what other API would you propose? 
Doug > > efried > > On 08/23/2018 07:40 AM, Jay Pipes wrote: > > On 08/23/2018 08:06 AM, Doug Hellmann wrote: > >> Excerpts from Davanum Srinivas (dims)'s message of 2018-08-23 06:46:38 > >> -0400: > >>> Where exactly Eric? I can't seem to find the import: > >>> > >>> http://codesearch.openstack.org/?q=(from%7Cimport).*oslotest&i=nope&files=&repos=oslo.utils > >>> > >>> > >>> -- dims > >> > >> oslo.utils depends on oslotest via test-requirements.txt and oslotest is > >> used within the test modules in oslo.utils. > >> > >> As I've said on both reviews, I think we do not want a global > >> singleton instance of this sentinal class. We do want a formal test > >> fixture.  Either library can export a test fixture and olso.utils > >> already has oslo_utils.fixture.TimeFixture so there's precedent to > >> adding it there, so I have a slight preference for just doing that. > >> > >> That said, oslo_utils.uuidutils.generate_uuid() is simply returning > >> str(uuid.uuid4()). We have it wrapped up as a function so we can > >> mock it out in other tests, but we hardly need to rely on that if > >> we're making a test fixture for oslotest. > >> > >> My vote is to add a new fixture class to oslo_utils.fixture. > > > > OK, thanks for the helpful explanation, Doug. Works for me. 
> > > > -jay > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jaypipes at gmail.com Thu Aug 23 17:41:34 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 23 Aug 2018 13:41:34 -0400 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: <1535045097-sup-986@lrrr.local> References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> <1535045097-sup-986@lrrr.local> Message-ID: <315eac7a-2fed-e2ae-538e-e589dea7cf93@gmail.com> On 08/23/2018 01:25 PM, Doug Hellmann wrote: > Excerpts from Eric Fried's message of 2018-08-23 09:51:21 -0500: >> Do you mean an actual fixture, that would be used like: >> >> class MyTestCase(testtools.TestCase): >> def setUp(self): >> self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids >> >> def test_foo(self): >> do_a_thing_with(self.uuids.foo) >> >> ? >> >> That's... okay I guess, but the refactoring necessary to cut over to it >> will now entail adding 'self.' to every reference. Is there any way >> around that? > > That is what I had envisioned, yes. In the absence of a global, > which we do not want, what other API would you propose? As dansmith mentioned, the niceness and simplicity of being able to do: import nova.tests.uuidsentinel as uuids .. def test_something(self): my_uuid = uuids.instance1 is remarkably powerful and is something I would want to keep. 
Best, -jay From sean.mcginnis at gmx.com Thu Aug 23 18:08:17 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 23 Aug 2018 13:08:17 -0500 Subject: [openstack-dev] [release] Release countdown for week R-0, August 27 - 31 In-Reply-To: References: <20180823161228.GA27186@sm-workstation> Message-ID: <20180823180817.GA31156@sm-workstation> > > > > We are still missing releases for the following tempest plugins. Some are > > pending getting pypi and release jobs set up, but please try to prioritize > > getting these done as soon as possible. > > > > barbican-tempest-plugin > > blazar-tempest-plugin > > cloudkitty-tempest-plugin > > congress-tempest-plugin > > ec2api-tempest-plugin > > magnum-tempest-plugin > > mistral-tempest-plugin > > monasca-kibana-plugin > > monasca-tempest-plugin > > murano-tempest-plugin > > networking-generic-switch-tempest-plugin > > oswin-tempest-plugin > > senlin-tempest-plugin > > telemetry-tempest-plugin > > tripleo-common-tempest-plugin > > trove-tempest-plugin > > watcher-tempest-plugin > > zaqar-tempest-plugin > > tempest-horizon is missing from the list. horizon team needs to > release tempest-horizon. > It does not follow the naming convention so it seems to be missed from the list. > > Thanks, > Akihiro Motoki (amotoki) > Ah, good catch Akihiro, thanks! Maybe if it can be done quickly, before a release might be a good time to update the package name to match the convention used elsewhere. But we are running short on time and there's probably more involved in doing that than just updating the package name. From juliaashleykreger at gmail.com Thu Aug 23 18:24:08 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 23 Aug 2018 12:24:08 -0600 Subject: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes Message-ID: Greetings everyone! 
In our team meeting this week we stumbled across the subject of promoting contributors to be sub-project's core reviewers. Traditionally it is something we've only addressed as needed or desired by consensus within those sub-projects, but we were past due for a look at the entire picture since not everything should fall to ironic-core. And so, I've taken a look at our various repositories and I'm proposing the following additions: For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya Etingof[1]. Ilya has been actively involved with sushy, sushy-tools, and virtualbmc this past cycle. I've found many of his reviews and non-voting review comments insightful and willing to understand. He has taken on some of the effort that is needed to maintain and keep these tools usable for the community, and as such adding him to the core group for these repositories makes lots of sense. For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2]. Kaifeng has taken on some hard problems in ironic and ironic-inspector, as well as brought up insightful feedback in ironic-specs. They are demonstrating a solid understanding that I only see growing as time goes on. For sushy-core: Debayan Ray[3]. Debayan has been involved with the community for some time and has worked on sushy from early on in its life. He has indicated it is near and dear to him, and he has been actively reviewing and engaging in discussion on patchsets as his time has permitted. With any addition it is good to look at inactivity as well. It saddens me to say that we've had some contributors move on as priorities have shifted to where they are no longer involved with the ironic community. Each person listed below has been inactive for a year or more and is no longer active in the ironic community. As such I've removed their group membership from the sub-project core reviewer groups. Should they return, we will welcome them back to the community with open arms. 
bifrost-core: Stephanie Miller[4] ironic-inspector-core: Anton Arefivev[5] ironic-ui-core: Peter Peila[6], Beth Elwell[7] Thanks, -Julia [1]: http://stackalytics.com/?user_id=etingof&metric=marks [2]: http://stackalytics.com/?user_id=kaifeng&metric=marks [3]: http://stackalytics.com/?user_id=deray&metric=marks&release=all [4]: http://stackalytics.com/?metric=marks&release=all&user_id=stephaneeee [5]: http://stackalytics.com/?user_id=aarefiev&metric=marks [6]: http://stackalytics.com/?metric=marks&release=all&user_id=ppiela [7]: http://stackalytics.com/?metric=marks&release=all&user_id=bethelwell&module=ironic-ui From openstack at fried.cc Thu Aug 23 18:42:50 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 23 Aug 2018 13:42:50 -0500 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: <315eac7a-2fed-e2ae-538e-e589dea7cf93@gmail.com> References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> <1535045097-sup-986@lrrr.local> <315eac7a-2fed-e2ae-538e-e589dea7cf93@gmail.com> Message-ID: <3f2131e5-6785-0429-e731-81c1287b39ff@fried.cc> The compromise, using the patch as currently written [1], would entail adding one line at the top of each test file: uuids = uuidsentinel.UUIDSentinels() ...as seen (more or less) at [2]. The subtle difference being that this `uuids` wouldn't share a namespace across the whole process, only within that file. Given current usage, that shouldn't cause a problem, but it's a change. 
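To make the namespace difference concrete, here is a hedged sketch — it assumes a UUIDSentinels class that lazily maps attribute names to freshly generated UUID strings, as an illustration of the per-file compromise rather than the actual patch under review:

```python
import uuid


class UUIDSentinels(object):
    """Sketch: each instance maps attribute names to stable UUID strings."""

    def __init__(self):
        self._sentinels = {}

    def __getattr__(self, name):
        if name.startswith('_'):
            raise AttributeError(name)
        return self._sentinels.setdefault(name, str(uuid.uuid4()))


# Per-file instances, as in the compromise: each test module creates its
# own ``uuids``, so a name like ``foo`` is stable within one file but
# refers to a *different* UUID in another file.
uuids_file_a = UUIDSentinels()
uuids_file_b = UUIDSentinels()

assert uuids_file_a.foo == uuids_file_a.foo   # stable within a file
assert uuids_file_a.foo != uuids_file_b.foo   # not shared across files
```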
-efried [1] https://review.openstack.org/#/c/594068/9 [2] https://review.openstack.org/#/c/594068/9/oslotest/tests/unit/test_uuidsentinel.py at 22 On 08/23/2018 12:41 PM, Jay Pipes wrote: > On 08/23/2018 01:25 PM, Doug Hellmann wrote: >> Excerpts from Eric Fried's message of 2018-08-23 09:51:21 -0500: >>> Do you mean an actual fixture, that would be used like: >>> >>>   class MyTestCase(testtools.TestCase): >>>       def setUp(self): >>>           self.uuids = >>> self.useFixture(oslofx.UUIDSentinelFixture()).uuids >>> >>>       def test_foo(self): >>>           do_a_thing_with(self.uuids.foo) >>> >>> ? >>> >>> That's... okay I guess, but the refactoring necessary to cut over to it >>> will now entail adding 'self.' to every reference. Is there any way >>> around that? >> >> That is what I had envisioned, yes.  In the absence of a global, >> which we do not want, what other API would you propose? > > As dansmith mentioned, the niceness and simplicity of being able to do: > >  import nova.tests.uuidsentinel as uuids > >  .. > >  def test_something(self): >      my_uuid = uuids.instance1 > > is remarkably powerful and is something I would want to keep. > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From skaplons at redhat.com Thu Aug 23 18:58:55 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Thu, 23 Aug 2018 20:58:55 +0200 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: References: Message-ID: <9248BC68-780C-4A6F-8236-F381C5A78D56@redhat.com> Hi Miguel, I’m not sure but maybe You were looking for those patches: https://review.openstack.org/#/q/project:openstack/neutron+branch:feature/graphql > Wiadomość napisana przez Miguel Lavalle w dniu 23.08.2018, o godz. 
18:57: > > Hi Gilles, > > Ed pinged me earlier today in IRC in regards to this topic. After reading your message, I assumed that you had patches up for review in Gerrit. I looked for them, with the intent to list them in the agenda of the next Neutron team meeting, to draw attention to them. I couldn't find any, though: https://review.openstack.org/#/q/owner:%22Gilles+Dubreuil+%253Cgdubreui%2540redhat.com%253E%22 > > So, how can we help? This is our meetings schedule: http://eavesdrop.openstack.org/#Neutron_Team_Meeting. Given that you are Down Under at UTC+10, the most convenient meeting for you is the one on Monday (even weeks), which would be Tuesday at 7am for you. Please note that we have an on demand section in our agenda: https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda. Feel free to add topics in that section when you have something to discuss with the Neutron team. > > Best regards > > Miguel > > On Sun, Aug 19, 2018 at 10:57 PM, Gilles Dubreuil wrote: > > > On 25/07/18 23:48, Ed Leafe wrote: > On Jun 6, 2018, at 7:35 PM, Gilles Dubreuil wrote: > The branch is now available under feature/graphql on the neutron core repository [1]. > I wanted to follow up with you on this effort. I haven’t seen any activity on StoryBoard for several weeks now, and wanted to be sure that there was nothing blocking you that we could help with. > > > -- Ed Leafe > > > > Hi Ed, > > Thanks for following up. > > There has been 2 essential counterproductive factors to the effort. > > The first is that I've been busy attending issues on other part of my job. > The second one is the lack of response/follow-up from the Neutron core team. > > We have all the plumbing in place but we need to layer the data through oslo policies. 
> > Cheers, > Gilles > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From no-reply at openstack.org Thu Aug 23 19:00:47 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 23 Aug 2018 19:00:47 -0000 Subject: [openstack-dev] neutron-fwaas 13.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for neutron-fwaas for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron-fwaas/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/neutron-fwaas/log/?h=stable/rocky Release notes for neutron-fwaas can be found at: https://docs.openstack.org/releasenotes/neutron-fwaas/ From dms at danplanet.com Thu Aug 23 19:02:39 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 23 Aug 2018 12:02:39 -0700 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: <3f2131e5-6785-0429-e731-81c1287b39ff@fried.cc> (Eric Fried's message of "Thu, 23 Aug 2018 13:42:50 -0500") References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> <1535045097-sup-986@lrrr.local> <315eac7a-2fed-e2ae-538e-e589dea7cf93@gmail.com> <3f2131e5-6785-0429-e731-81c1287b39ff@fried.cc> Message-ID: > The compromise, using the patch as currently written [1], would entail > adding one line at the top of each test file: > > uuids = uuidsentinel.UUIDSentinels() > > ...as seen (more or less) at [2]. The subtle difference being that this > `uuids` wouldn't share a namespace across the whole process, only within > that file. Given current usage, that shouldn't cause a problem, but it's > a change. ...and it doesn't work like mock.sentinel does, which is part of the value. I really think we should put this wherever it needs to be so that it can continue to be as useful as it is today. Even if that means just copying it into another project -- it's not that complicated of a thing. 
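For comparison, the mock.sentinel behaviour referenced here is standard library functionality and works roughly like this:

```python
from unittest import mock

# Attributes spring into existence on first access and are cached, so the
# same name always returns the identical object anywhere in the process.
assert mock.sentinel.foo is mock.sentinel.foo
assert mock.sentinel.foo is not mock.sentinel.bar

# Unlike a UUID sentinel, mock.sentinel values are opaque named objects
# rather than valid UUID strings.
assert repr(mock.sentinel.foo) == 'sentinel.foo'
```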
--Dan From cdent+os at anticdent.org Thu Aug 23 19:05:40 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 23 Aug 2018 20:05:40 +0100 (BST) Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> <1535045097-sup-986@lrrr.local> <315eac7a-2fed-e2ae-538e-e589dea7cf93@gmail.com> <3f2131e5-6785-0429-e731-81c1287b39ff@fried.cc> Message-ID: On Thu, 23 Aug 2018, Dan Smith wrote: > ...and it doesn't work like mock.sentinel does, which is part of the > value. I really think we should put this wherever it needs to be so that > it can continue to be as useful as is is today. Even if that means just > copying it into another project -- it's not that complicated of a thing. Yeah, I agree. I had hoped that we could make something that was generally useful, but its main value is its interface and if we can't have that interface in a library, having it per codebase is no biggie. For example it's been copied straight from nova into the placement extractions experiments with no changes and, as one would expect, works just fine. Unless people are wed to doing something else, Dan's right, let's just do that. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mriedemos at gmail.com Thu Aug 23 19:28:57 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 23 Aug 2018 14:28:57 -0500 Subject: [openstack-dev] [nova] Rocky blueprint burndown chart In-Reply-To: <045ab2da-8784-03e6-ad82-8d013a95d2d7@gmail.com> References: <045ab2da-8784-03e6-ad82-8d013a95d2d7@gmail.com> Message-ID: On 8/15/2018 3:47 PM, melanie witt wrote: > I think part of the miss on the number of approvals might be because we > extended the spec freeze date to milestone r-2 because of runways, > thinking that if we completed enough things, we could approve more > things. 
We didn't predict that accurately but with the experience, my > hope is we can do better in Stein. We could consider moving spec freeze > back to milestone s-1 or have rough criteria on whether to approve more > blueprints close to s-2 (for example, if 30%? of approved blueprints > have been completed, OK to approve more). > > If you have feedback or thoughts on any of this, feel free to reply to > this thread or add your comments to the Rocky retrospective etherpad [4] > and we can discuss at the PTG. The completion percentage was about the same as Queens, which is good to know. And I think is good at around 80%. Some things get deferred not because of a lack of reviewer attention but because the contributor stalled out or had higher priority work to complete. We approved more stuff in Rocky because we had more time to approve stuff (spec freeze in Queens was the first milestone, it was the second milestone in Rocky). So with completion rates about the same but with more stuff approved/completed in Rocky, what is the difference? From a relatively intangible / gut feeling standpoint, I would say one answer is in Queens we had a pretty stable, issue free release period but I can't say that is the same for Rocky where we're down to the wire getting stuff done for our third release candidate on the final day for release candidates. So it stands to reason that the earlier we cut the approvals on new stuff and have more burn in time for what we do complete, we have a smoother release at the end. That's not really rocket science, it's common sense. So I think going back to spec freeze on s-1 is likely a good idea in Stein now that we know how runways went. We can always make exceptions for high priority stuff if needed after s-1, like we did with reshaper in Rocky (even though we didn't get it done). 
-- Thanks, Matt From jim at jimrollenhagen.com Thu Aug 23 19:43:27 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 23 Aug 2018 15:43:27 -0400 Subject: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes In-Reply-To: References: Message-ID: ++ // jim On Thu, Aug 23, 2018 at 2:24 PM, Julia Kreger wrote: > Greetings everyone! > > In our team meeting this week we stumbled across the subject of > promoting contributors to be sub-project's core reviewers. > Traditionally it is something we've only addressed as needed or > desired by consensus with-in those sub-projects, but we were past due > time to take a look at the entire picture since not everything should > fall to ironic-core. > > And so, I've taken a look at our various repositories and I'm > proposing the following additions: > > For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya > Etingof[1]. Ilya has been actively involved with sushy, sushy-tools, > and virtualbmc this past cycle. I've found many of his reviews and > non-voting review comments insightful and willing to understand. He > has taken on some of the effort that is needed to maintain and keep > these tools usable for the community, and as such adding him to the > core group for these repositories makes lots of sense. > > For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2]. > Kaifeng has taken on some hard problems in ironic and > ironic-inspector, as well as brought up insightful feedback in > ironic-specs. They are demonstrating a solid understanding that I only > see growing as time goes on. > > For sushy-core: Debayan Ray[3]. Debayan has been involved with the > community for some time and has worked on sushy from early on in its > life. He has indicated it is near and dear to him, and he has been > actively reviewing and engaging in discussion on patchsets as his time > has permitted. 
> > With any addition it is good to look at inactivity as well. It saddens > me to say that we've had some contributors move on as priorities have > shifted to where they are no longer involved with the ironic > community. Each person listed below has been inactive for a year or > more and is no longer active in the ironic community. As such I've > removed their group membership from the sub-project core reviewer > groups. Should they return, we will welcome them back to the community > with open arms. > > bifrost-core: Stephanie Miller[4] > ironic-inspector-core: Anton Arefivev[5] > ironic-ui-core: Peter Peila[6], Beth Elwell[7] > > Thanks, > > -Julia > > [1]: http://stackalytics.com/?user_id=etingof&metric=marks > [2]: http://stackalytics.com/?user_id=kaifeng&metric=marks > [3]: http://stackalytics.com/?user_id=deray&metric=marks&release=all > [4]: http://stackalytics.com/?metric=marks&release=all&user_id=stephaneeee > [5]: http://stackalytics.com/?user_id=aarefiev&metric=marks > [6]: http://stackalytics.com/?metric=marks&release=all&user_id=ppiela > [7]: http://stackalytics.com/?metric=marks&release=all&user_ > id=bethelwell&module=ironic-ui > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Thu Aug 23 19:45:17 2018 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 23 Aug 2018 13:45:17 -0600 Subject: [openstack-dev] [release] Release countdown for week R-0, August 27 - 31 In-Reply-To: <20180823161228.GA27186@sm-workstation> References: <20180823161228.GA27186@sm-workstation> Message-ID: On Thu, Aug 23, 2018 at 10:12 AM, Sean McGinnis wrote: > This is the final countdown email for the Rocky development cycle. 
Thanks to > everyone involved in the Rocky release! > > Development Focus > ----------------- > > Teams attending the PTG should be preparing for those discussions and capturing > information in the etherpads: > > https://wiki.openstack.org/wiki/PTG/Stein/Etherpads > > General Information > ------------------- > > The release team plans on doing the final Rocky release on 29 August. We will > re-tag the last commit used for the final RC using the final version number. > > If you have not already done so, now would be a good time to take a look at the > Stein schedule and start planning team activities: > > https://releases.openstack.org/stein/schedule.html > > Actions > --------- > > PTLs and release liaisons should watch for the final release patch from the > release team. While not required, we would appreciate having an ack from each > team before we approve it on the 29th. > > We are still missing releases for the following tempest plugins. Some are > pending getting pypi and release jobs set up, but please try to prioritize > getting these done as soon as possible. > > barbican-tempest-plugin > blazar-tempest-plugin > cloudkitty-tempest-plugin > congress-tempest-plugin > ec2api-tempest-plugin > magnum-tempest-plugin > mistral-tempest-plugin > monasca-kibana-plugin > monasca-tempest-plugin > murano-tempest-plugin > networking-generic-switch-tempest-plugin > oswin-tempest-plugin > senlin-tempest-plugin > telemetry-tempest-plugin > tripleo-common-tempest-plugin To speak for the tripleo-common-tempest-plugin, it's currently not used and there aren't any tests, so I don't think it's in a spot for its first release during Rocky. I'm not sure the current status of this effort so it'll be something we'll need to raise at the PTG. 
> trove-tempest-plugin > watcher-tempest-plugin > zaqar-tempest-plugin > > Upcoming Deadlines & Dates > -------------------------- > > Final RC deadline: August 23 > Rocky Release: August 29 > Cycle trailing RC deadline: August 30 > Stein PTG: September 10-14 > Cycle trailing Rocky release: November 28 > > -- > Sean McGinnis (smcginnis) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Thu Aug 23 20:01:30 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 23 Aug 2018 15:01:30 -0500 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: <1535045097-sup-986@lrrr.local> References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> <1535045097-sup-986@lrrr.local> Message-ID: <1caecc54-c681-cad6-9664-8281ab2d4323@nemebean.com> On 08/23/2018 12:25 PM, Doug Hellmann wrote: > Excerpts from Eric Fried's message of 2018-08-23 09:51:21 -0500: >> Do you mean an actual fixture, that would be used like: >> >> class MyTestCase(testtools.TestCase): >> def setUp(self): >> self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids >> >> def test_foo(self): >> do_a_thing_with(self.uuids.foo) >> >> ? >> >> That's... okay I guess, but the refactoring necessary to cut over to it >> will now entail adding 'self.' to every reference. Is there any way >> around that? > > That is what I had envisioned, yes. In the absence of a global, > which we do not want, what other API would you propose? If we put it in oslotest instead, would the global still be a problem? Especially since mock has already established a pattern for this functionality? 
From melwittt at gmail.com Thu Aug 23 20:27:26 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 23 Aug 2018 13:27:26 -0700 Subject: [openstack-dev] [nova][vmware] need help triaging a vmware driver bug In-Reply-To: References: <45e95976-1e14-c466-8b4f-45aff35df4fb@gmail.com> Message-ID: <07bbd498-69e8-56ff-5e01-83ef0eea4cfd@gmail.com> On Fri, 17 Aug 2018 10:50:30 +0300, Radoslav Gerganov wrote: > Hi, > > On 17.08.2018 04:10, melanie witt wrote: >> >> Can anyone help triage this bug? >> > > I have requested more info from the person who submitted this and provided some tips how to correlate nova-compute logs to vCenter logs in order to better understand what went wrong. > Would it be possible to include this kind of information in the Launchpad bug template for VMware related bugs? Thank you for your help, Rado. So, I think we could add something to the launchpad bug template to link to a doc that explains tips about reporting VMware related bugs. I suggest linking to a doc because the bug template is already really long and looks like it would be best to have something short, like, "For tips on reporting VMware virt driver bugs, see this doc: " and provide a link to, for example, a openstack wiki about the VMware virt driver (is there one?). The question is, where can we put the doc? Wiki? Or maybe here at the bottom [1]? Let me know what you think. -melanie [1] https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-vmware.html From mark at stackhpc.com Thu Aug 23 20:38:05 2018 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 23 Aug 2018 21:38:05 +0100 Subject: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes In-Reply-To: References: Message-ID: +1 On Thu, 23 Aug 2018, 20:43 Jim Rollenhagen, wrote: > ++ > > > // jim > > On Thu, Aug 23, 2018 at 2:24 PM, Julia Kreger > wrote: > >> Greetings everyone! 
>> >> In our team meeting this week we stumbled across the subject of >> promoting contributors to be sub-project's core reviewers. >> Traditionally it is something we've only addressed as needed or >> desired by consensus with-in those sub-projects, but we were past due >> time to take a look at the entire picture since not everything should >> fall to ironic-core. >> >> And so, I've taken a look at our various repositories and I'm >> proposing the following additions: >> >> For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya >> Etingof[1]. Ilya has been actively involved with sushy, sushy-tools, >> and virtualbmc this past cycle. I've found many of his reviews and >> non-voting review comments insightful and willing to understand. He >> has taken on some of the effort that is needed to maintain and keep >> these tools usable for the community, and as such adding him to the >> core group for these repositories makes lots of sense. >> >> For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2]. >> Kaifeng has taken on some hard problems in ironic and >> ironic-inspector, as well as brought up insightful feedback in >> ironic-specs. They are demonstrating a solid understanding that I only >> see growing as time goes on. >> >> For sushy-core: Debayan Ray[3]. Debayan has been involved with the >> community for some time and has worked on sushy from early on in its >> life. He has indicated it is near and dear to him, and he has been >> actively reviewing and engaging in discussion on patchsets as his time >> has permitted. >> >> With any addition it is good to look at inactivity as well. It saddens >> me to say that we've had some contributors move on as priorities have >> shifted to where they are no longer involved with the ironic >> community. Each person listed below has been inactive for a year or >> more and is no longer active in the ironic community. As such I've >> removed their group membership from the sub-project core reviewer >> groups. 
Should they return, we will welcome them back to the community >> with open arms. >> >> bifrost-core: Stephanie Miller[4] >> ironic-inspector-core: Anton Arefivev[5] >> ironic-ui-core: Peter Peila[6], Beth Elwell[7] >> >> Thanks, >> >> -Julia >> >> [1]: http://stackalytics.com/?user_id=etingof&metric=marks >> [2]: http://stackalytics.com/?user_id=kaifeng&metric=marks >> [3]: http://stackalytics.com/?user_id=deray&metric=marks&release=all >> [4]: >> http://stackalytics.com/?metric=marks&release=all&user_id=stephaneeee >> [5]: http://stackalytics.com/?user_id=aarefiev&metric=marks >> [6]: http://stackalytics.com/?metric=marks&release=all&user_id=ppiela >> [7]: >> http://stackalytics.com/?metric=marks&release=all&user_id=bethelwell&module=ironic-ui >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Aug 23 20:59:13 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 23 Aug 2018 15:59:13 -0500 Subject: [openstack-dev] [release] Release countdown for week R-0, August 27 - 31 In-Reply-To: References: <20180823161228.GA27186@sm-workstation> Message-ID: <20180823205912.GA14963@sm-workstation> > > > > We are still missing releases for the following tempest plugins. Some are > > pending getting pypi and release jobs set up, but please try to prioritize > > getting these done as soon as possible. 
> > > > barbican-tempest-plugin > > blazar-tempest-plugin > > cloudkitty-tempest-plugin > > congress-tempest-plugin > > ec2api-tempest-plugin > > magnum-tempest-plugin > > mistral-tempest-plugin > > monasca-kibana-plugin > > monasca-tempest-plugin > > murano-tempest-plugin > > networking-generic-switch-tempest-plugin > > oswin-tempest-plugin > > senlin-tempest-plugin > > telemetry-tempest-plugin > > tripleo-common-tempest-plugin > > To speak for the tripleo-common-template-plugin, it's currently not > used and there aren't any tests so I don't think it's in a spot for > it's first release during Rocky. I'm not sure the current status of > this effort so it'll be something we'll need to raise at the PTG. > Thanks Alex. Odd that a repo was created with no tests. I think the goal was to split out in-repo tempest tests, not to ensure that every project has one whether they need it or not. I wonder if we should "retire" this repo until it is actually needed. I will propose a patch to the releases repo to drop the deliverable file at least. That will keep it from showing up in our list of unreleased repos. From no-reply at openstack.org Thu Aug 23 21:18:50 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 23 Aug 2018 21:18:50 -0000 Subject: [openstack-dev] sahara 9.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for sahara for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/sahara/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/sahara/log/?h=stable/rocky Release notes for sahara can be found at: https://docs.openstack.org/releasenotes/sahara/ From no-reply at openstack.org Thu Aug 23 21:19:12 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 23 Aug 2018 21:19:12 -0000 Subject: [openstack-dev] sahara-dashboard 9.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for sahara-dashboard for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/sahara-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/sahara-dashboard/log/?h=stable/rocky Release notes for sahara-dashboard can be found at: https://docs.openstack.org/releasenotes/sahara-dashboard/ From anne at openstack.org Thu Aug 23 21:21:26 2018 From: anne at openstack.org (Anne Bertucio) Date: Thu, 23 Aug 2018 14:21:26 -0700 Subject: [openstack-dev] [community][Rocky] Community Meeting: Rocky + project updates In-Reply-To: <87363388-E7B9-499B-AC96-D2751504DAEB@openstack.org> References: <87363388-E7B9-499B-AC96-D2751504DAEB@openstack.org> Message-ID: <50F06905-4D8D-4DFA-AC5E-AFEC5A234B89@openstack.org> Hi all, Updated meeting information below for the OpenStack Community Meeting on August 30 at 3pm UTC. We’ll cover what’s new in the Rocky release, hear updates from the Airship, Kata Containers, StarlingX and Zuul projects, and get a preview of the Berlin Summit. Hope you can join us, but if not, it will be recorded! 
When: Aug 30, 2018 8:00 AM Pacific Time (US and Canada) Topic: OpenStack Community Meeting Please click the link below to join the webinar: https://zoom.us/j/551803657 Or iPhone one-tap : US: +16699006833,,551803657# or +16468769923,,551803657# Or Telephone: Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 Webinar ID: 551 803 657 International numbers available: https://zoom.us/u/bh2jVweqf Cheers, Anne Bertucio OpenStack Foundation anne at openstack.org | irc: annabelleB > On Aug 16, 2018, at 9:46 AM, Anne Bertucio wrote: > > Hi all, > > Save the date for an OpenStack community meeting on August 30 at 3pm UTC. This is the evolution of the “Marketing Community Release Preview” meeting that we’ve had each cycle. While that meeting has always been open to all, we wanted to expand the topics and encourage anyone who was interested in getting updates on the Rocky release or the newer projects at OSF to attend. > > We’ll cover: > —What’s new in Rocky > (This info will still be at a fairly high level, so might not be new information if you’re someone who stays up to date in the dev ML or is actively involved in upstream work) > > —Updates from Airship, Kata Containers, StarlingX, and Zuul > > —What you can expect at the Berlin Summit in November > > This meeting will be run over Zoom (look for info closer to the 30th) and will be recorded, so if you can’t make the time, don’t panic! > > Cheers, > Anne Bertucio > OpenStack Foundation > anne at openstack.org | irc: annabelleB > > > > > > _______________________________________________ > Marketing mailing list > Marketing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/marketing -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gdubreui at redhat.com Fri Aug 24 00:09:13 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Fri, 24 Aug 2018 10:09:13 +1000 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: <9248BC68-780C-4A6F-8236-F381C5A78D56@redhat.com> References: <9248BC68-780C-4A6F-8236-F381C5A78D56@redhat.com> Message-ID: On 24/08/18 04:58, Slawomir Kaplonski wrote: > Hi Miguel, > > I’m not sure but maybe You were looking for those patches: > > https://review.openstack.org/#/q/project:openstack/neutron+branch:feature/graphql > Yes, that's the one; it's under Tristan Cacqueray's name, as he helped get it started. >> Message written by Miguel Lavalle on 23.08.2018 at 18:57: >> >> Hi Gilles, >> >> Ed pinged me earlier today in IRC in regards to this topic. After reading your message, I assumed that you had patches up for review in Gerrit. I looked for them, with the intent to list them in the agenda of the next Neutron team meeting, to draw attention to them. I couldn't find any, though: https://review.openstack.org/#/q/owner:%22Gilles+Dubreuil+%253Cgdubreui%2540redhat.com%253E%22 >> >> So, how can we help? This is our meetings schedule: http://eavesdrop.openstack.org/#Neutron_Team_Meeting. Given that you are Down Under at UTC+10, the most convenient meeting for you is the one on Monday (even weeks), which would be Tuesday at 7am for you. Please note that we have an on demand section in our agenda: https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda. Feel free to add topics in that section when you have something to discuss with the Neutron team. Now that we have a working base API serving GraphQL requests, we need to provide the data with respect to Oslo Policy and such. Thanks for the pointers; I'll add the latter to the agenda and will be at the next meeting.
>> >> Best regards >> >> Miguel >> >> On Sun, Aug 19, 2018 at 10:57 PM, Gilles Dubreuil wrote: >> >> >> On 25/07/18 23:48, Ed Leafe wrote: >> On Jun 6, 2018, at 7:35 PM, Gilles Dubreuil wrote: >> The branch is now available under feature/graphql on the neutron core repository [1]. >> I wanted to follow up with you on this effort. I haven’t seen any activity on StoryBoard for several weeks now, and wanted to be sure that there was nothing blocking you that we could help with. >> >> >> -- Ed Leafe >> >> >> >> Hi Ed, >> >> Thanks for following up. >> >> There have been two main counterproductive factors in this effort. >> >> The first is that I've been busy attending to issues in other parts of my job. >> The second is the lack of response/follow-up from the Neutron core team. >> >> We have all the plumbing in place, but we need to layer the data through oslo policies. >> >> Cheers, >> Gilles >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894
219 From andy at andybotting.com Fri Aug 24 00:43:24 2018 From: andy at andybotting.com (Andy Botting) Date: Fri, 24 Aug 2018 10:43:24 +1000 Subject: [openstack-dev] [glance][horizon] Issues we found when using Community Images In-Reply-To: References: Message-ID: Hi Jeremy, > Can you comment more on what needs to be updated in Sahara? Are they > simply issues in the UI (sahara-dashboard) or is there a problem > consuming community images on the server side? We haven't looked into it much yet, so I couldn't tell you. I think it would be great to extend the Glance API to include a visibility=all filter, so we can actually get ALL available images in a single request, then projects could switch over to this. It might need some thought on how to manage the new API request when using an older version of Glance that didn't support visibility=all, but I'm sure that could be worked out. It would be great to hear from one of the Glance devs what they think about this approach. cheers, Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Fri Aug 24 00:59:12 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 24 Aug 2018 00:59:12 -0000 Subject: [openstack-dev] cinder 13.0.0.0rc3 (rocky) Message-ID: Hello everyone, A new release candidate for cinder for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/cinder/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/cinder/log/?h=stable/rocky Release notes for cinder can be found at: https://docs.openstack.org/releasenotes/cinder/ From doug at doughellmann.com Fri Aug 24 04:23:50 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 24 Aug 2018 00:23:50 -0400 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: <1caecc54-c681-cad6-9664-8281ab2d4323@nemebean.com> References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> <1535045097-sup-986@lrrr.local> <1caecc54-c681-cad6-9664-8281ab2d4323@nemebean.com> Message-ID: > On Aug 23, 2018, at 4:01 PM, Ben Nemec wrote: > > > >> On 08/23/2018 12:25 PM, Doug Hellmann wrote: >> Excerpts from Eric Fried's message of 2018-08-23 09:51:21 -0500: >>> Do you mean an actual fixture, that would be used like: >>> >>> class MyTestCase(testtools.TestCase): >>> def setUp(self): >>> self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids >>> >>> def test_foo(self): >>> do_a_thing_with(self.uuids.foo) >>> >>> ? >>> >>> That's... okay I guess, but the refactoring necessary to cut over to it >>> will now entail adding 'self.' to every reference. Is there any way >>> around that? >> That is what I had envisioned, yes. In the absence of a global, >> which we do not want, what other API would you propose? > > If we put it in oslotest instead, would the global still be a problem? Especially since mock has already established a pattern for this functionality? I guess all of the people who complained so loudly about the global in oslo.config are gone? If we don’t care about the global then we could just put the code from Eric’s threadsafe version in oslo.utils somewhere. 
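For readers following along, the pattern under discussion (a sentinel object that lazily mints one stable UUID per attribute name, mirroring mock.sentinel) can be sketched in a few lines. This is an illustrative reconstruction, not Eric's actual patch:

```python
import threading
import uuid


class UUIDSentinels(object):
    """Return a stable, random UUID per attribute name, like mock.sentinel.

    Accessing ``uuids.foo`` twice yields the same UUID string, while
    ``uuids.bar`` yields a different one. A lock keeps the lazy creation
    thread-safe.
    """

    def __init__(self):
        self._sentinels = {}
        self._lock = threading.Lock()

    def __getattr__(self, name):
        # Refuse private/dunder names so copy/pickle probing doesn't
        # silently mint sentinels.
        if name.startswith('_'):
            raise AttributeError(name)
        with self._lock:
            if name not in self._sentinels:
                self._sentinels[name] = str(uuid.uuid4())
        return self._sentinels[name]


uuids = UUIDSentinels()

assert uuids.foo == uuids.foo  # stable per name
assert uuids.foo != uuids.bar  # distinct across names
```

Whether `uuids` lives as a module-level global in oslo.utils or gets wrapped in an oslotest fixture is exactly the trade-off being debated above.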
Doug From sundar.nadathur at intel.com Fri Aug 24 05:01:03 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Thu, 23 Aug 2018 22:01:03 -0700 Subject: [openstack-dev] [Cyborg] Zoom URL for Aug 29 meeting In-Reply-To: <45034d8e-22fe-6b6f-2542-1e53cd7b5b86@intel.com> References: <45034d8e-22fe-6b6f-2542-1e53cd7b5b86@intel.com> Message-ID: <8037a056-9052-1fe9-bdd5-bde798c34037@intel.com> Please use this invite instead, because it does not have the time limits of the old one (updated in  Cyborg wiki as well). Time: Aug 29, 2018 10:00 AM Eastern Time (US and Canada) Join from PC, Mac, Linux, iOS or Android: *https://zoom.us/j/395326369* Or iPhone one-tap :     US: +16699006833,,395326369#  or +16465588665,,395326369# Or Telephone:     Dial(for higher quality, dial a number based on your current location):         US: +1 669 900 6833  or +1 646 558 8665     Meeting ID: 395 326 369     International numbers available: https://zoom.us/u/eGbqK3pMh Thanks, Sundar On 8/22/2018 11:39 PM, Nadathur, Sundar wrote: > > For the August 29 weekly meeting [1], the main agenda is the > discussion of Cyborg device/data models. 
> > We will use this meeting invite to present slides: > > Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/189707867 > > Or iPhone one-tap : >     US: +16465588665,,189707867#  or +14086380986,,189707867# > Or Telephone: >     Dial(for higher quality, dial a number based on your current > location): >         US: +1 646 558 8665  or +1 408 638 0986 >     Meeting ID: 189 707 867 >     International numbers available: https://zoom.us/u/dnYoZcYYJ > > [1] https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting > > Regards, > Sundar > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmtank at gmail.com Fri Aug 24 05:35:30 2018 From: dmtank at gmail.com (Darshan Tank) Date: Fri, 24 Aug 2018 11:05:30 +0530 Subject: [openstack-dev] Regarding cache-based cross-VM side channel attacks in OpenStack Message-ID: Dear Sir, I would like to know, whether cache-based cross-VM side channel attacks are possible in OpenStack VM or not ? If the answer of above question is no, then what are the mechanisms employed in OpenStack to prevent or to mitigate such types of security threats? I'm looking forward to hearing from you. Thanks in advance for your support. With Warm Regards, *Darshan Tank * [image: Please consider the environment before printing] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ruijing.guo at intel.com Fri Aug 24 07:55:18 2018 From: ruijing.guo at intel.com (Guo, Ruijing) Date: Fri, 24 Aug 2018 07:55:18 +0000 Subject: [openstack-dev] [nova][neutron] numa aware vswitch Message-ID: <2EE296D083DF2940BF4EBB91D39BB89F3BBF05C0@shsmsx102.ccr.corp.intel.com> Hi, All, I am verifying the numa aware vswitch feature (https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/numa-aware-vswitches.html). But the result is not what I expected. What am I missing? Nova configuration: [filter_scheduler] track_instance_changes = False enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter,NUMATopologyFilter [neutron] physnets = physnet0,physnet1 [neutron_physnet_physnet0] numa_nodes = 0 [neutron_physnet_physnet1] numa_nodes = 1 ml2 configuration: [ml2_type_vlan] network_vlan_ranges = physnet0,physnet1 [ovs] vhostuser_socket_dir = /var/lib/libvirt/qemu bridge_mappings = physnet0:br-physnet0,physnet1:br-physnet1 command list: openstack network create net0 --external --provider-network-type=vlan --provider-physical-network=physnet0 --provider-segment=100 openstack network create net1 --external --provider-network-type=vlan --provider-physical-network=physnet1 --provider-segment=200 openstack subnet create --network=net0 --subnet-range=192.168.1.0/24 --allocation-pool start=192.168.1.200,end=192.168.1.250 --gateway 192.168.1.1 subnet0 openstack subnet create --network=net1 --subnet-range=192.168.2.0/24 --allocation-pool start=192.168.2.200,end=192.168.2.250 --gateway 192.168.2.1 subnet1 openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic net-id=net0 vm0 openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic net-id=net1 vm1 vm0 and vm1 are created but numa is not enabled (the guest XML excerpt was lost when the HTML attachment was scrubbed; only the values 1 and 1024 survive): 1 1024 Thanks, -Ruijing -------------- next part -------------- An HTML attachment was
scrubbed... URL: From aheczko at mirantis.com Fri Aug 24 08:33:41 2018 From: aheczko at mirantis.com (Adam Heczko) Date: Fri, 24 Aug 2018 10:33:41 +0200 Subject: [openstack-dev] Regarding cache-based cross-VM side channel attacks in OpenStack In-Reply-To: References: Message-ID: Hi Darshan, I believe you are referring to the recent Foreshadow / l1tf vulnerability? If that's the case OpenStack compute workloads are protected with all relevant to the specific hypervisor type mechanisms. AFAIK OpenStack at this moment supports KVM-Qemu, Xen, vSphere/ESXI and Hyper-V hypervisors. All of the above hypervisors offer side channel protection mechanisms implementations. You can also consult OpenStack Security Guide, compute sections seems to be most relevant to the question you raised, https://docs.openstack.org/security-guide/compute.html HTH, On Fri, Aug 24, 2018 at 7:35 AM Darshan Tank wrote: > Dear Sir, > > I would like to know, whether cache-based cross-VM side channel attacks > are possible in OpenStack VM or not ? > > If the answer of above question is no, then what are the mechanisms > employed in OpenStack to prevent or to mitigate such types of security > threats? > > I'm looking forward to hearing from you. > > Thanks in advance for your support. > > With Warm Regards, > *Darshan Tank * > > [image: Please consider the environment before printing] > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Adam Heczko Security Engineer @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sambetts at cisco.com Fri Aug 24 09:12:01 2018 From: sambetts at cisco.com (Sam Betts (sambetts)) Date: Fri, 24 Aug 2018 09:12:01 +0000 Subject: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes In-Reply-To: References: Message-ID: <29D3778C-1E7A-4156-A840-1C736FA74875@cisco.com> +1 Sam On 23/08/2018, 21:38, "Mark Goddard" > wrote: +1 On Thu, 23 Aug 2018, 20:43 Jim Rollenhagen, > wrote: ++ // jim On Thu, Aug 23, 2018 at 2:24 PM, Julia Kreger > wrote: Greetings everyone! In our team meeting this week we stumbled across the subject of promoting contributors to be sub-project's core reviewers. Traditionally it is something we've only addressed as needed or desired by consensus with-in those sub-projects, but we were past due time to take a look at the entire picture since not everything should fall to ironic-core. And so, I've taken a look at our various repositories and I'm proposing the following additions: For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya Etingof[1]. Ilya has been actively involved with sushy, sushy-tools, and virtualbmc this past cycle. I've found many of his reviews and non-voting review comments insightful and willing to understand. He has taken on some of the effort that is needed to maintain and keep these tools usable for the community, and as such adding him to the core group for these repositories makes lots of sense. For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2]. Kaifeng has taken on some hard problems in ironic and ironic-inspector, as well as brought up insightful feedback in ironic-specs. They are demonstrating a solid understanding that I only see growing as time goes on. For sushy-core: Debayan Ray[3]. Debayan has been involved with the community for some time and has worked on sushy from early on in its life. 
He has indicated it is near and dear to him, and he has been actively reviewing and engaging in discussion on patchsets as his time has permitted. With any addition it is good to look at inactivity as well. It saddens me to say that we've had some contributors move on as priorities have shifted to where they are no longer involved with the ironic community. Each person listed below has been inactive for a year or more and is no longer active in the ironic community. As such I've removed their group membership from the sub-project core reviewer groups. Should they return, we will welcome them back to the community with open arms. bifrost-core: Stephanie Miller[4] ironic-inspector-core: Anton Arefivev[5] ironic-ui-core: Peter Peila[6], Beth Elwell[7] Thanks, -Julia [1]: http://stackalytics.com/?user_id=etingof&metric=marks [2]: http://stackalytics.com/?user_id=kaifeng&metric=marks [3]: http://stackalytics.com/?user_id=deray&metric=marks&release=all [4]: http://stackalytics.com/?metric=marks&release=all&user_id=stephaneeee [5]: http://stackalytics.com/?user_id=aarefiev&metric=marks [6]: http://stackalytics.com/?metric=marks&release=all&user_id=ppiela [7]: http://stackalytics.com/?metric=marks&release=all&user_id=bethelwell&module=ironic-ui __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jbadiapa at redhat.com Fri Aug 24 09:17:33 2018 From: jbadiapa at redhat.com (Juan Badia Payno) Date: Fri, 24 Aug 2018 11:17:33 +0200 Subject: [openstack-dev] [Tripleo] fluentd logging status Message-ID: Recently, I did a little test regarding fluentd logging on the gates master[1], queens[2], pike [3]. I don't like the status of it; I'm still working on them, but basically there are quite a lot of misconfigured logs and some services that are not configured at all. I think we need to put some effort into logging. The purpose of this email is to point out that the task needs a little effort from us. First of all, I think we need to enable fluentd on all the scenarios, as is done in the tests [1][2][3] mentioned at the beginning of the email. Once everything is OK and some automatic test regarding logging is in place, they can be disabled. I'd love not to create a new bug for every misconfigured/unconfigured service, but if that would help draw more attention to the problem, I will open them. The plan I have in mind is something like: * Make an initial picture of what the fluentd/log status is (from pike upwards). * Fix all misconfigured services. (designate,...) * Add the non-configured services. (manila,...) * Add an automated check to find a possible unconfigured/misconfigured problem. Any comments, doubts or questions are welcome. Cheers, Juan [1] https://review.openstack.org/594836 [2] https://review.openstack.org/594838 [3] https://review.openstack.org/594840 -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Aug 24 09:20:53 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 24 Aug 2018 11:20:53 +0200 Subject: [openstack-dev] [ptg] Post-lunch presentations schedule Message-ID: Hi! The PTG starts in two weeks in Denver! As in Dublin, we'll have some presentations running during the second half of the lunch break in the lunch room.
Here is the schedule: Monday: Welcome to the PTG Welcome new teams / Ops meetup, Housekeeping, Community update, Set stage for the week, Present Stein goals (ttx, mnaser, kendallW) Tuesday: Three demo presentations on tools Gertty (corvus), Storyboard (diablo_rojo), and Simplifying backports with git-deps and git-explode (aspiers) Wednesday: Three general talks Release management (smcginnis), Project navigator (jimmymcarthur), and Tech vision statement intro (zaneb, cdent) Thursday: PTG: present and future Our traditional event feedback session, including a presentation of future PTG/summit co-location plans for 2019 (jbryce, ttx) Friday: Lightning talks Fast-paced 5-min segments to talk about anything... Summaries of team plans for Stein encouraged. A presentation of Sphinx in OpenStack by stephenfin will open the show. Hopefully this time we won't have snow disrupting that schedule. Cheers, -- Thierry Carrez (ttx) From thierry at openstack.org Fri Aug 24 09:36:20 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 24 Aug 2018 11:36:20 +0200 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: <6b52fa7b-4b13-3663-6a65-fdfa0ed1b425@gmail.com> References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> <73e3697f-cab9-572c-f96a-082f8a92b0c4@gmail.com> <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local> <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> <1534945106-sup-4359@lrrr.local> <775949fc-a058-a076-06a5-c42bb8d016ec@gmail.com> <68248e7c-14f6-d6f2-d87f-8fceb1eed7d6@openstack.org> <6b52fa7b-4b13-3663-6a65-fdfa0ed1b425@gmail.com> Message-ID: <8b67ee9d-8c6a-8d79-8bc4-821b3d7e8cc8@openstack.org> Matt Riedemann wrote: > On 8/23/2018 4:00 AM, Thierry Carrez wrote: >> In the OpenStack governance model, contributors to a given piece of >> code control its destiny. > > This is pretty damn fuzzy. Yes, it's definitely not binary. > So if someone wants to split out nova-compute > into a new repo/project/governance with a REST API and all that, > nova-core has no say in the matter? I'd consider the repository split to be a prerequisite. Then if most people working on the nova-compute repository (not just "someone") feel like they are in a distinct group working on a distinct piece of code and that the larger group is not representative of them, then yes, IMHO they can make a case that a separate project team would be more healthy... -- Thierry Carrez (ttx) From no-reply at openstack.org Fri Aug 24 09:53:25 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 24 Aug 2018 09:53:25 -0000 Subject: [openstack-dev] keystone 14.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for keystone for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/keystone/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. 
You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/keystone/log/?h=stable/rocky Release notes for keystone can be found at: https://docs.openstack.org/releasenotes/keystone/ From james.slagle at gmail.com Fri Aug 24 11:26:40 2018 From: james.slagle at gmail.com (James Slagle) Date: Fri, 24 Aug 2018 07:26:40 -0400 Subject: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad) In-Reply-To: References: Message-ID: On Wed, Aug 22, 2018 at 4:21 AM Csatari, Gergely (Nokia - HU/Budapest) wrote: > > Hi, > > This is good news. We could even have an hour session to discuss ideas about TripleO-s place in the edge cloud infrastructure. Would you be open for that? Yes, that sounds good. I'll add something to the etherpad. Thanks. -- -- James Slagle -- From cdent+os at anticdent.org Fri Aug 24 12:36:24 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 24 Aug 2018 13:36:24 +0100 (BST) Subject: [openstack-dev] [nova] [placement] extraction (technical) update Message-ID: Over the past few days a few of us have been experimenting with extracting placement to its own repo, as has been discussed at length on this list, and in some etherpads: https://etherpad.openstack.org/p/placement-extract-stein https://etherpad.openstack.org/p/placement-extraction-file-notes As part of that, I've been doing some exploration to tease out the issues we're going to hit as we do it. None of this is work that will be merged, rather it is stuff to figure out what we need to know to do the eventual merging correctly and efficiently. Please note that doing that is just the near edge of a large collection of changes that will cascade in many ways to many projects, tools, distros, etc. 
The people doing this are aware of that, and the relative simplicity (and fairly immediate success) of these experiments is not misleading people into thinking "hey, no big deal". It's a big deal. There's a strategy now (described at the end of the first etherpad listed above) for trimming the nova history to create a thing which is placement. From the first run of that Ed created a github repo and I branched that to eventually create: https://github.com/EdLeafe/placement/pull/2 In that, all the placement unit and functional tests are now passing, and my placecat [1] integration suite also passes. That work has highlighted some gaps in the process for trimming history which will be refined to create another interim repo. We'll repeat this until the process is smooth, eventually resulting in an openstack/placement. To take things further, this morning I pip installed the placement code represented by that pull request into a nova repo and made some changes to remove placement from nova. With some minor adjustments I got the remaining unit and functional tests working. That work is in gerrit at https://review.openstack.org/#/c/596291/ with a hopefully clear commit message about what's going on. As with the rest of this work, this is not something to merge, rather an experiment to learn from. The hot spots in the changes are relatively limited and about what you would expect so, with luck, should be pretty easy to deal with, some of them even before we actually do any extracting (to enhance the boundaries between the two services). If you're interested in this process please have a look at all the links and leave comments there, in response to this email, or join #openstack-placement on freenode to talk about it. Thanks. 
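For those curious what the history-trimming step looks like mechanically, here is a toy, self-contained illustration. The real extraction follows the strategy in the etherpads above; the repository layout, paths, and tool used here (git filter-branch, driven from Python) are stand-ins for illustration only:

```python
# Toy illustration only: build a two-commit repo that stands in for
# nova, then rewrite its history so only the "placement" tree remains.
import os
import pathlib
import shutil
import subprocess


def git(*args):
    """Run git against the demo repo and return its stdout."""
    env = dict(os.environ, FILTER_BRANCH_SQUELCH_WARNING='1')
    return subprocess.run(('git', '-C', 'demo-repo') + args, check=True,
                          env=env, stdout=subprocess.PIPE).stdout.decode()


shutil.rmtree('demo-repo', ignore_errors=True)
subprocess.run(['git', 'init', '-q', 'demo-repo'], check=True)
git('config', 'user.email', 'demo@example.com')
git('config', 'user.name', 'Demo')

# One commit touching the tree we want to keep, one touching the rest.
for path, msg in [('nova/api/openstack/placement/handler.py',
                   'add placement handler'),
                  ('nova/compute/manager.py', 'add compute manager')]:
    target = pathlib.Path('demo-repo', path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text('# code\n')
    git('add', '-A')
    git('commit', '-qm', msg)

# Drop everything except the placement tree from every commit, pruning
# commits that become empty as a result.
git('filter-branch', '-f', '--prune-empty', '--index-filter',
    'git rm -rq --cached --ignore-unmatch nova/compute', 'HEAD')

remaining = git('ls-tree', '-r', '--name-only', 'HEAD').split()
assert remaining == ['nova/api/openstack/placement/handler.py']
```

The interesting part for the extraction is the same as in this sketch: after the rewrite, the history contains only the commits that touched the kept tree, which is what makes the trimmed repo a plausible starting point for openstack/placement.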
[1] https://github.com/cdent/placecat -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From tobias.urdin at binero.se Fri Aug 24 13:23:36 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Fri, 24 Aug 2018 15:23:36 +0200 Subject: [openstack-dev] [puppet] Puppet weekly recap - week 34 Message-ID: <7868cd35-ad30-af92-5abc-28547e99afd2@binero.se> Hello all Puppeteers! Welcome to the weekly Puppet recap for week 34. This is a weekly overview of what has changed in the Puppet OpenStack project the past week. CHANGES ======= We haven't had much changes this week, mostly CI fixes due to changes in packaging. * We've merged all stable/rocky related changes except for Keystone [1] [2] ** This is blocked by packaging issue [3] because we dont update packages before runs in the beaker tests. ** Please review [4] and let us know what you think. ** This is also blocking this [5] * Fixed puppet-ovn to make sure OVS bridge is created before setting mac-table-size [6] [1] https://review.openstack.org/#/c/593787/ [2] https://review.openstack.org/#/c/593786/ [3] https://bugzilla.redhat.com/show_bug.cgi?id=1620221 [4] https://review.openstack.org/#/c/595370/ [5] https://review.openstack.org/#/c/589877/ [6] https://review.openstack.org/#/c/594128/ REVIEWS ====== We have some open changes that needs reviews. 
* Update packages after adding repos https://review.openstack.org/#/c/595370/ * Make vlan_transparent in neutron.conf configurable https://review.openstack.org/#/c/591899/ * neutron-dynamic-routing wrong package for Debian https://review.openstack.org/#/c/594058/ (and backports) * Add workers to magnum api and conductor https://review.openstack.org/#/c/595228/ * Correct default number of threads https://review.openstack.org/#/c/591493/ * Deprecate unused notify_on_api_faults parameter https://review.openstack.org/#/c/593034/ * Resolve duplicate declaration with split of api / metadata wsgi https://review.openstack.org/#/c/595523/ SPECS ===== No new specs; only one open spec for review. * Add parameter data types spec https://review.openstack.org/#/c/568929/ OTHER ===== * No new progress on the Storyboard migration; we will continue to let you know once we have more details about dates. * Going to the PTG? We have some cores that will be there, make sure you say hi! [7] ** We don't have any planned talks or discussions and therefore don't need a session or a moderator, but we are always available if you need us on IRC at #puppet-openstack * Interested in the current status of Rocky? See [8], or maybe you want to plan some awesome new cool thing... ** Start planning Stein now [9] and let us know! We would love any new contributors with new cool ideas! * We should do a walk-through of abandoning old open changes; if anybody is interested in helping with such an effort, please let me know. [7] https://etherpad.openstack.org/p/puppet-ptg-stein [8] https://etherpad.openstack.org/p/puppet-openstack-rocky [9] https://etherpad.openstack.org/p/puppet-openstack-stein Wishing you all a great weekend!
Best regards Tobias (tobias-urdin @ IRC) From sfinucan at redhat.com Fri Aug 24 13:58:48 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 24 Aug 2018 14:58:48 +0100 Subject: [openstack-dev] [nova][neutron] numa aware vswitch In-Reply-To: <2EE296D083DF2940BF4EBB91D39BB89F3BBF05C0@shsmsx102.ccr.corp.intel.com> References: <2EE296D083DF2940BF4EBB91D39BB89F3BBF05C0@shsmsx102.ccr.corp.intel.com> Message-ID: On Fri, 2018-08-24 at 07:55 +0000, Guo, Ruijing wrote: > Hi, All, > > I am verifying numa aware vwitch features (https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/numa-aware-vswitches.html). But the result is not my expectation. > > What I missing? > > > Nova configuration: > > [filter_scheduler] > track_instance_changes = False > enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter,NUMATopologyFilter > > [neutron] > physnets = physnet0,physnet1 > > [neutron_physnet_physnet0] > numa_nodes = 0 > > [neutron_physnet_physnet1] > numa_nodes = 1 > > > ml2 configuration: > > [ml2_type_vlan] > network_vlan_ranges = physnet0,physnet1 > [ovs] > vhostuser_socket_dir = /var/lib/libvirt/qemu > bridge_mappings = physnet0:br-physnet0,physnet1:br-physnet1 > > > command list: > > openstack network create net0 --external --provider-network-type=vlan --provider-physical-network=physnet0 --provider-segment=100 > openstack network create net1 --external --provider-network-type=vlan --provider-physical-network=physnet1 --provider-segment=200 > openstack subnet create --network=net0 --subnet-range=192.168.1.0/24 --allocation-pool start=192.168.1.200,end=192.168.1.250 --gateway 192.168.1.1 subnet0 > openstack subnet create --network=net1 --subnet-range=192.168.2.0/24 --allocation-pool start=192.168.2.200,end=192.168.2.250 --gateway 192.168.2.1 subnet1 > openstack server create --flavor 1 
--image=cirros-0.3.5-x86_64-disk --nic net-id=net0 vm0 > openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic net-id=net1 vm1 > > vm0 and vm1 are created but numa is not enabled: > 1 > > 1024 > Using this won't add a NUMA topology - it'll just control how any topology present will be mapped to the guest. You need to enable dedicated CPUs or a explicitly request a NUMA topology for this to work. openstack flavor set --property hw:numa_nodes=1 1 openstack flavor set --property hw:cpu_policy=dedicated 1 This is perhaps something that we could change in the future, though I haven't given it much thought yet. Regards, Stephen From mriedemos at gmail.com Fri Aug 24 14:13:24 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 09:13:24 -0500 Subject: [openstack-dev] [nova][neutron] numa aware vswitch In-Reply-To: References: <2EE296D083DF2940BF4EBB91D39BB89F3BBF05C0@shsmsx102.ccr.corp.intel.com> Message-ID: On 8/24/2018 8:58 AM, Stephen Finucane wrote: > Using this won't add a NUMA topology - it'll just control how any > topology present will be mapped to the guest. You need to enable > dedicated CPUs or a explicitly request a NUMA topology for this to > work. > > openstack flavor set --property hw:numa_nodes=1 1 > > > > openstack flavor set --property hw:cpu_policy=dedicated 1 > > > This is perhaps something that we could change in the future, though I > haven't given it much thought yet. Looks like the admin guide [1] should be updated to at least refer to the flavor user guide on setting up these types of flavors? 
[1] https://docs.openstack.org/nova/latest/admin/networking.html#numa-affinity -- Thanks, Matt From openstack at nemebean.com Fri Aug 24 14:37:00 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 24 Aug 2018 09:37:00 -0500 Subject: [openstack-dev] [Tripleo] fluentd logging status In-Reply-To: References: Message-ID: On 08/24/2018 04:17 AM, Juan Badia Payno wrote: > Recently, I did a little test regarding fluentd logging on the gates > master[1], queens[2], pike [3]. I don't like the status of it, I'm still > working on them, but basically there are quite a lot of misconfigured > logs and some services that they are not configured at all. > > I think we need to put some effort on the logging. The purpose of this > email is to point out that we need to do a little effort on the task. > > First of all, I think we need to enable fluentd on all the scenarios, as > it is on the tests [1][2][3] commented on the beginning of the email. > Once everything is ok and some automatic test regarding logging is done > they can be disabled. > > I'd love not to create a new bug for every misconfigured/unconfigured > service, but if requested to grab more attention on it, I will open it. > > The plan I have in mind is something like: >  * Make an initial picture of what the fluentd/log status is (from pike > upwards). >  * Fix all misconfigured services. (designate,...) For the record, Designate in TripleO is not considered production-ready at this time. There are a few other issues that need to be resolved too. I'll add this to my todo list though. >  * Add the non-configured services. (manila,...) >  * Add an automated check to find a possible unconfigured/misconfigured > problem. This would be good. I copy-pasted the log config from another service but had no idea whether it was correct (apparently it wasn't :-). 
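The automated check Juan proposes could start life as a simple set difference between the services enabled in a deployment and the services that have a fluentd source configured. A minimal sketch — the service names, source format, and `unconfigured_services` helper are all invented for illustration, not actual TripleO data structures:

```python
# Flag services that are enabled in a deployment but have no fluentd
# log source configured for them.
def unconfigured_services(enabled_services, fluentd_sources):
    configured = {src["service"] for src in fluentd_sources}
    return sorted(set(enabled_services) - configured)


# Hypothetical example data.
sources = [
    {"service": "keystone", "path": "/var/log/keystone/keystone.log"},
    {"service": "nova", "path": "/var/log/nova/nova-compute.log"},
]
enabled = ["keystone", "nova", "designate", "manila"]

print(unconfigured_services(enabled, sources))  # ['designate', 'manila']
```

A gate job built on this idea would fail whenever a newly enabled service lacked a logging stanza, which is exactly the copy-paste mistake Ben describes.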
> > Any comments, doubts or questions are welcome > > Cheers, > Juan > > [1] https://review.openstack.org/594836 > [2] https://review.openstack.org/594838 > [3] https://review.openstack.org/594840 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Fri Aug 24 14:43:14 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 24 Aug 2018 09:43:14 -0500 Subject: [openstack-dev] [python-jenkins][Release-job-failures] Release of openstack/python-jenkins failed Message-ID: <20180824144314.GA8094@sm-workstation> See below for links to a release job failure for python-jenkins. This was a ReadTheDocs publishing job. It appears to have failed due to the necessary steps missing from this earlier post: http://lists.openstack.org/pipermail/openstack-dev/2018-August/132836.html ----- Forwarded message from zuul at openstack.org ----- Date: Fri, 24 Aug 2018 14:33:25 +0000 From: zuul at openstack.org To: release-job-failures at lists.openstack.org Subject: [Release-job-failures] Release of openstack/python-jenkins failed Reply-To: openstack-dev at lists.openstack.org Build failed. 
- trigger-readthedocs-webhook http://logs.openstack.org/c4/c473b0af94342b269593dd24e5093d33a94b5046/release/trigger-readthedocs-webhook/cec87fd/ : FAILURE in 1m 49s - release-openstack-python http://logs.openstack.org/c4/c473b0af94342b269593dd24e5093d33a94b5046/release/release-openstack-python/68b356f/ : SUCCESS in 4m 03s - announce-release http://logs.openstack.org/c4/c473b0af94342b269593dd24e5093d33a94b5046/release/announce-release/04fd7c3/ : SUCCESS in 4m 10s - propose-update-constraints http://logs.openstack.org/c4/c473b0af94342b269593dd24e5093d33a94b5046/release/propose-update-constraints/3eaf094/ : SUCCESS in 2m 08s _______________________________________________ Release-job-failures mailing list Release-job-failures at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures ----- End forwarded message ----- From remo at rm.ht Fri Aug 24 14:49:46 2018 From: remo at rm.ht (Remo Mattei) Date: Fri, 24 Aug 2018 07:49:46 -0700 Subject: [openstack-dev] [Tripleo] fluentd logging status In-Reply-To: References: Message-ID: My co-worker has it working on OOO, Pike release bm not containers. There was a plan to clean up the code and open it up since it’s all ansible-playbooks doing the work. Remo > On Aug 24, 2018, at 07:37, Ben Nemec wrote: > > > > On 08/24/2018 04:17 AM, Juan Badia Payno wrote: >> Recently, I did a little test regarding fluentd logging on the gates master[1], queens[2], pike [3]. I don't like the status of it, I'm still working on them, but basically there are quite a lot of misconfigured logs and some services that they are not configured at all. >> I think we need to put some effort on the logging. The purpose of this email is to point out that we need to do a little effort on the task. >> First of all, I think we need to enable fluentd on all the scenarios, as it is on the tests [1][2][3] commented on the beginning of the email. 
Once everything is ok and some automatic test regarding logging is done they can be disabled. >> I'd love not to create a new bug for every misconfigured/unconfigured service, but if requested to grab more attention on it, I will open it. >> The plan I have in mind is something like: >> * Make an initial picture of what the fluentd/log status is (from pike upwards). >> * Fix all misconfigured services. (designate,...) > > For the record, Designate in TripleO is not considered production-ready at this time. There are a few other issues that need to be resolved too. I'll add this to my todo list though. > >> * Add the non-configured services. (manila,...) >> * Add an automated check to find a possible unconfigured/misconfigured problem. > > This would be good. I copy-pasted the log config from another service but had no idea whether it was correct (apparently it wasn't :-). > >> Any comments, doubts or questions are welcome >> Cheers, >> Juan >> [1] https://review.openstack.org/594836 >> [2] https://review.openstack.org/594838 >> [3] https://review.openstack.org/594840 >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From no-reply at openstack.org Fri Aug 24 14:59:51 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 24 Aug 2018 14:59:51 -0000 Subject: [openstack-dev] nova_powervm 7.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for nova_powervm for the end of the Rocky cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/nova-powervm/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/nova_powervm/log/?h=stable/rocky Release notes for nova_powervm can be found at: https://docs.openstack.org/releasenotes/nova_powervm/ From no-reply at openstack.org Fri Aug 24 15:00:17 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 24 Aug 2018 15:00:17 -0000 Subject: [openstack-dev] tripleo-heat-templates 9.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for tripleo-heat-templates for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/tripleo-heat-templates/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/tripleo-heat-templates/log/?h=stable/rocky Release notes for tripleo-heat-templates can be found at: https://docs.openstack.org/releasenotes/tripleo-heat-templates/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/tripleo and tag it *rocky-rc-potential* to bring it to the tripleo-heat-templates release crew's attention. 
From no-reply at openstack.org Fri Aug 24 15:06:49 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 24 Aug 2018 15:06:49 -0000 Subject: [openstack-dev] tripleo-image-elements 9.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for tripleo-image-elements for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/tripleo-image-elements/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/tripleo-image-elements/log/?h=stable/rocky Release notes for tripleo-image-elements can be found at: https://docs.openstack.org/releasenotes/tripleo-image-elements/ From no-reply at openstack.org Fri Aug 24 15:07:43 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 24 Aug 2018 15:07:43 -0000 Subject: [openstack-dev] tripleo-puppet-elements 9.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for tripleo-puppet-elements for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/tripleo-puppet-elements/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/tripleo-puppet-elements/log/?h=stable/rocky Release notes for tripleo-puppet-elements can be found at: https://docs.openstack.org/releasenotes/tripleo-puppet-elements/ From emilien at redhat.com Fri Aug 24 15:09:16 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 24 Aug 2018 11:09:16 -0400 Subject: [openstack-dev] [tripleo] Rocky RC1 released! Message-ID: We just released Rocky RC1 and branched stable/rocky for most of tripleo repos, please let us know if we missed something. Please don't forget to backport the patches that land in master and that you want in Rocky. We're currently investigating whether or not we'll need an RC2 so don't be surprised if Launchpad bugs are moved around during the next days. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Fri Aug 24 15:13:22 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Fri, 24 Aug 2018 11:13:22 -0400 Subject: [openstack-dev] [glance][horizon] Issues we found when using Community Images In-Reply-To: References: Message-ID: Hi again Andy, Thanks for the update. Sounds like there is some work to do in various client libraries first. I also just tried to launch a Sahara cluster against a community image-- it failed, because our current validation wants the image ID to actually appear in the image list. So there will have to be a server side tweak to Sahara as well (not necessarily using your desired "list all" mechanism, but it could be). Anyway, the Sahara team is aware, and we'll keep an eye on this moving forward. Cheers, Jeremy On Thu, Aug 23, 2018 at 8:43 PM, Andy Botting wrote: > Hi Jeremy, > >> >> Can you comment more on what needs to be updated in Sahara? 
Are they >> simply issues in the UI (sahara-dashboard) or is there a problem >> consuming community images on the server side? > > > We haven't looked into it much yet, so I couldn't tell you. > > I think it would be great to extend the Glance API to include a > visibility=all filter, so we can actually get ALL available images in a > single request, then projects could switch over to this. > > It might need some thought on how to manage the new API request when using > an older version of Glance that didn't support visibility=all, but I'm sure > that could be worked out. > > It would be great to hear from one of the Glance devs what they think about > this approach. > > cheers, > Andy > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From colleen at gazlene.net Fri Aug 24 15:15:31 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 24 Aug 2018 17:15:31 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 20 August 2018 Message-ID: <1535123731.1775331.1485067120.6BDA6AA8@webmail.messagingengine.com> # Keystone Team Update - Week of 20 August 2018 ## News We ended up releasing an RC2 after all in order to include placeholder sqlalchemy migrations for Rocky, thanks wxy for catching it! ## Open Specs Search query: https://bit.ly/2Pi6dGj Lance reproposed the auth receipts and application credentials specs that we punted on last cycle for Stein. ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 13 changes this week. ## Changes that need Attention Search query: https://bit.ly/2wv7QLK There are 75 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. 
If that seems like a lot more than last week, it's because someone has helpfully proposed many patches supporting the python3-first community goal[1]. However, they haven't coordinated with the goal champions and have missed some steps[2], like proposing the removal of jobs from project-config and proposing jobs to the stable branches. I would recommend coordinating with the python3-first goal champions on merging these patches. The good news is that all of our projects seem to work with python 3.6! [1] https://governance.openstack.org/tc/goals/stein/python3-first.html [2] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133610.html ## Bugs This week we opened 4 new bugs and closed 1. Bugs opened (4)  Bug #1788415 (keystone:High) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1788415  Bug #1788694 (keystone:High) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1788694  Bug #1787874 (keystone:Medium) opened by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1787874  Bug #1788183 (oslo.policy:Undecided) opened by Stephen Finucane https://bugs.launchpad.net/oslo.policy/+bug/1788183  Bugs closed (1)  Bug #1771203 (python-keystoneclient:Undecided) https://bugs.launchpad.net/python-keystoneclient/+bug/1771203  Bugs fixed (0) ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html We're at the end of the RC period with the official release happening next week. ## Shout-outs Thanks everyone for a great release! 
## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 From jungleboyj at gmail.com Fri Aug 24 15:44:16 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 24 Aug 2018 10:44:16 -0500 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <20180823170756.sz5qj2lxdy4i4od2@localhost> References: <20180823104210.kgctxfjiq47uru34@localhost> <20180823170756.sz5qj2lxdy4i4od2@localhost> Message-ID: <880e2ff0-cf3a-7d6d-a805-816464858aee@gmail.com> On 8/23/2018 12:07 PM, Gorka Eguileor wrote: > On 23/08, Dan Smith wrote: >>> I think Nova should never have to rely on Cinder's hosts/backends >>> information to do migrations or any other operation. >>> >>> In this case even if Nova had that info, it wouldn't be the solution. >>> Cinder would reject migrations if there's an incompatibility on the >>> Volume Type (AZ, Referenced backend, capabilities...) >> I think I'm missing a bunch of cinder knowledge required to fully grok >> this situation and probably need to do some reading. Is there some >> reason that a volume type can't exist in multiple backends or something? >> I guess I think of volume type as flavor, and the same definition in two >> places would be interchangeable -- is that not the case? >> > Hi, > > I just know the basics of flavors, and they are kind of similar, though > I'm sure there are quite a few differences. > > Sure, multiple storage arrays can meet the requirements of a Volume > Type, but then when you create the volume you don't know where it's > going to land. If your volume type is too generic you volume could land > somewhere your cell cannot reach. 
> > >>> I don't know anything about Nova cells, so I don't know the specifics of >>> how we could do the mapping between them and Cinder backends, but >>> considering the limited range of possibilities in Cinder I would say we >>> only have Volume Types and AZs to work a solution. >> I think the only mapping we need is affinity or distance. The point of >> needing to migrate the volume would purely be because moving cells >> likely means you moved physically farther away from where you were, >> potentially with different storage connections and networking. It >> doesn't *have* to mean that, but I think in reality it would. So the >> question I think Matt is looking to answer here is "how do we move an >> instance from a DC in building A to building C and make sure the >> volume gets moved to some storage local in the new building so we're >> not just transiting back to the original home for no reason?" >> >> Does that explanation help or are you saying that's fundamentally hard >> to do/orchestrate? >> >> Fundamentally, the cells thing doesn't even need to be part of the >> discussion, as the same rules would apply if we're just doing a normal >> migration but need to make sure that storage remains affined to compute. >> > We could probably work something out using the affinity filter, but > right now we don't have a way of doing what you need. > > We could probably rework the migration to accept scheduler hints to be > used with the affinity filter and to accept calls with the host or the > hints, that way it could migrate a volume without knowing the > destination host and decide it based on affinity. > > We may have to do more modifications, but it could be a way to do it. 
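Gorka's idea — letting the migration accept scheduler hints and resolving the destination through an affinity filter — could look roughly like the following. Everything here is invented for illustration: real Cinder backends report capabilities through the scheduler, not a flat `cell` attribute, and `pick_backend` is a hypothetical helper, not Cinder code:

```python
# Resolve a migration destination from scheduler hints instead of an
# explicit host: pick any backend "affine" to the requested cell with
# enough free capacity.
def pick_backend(backends, hints):
    wanted_cell = hints.get("same_cell")
    for name, info in backends.items():
        if info["cell"] == wanted_cell and info["free_gb"] >= hints.get("size_gb", 0):
            return name
    return None  # no affine backend; the scheduler would reject the request


# Hypothetical backend inventory.
backends = {
    "ceph-a": {"cell": "cell1", "free_gb": 500},
    "ceph-b": {"cell": "cell2", "free_gb": 800},
}

print(pick_backend(backends, {"same_cell": "cell2", "size_gb": 100}))  # ceph-b
```

The appeal of this shape is exactly what Gorka describes: Nova never needs to know Cinder's backend names, only to express "keep the volume near this cell" as a hint.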
> > > >>> I don't know how the Nova Placement works, but it could hold an >>> equivalency mapping of volume types to cells as in: >>> >>> Cell#1 Cell#2 >>> >>> VolTypeA <--> VolTypeD >>> VolTypeB <--> VolTypeE >>> VolTypeC <--> VolTypeF >>> >>> Then it could do volume retypes (allowing migration) and that would >>> properly move the volumes from one backend to another. >> The only way I can think that we could do this in placement would be if >> volume types were resource providers and we assigned them traits that >> had special meaning to nova indicating equivalence. Several of the words >> in that sentence are likely to freak out placement people, myself >> included :) >> >> So is the concern just that we need to know what volume types in one >> backend map to those in another so that when we do the migration we know >> what to ask for? Is "they are the same name" not enough? Going back to >> the flavor analogy, you could kinda compare two flavor definitions and >> have a good idea if they're equivalent or not... >> >> --Dan > In Cinder you don't get that from Volume Types, unless all your backends > have the same hardware and are configured exactly the same. > > There can be some storage specific information there, which doesn't > correlate to anything on other hardware. Volume types may refer to a > specific pool that has been configured in the array to use specific type > of disks. But even the info on the type of disks is unknown to the > volume type. > > I haven't checked the PTG agenda yet, but is there a meeting on this? > Because we may want to have one to try to understand the requirements > and figure out if there's a way to do it with current Cinder > functionality of if we'd need something new. Gorka, I don't think that this has been put on the agenda yet.  Might be good to add.  I don't think we have a cross project time officially planned with Nova.  
I will start that discussion with Melanie so that we can cover the couple of cross projects subjects we have. Jay > Cheers, > Gorka. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aschultz at redhat.com Fri Aug 24 15:53:01 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 24 Aug 2018 09:53:01 -0600 Subject: [openstack-dev] [tripleo] Rocky RC1 released! In-Reply-To: References: Message-ID: On Fri, Aug 24, 2018 at 9:09 AM, Emilien Macchi wrote: > We just released Rocky RC1 and branched stable/rocky for most of tripleo > repos, please let us know if we missed something. > Please don't forget to backport the patches that land in master and that you > want in Rocky. > > We're currently investigating whether or not we'll need an RC2 so > don't be surprised if Launchpad bugs are moved around during the next days. > I've created a Rocky RC2 milestone in launchpad and moved the current open critical bugs over to it. I would like to target August 31, 2018 (next Friday) as a date to identify any major blockers that would require an RC2. If none are found, I propose that we mark RC1 as the final release for Rocky. Please take a look at the current open Critical issues and move them to Stein if appropriate. 
https://bugs.launchpad.net/tripleo/?field.searchtext=&orderby=-importance&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.milestone%3Alist=86388&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on&search=Search Thanks, -Alex > Thanks, > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From miguel at mlavalle.com Fri Aug 24 16:03:53 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 24 Aug 2018 11:03:53 -0500 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: References: <9248BC68-780C-4A6F-8236-F381C5A78D56@redhat.com> Message-ID: Gilles, Ok. Added the patches in Gerrit to this coming Tuesday Neutron weekly meeting agenda. I will highlight the patches during the meeting Regards On Thu, Aug 23, 2018 at 7:09 PM, Gilles Dubreuil wrote: > > > On 24/08/18 04:58, Slawomir Kaplonski wrote: > >> Hi Miguel, >> >> I’m not sure but maybe You were looking for those patches: >> >> https://review.openstack.org/#/q/project:openstack/neutron+b >> ranch:feature/graphql >> >> > Yes that's the one, it's under Tristan Cacqueray name as he helped getting > started. > > Wiadomość napisana przez Miguel Lavalle w dniu >>> 23.08.2018, o godz. 
18:57: >>> >>> Hi Gilles, >>> >>> Ed pinged me earlier today in IRC in regards to this topic. After >>> reading your message, I assumed that you had patches up for review in >>> Gerrit. I looked for them, with the intent to list them in the agenda of >>> the next Neutron team meeting, to draw attention to them. I couldn't find >>> any, though: https://review.openstack.org/# >>> /q/owner:%22Gilles+Dubreuil+%253Cgdubreui%2540redhat.com%253E%22 >>> >>> So, how can we help? This is our meetings schedule: >>> http://eavesdrop.openstack.org/#Neutron_Team_Meeting. Given that you >>> are Down Under at UTC+10, the most convenient meeting for you is the one on >>> Monday (even weeks), which would be Tuesday at 7am for you. Please note >>> that we have an on demand section in our agenda: >>> https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda. Feel >>> free to add topics in that section when you have something to discuss with >>> the Neutron team. >>> >> > Now that we have a working base API serving GraphQL requests we need to do > provide the data in respect of Oslo Policy and such. > > Thanks for the pointers, I'll add the latter to the Agenda and will be at > next meeting. > > > > >>> Best regards >>> >>> Miguel >>> >>> On Sun, Aug 19, 2018 at 10:57 PM, Gilles Dubreuil >>> wrote: >>> >>> >>> On 25/07/18 23:48, Ed Leafe wrote: >>> On Jun 6, 2018, at 7:35 PM, Gilles Dubreuil wrote: >>> The branch is now available under feature/graphql on the neutron core >>> repository [1]. >>> I wanted to follow up with you on this effort. I haven’t seen any >>> activity on StoryBoard for several weeks now, and wanted to be sure that >>> there was nothing blocking you that we could help with. >>> >>> >>> -- Ed Leafe >>> >>> >>> >>> Hi Ed, >>> >>> Thanks for following up. >>> >>> There has been 2 essential counterproductive factors to the effort. >>> >>> The first is that I've been busy attending issues on other part of my >>> job. 
>>> The second one is the lack of response/follow-up from the Neutron core >>> team. >>> >>> We have all the plumbing in place but we need to layer the data through >>> oslo policies. >>> >>> Cheers, >>> Gilles >>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email: gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sfinucan at redhat.com Fri Aug 24 16:15:06 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 24 Aug 2018 17:15:06 +0100 Subject: [openstack-dev] [nova][neutron] numa aware vswitch In-Reply-To: References: <2EE296D083DF2940BF4EBB91D39BB89F3BBF05C0@shsmsx102.ccr.corp.intel.com> Message-ID: <492b65f562d3deb2f8fcb55b5c981f057b24cfa8.camel@redhat.com> On Fri, 2018-08-24 at 09:13 -0500, Matt Riedemann wrote: > On 8/24/2018 8:58 AM, Stephen Finucane wrote: > > Using this won't add a NUMA topology - it'll just control how any > > topology present will be mapped to the guest. You need to enable > > dedicated CPUs or a explicitly request a NUMA topology for this to > > work. > > > > openstack flavor set --property hw:numa_nodes=1 1 > > > > > > > > openstack flavor set --property hw:cpu_policy=dedicated 1 > > > > > > This is perhaps something that we could change in the future, though I > > haven't given it much thought yet. > > Looks like the admin guide [1] should be updated to at least refer to > the flavor user guide on setting up these types of flavors? > > [1] https://docs.openstack.org/nova/latest/admin/networking.html#numa-affinity Good idea. https://review.openstack.org/596393 Stephen From cdent+os at anticdent.org Fri Aug 24 16:25:02 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 24 Aug 2018 17:25:02 +0100 (BST) Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: Message-ID: On Fri, 24 Aug 2018, Chris Dent wrote: > That work is in gerrit at > > https://review.openstack.org/#/c/596291/ > > with a hopefully clear commit message about what's going on. As with > the rest of this work, this is not something to merge, rather an > experiment to learn from. 
The hot spots in the changes are relatively limited and about what you would expect so, with luck, should be pretty easy to deal with, some of them even before we actually do any extracting (to enhance the boundaries between the two services). After some prompting from gibi, that code has now been adjusted so that requirements.txt and tox.ini [1] make sure that the extracted placement branch is installed into the test virtualenvs. So in the gate the unit and functional tests pass. Other jobs do not because of [1]. In the intervening time I've taken that code, built a devstack that uses a nova-placement-api wsgi script that uses nova.conf and the extracted placement code. It runs against the nova-api database. Created a few servers. Worked. Then I switched the devstack at placement-unit unit file to point to the placement-api wsgi script, and configured /etc/placement/placement.conf to have a [placement_database]/connection of the nova-api db. Created a few servers. Worked. Thanks. [1] As far as I can tell a requirements.txt entry of -e git+https://github.com/cdent/placement-1.git@cd/make-it-work#egg=placement will install just fine with 'pip install -r requirements.txt', but if I do 'pip install nova' and that line is in requirements.txt it does not work. This means I had to change tox.ini to have a deps setting of: deps = -r{toxinidir}/test-requirements.txt -r{toxinidir}/requirements.txt to get the functional and unit tests to build working virtualenvs. That this is not happening in the dsvm-based zuul jobs means that the tests can't run or pass. What's going on here? Ideas? 
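One guess at the pip behaviour Chris describes: a requirements file handed to `pip install -r` may contain pip-specific options such as `-e`, but when a package like nova is installed directly, its dependencies come from `install_requires` metadata, which only admits plain PEP 508 specifiers — so an editable VCS line can never take effect there. A hedged little triage helper, written under that assumption (the prefix list and `pip_only_lines` are illustrative, not pip internals):

```python
# Flag requirements-file lines that only `pip install -r` can honor
# (pip-specific options), as opposed to plain PEP 508 specifiers that
# survive into package metadata such as install_requires.
PIP_ONLY_PREFIXES = ("-e ", "--editable ", "-r ", "-c ")


def pip_only_lines(requirements_text):
    flagged = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if line.startswith(PIP_ONLY_PREFIXES):
            flagged.append(line)
    return flagged


reqs = """
oslo.config>=5.2.0
-e git+https://github.com/cdent/placement-1.git@cd/make-it-work#egg=placement
"""
print(pip_only_lines(reqs))
# ['-e git+https://github.com/cdent/placement-1.git@cd/make-it-work#egg=placement']
```

If that guess is right, the tox.ini `deps` workaround makes sense: tox feeds the file back through `pip install -r`, which is the only code path that understands the `-e` flag.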
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From lbragstad at gmail.com Fri Aug 24 16:42:32 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 24 Aug 2018 11:42:32 -0500 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 20 August 2018 In-Reply-To: <1535123731.1775331.1485067120.6BDA6AA8@webmail.messagingengine.com> References: <1535123731.1775331.1485067120.6BDA6AA8@webmail.messagingengine.com> Message-ID: <118df264-800b-c35e-c948-75f003117ffd@gmail.com> On 08/24/2018 10:15 AM, Colleen Murphy wrote: > # Keystone Team Update - Week of 20 August 2018 > > ## News > > We ended up releasing an RC2 after all in order to include placeholder sqlalchemy migrations for Rocky, thanks wxy for catching it! > > ## Open Specs > > Search query: https://bit.ly/2Pi6dGj > > Lance reproposed the auth receipts and application credentials specs that we punted on last cycle for Stein. > > ## Recently Merged Changes > > Search query: https://bit.ly/2IACk3F > > We merged 13 changes this week. > > ## Changes that need Attention > > Search query: https://bit.ly/2wv7QLK > > There are 75 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. > > If that seems like a lot more than last week, it's because someone has helpfully proposed many patches supporting the python3-first community goal[1]. However, they haven't coordinated with the goal champions and have missed some steps[2], like proposing the removal of jobs from project-config and proposing jobs to the stable branches. I would recommend coordinating with the python3-first goal champions on merging these patches. The good news is that all of our projects seem to work with python 3.6! > > [1] https://governance.openstack.org/tc/goals/stein/python3-first.html > [2] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133610.html > > ## Bugs > > This week we opened 4 new bugs and closed 1. 
> > Bugs opened (4)  > Bug #1788415 (keystone:High) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1788415  > Bug #1788694 (keystone:High) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1788694  > Bug #1787874 (keystone:Medium) opened by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1787874  > Bug #1788183 (oslo.policy:Undecided) opened by Stephen Finucane https://bugs.launchpad.net/oslo.policy/+bug/1788183  > > Bugs closed (1)  > Bug #1771203 (python-keystoneclient:Undecided) https://bugs.launchpad.net/python-keystoneclient/+bug/1771203  > > Bugs fixed (0) > > ## Milestone Outlook > > https://releases.openstack.org/rocky/schedule.html > > We're at the end of the RC period with the official release happening next week. > > ## Shout-outs > > Thanks everyone for a great release! ++ I can't say thanks enough to everyone who contributes to this in some way, shape, or form. I'm looking forward to Stein :) > > ## Help with this newsletter > > Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter > Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jillr at redhat.com Fri Aug 24 17:44:27 2018 From: jillr at redhat.com (Jill Rouleau) Date: Fri, 24 Aug 2018 10:44:27 -0700 Subject: [openstack-dev] [tripleo] ansible roles in tripleo In-Reply-To: References: <1534269113.6400.11.camel@redhat.com> Message-ID: <1535132667.4697.9.camel@redhat.com> On Thu, 2018-08-23 at 10:42 -0400, Dan Prince wrote: > On Tue, Aug 14, 2018 at 1:53 PM Jill Rouleau wrote: > > > > > > Hey folks, > > > > Like Alex mentioned[0] earlier, we've created a bunch of ansible > > roles > > for tripleo specific bits.  The idea is to start putting some basic > > cookiecutter type things in them to get things started, then move > > some > > low-hanging fruit out of tripleo-heat-templates and into the > > appropriate > > roles.  For example, docker/services/keystone.yaml could have > > upgrade_tasks and fast_forward_upgrade_tasks moved into > > ansible-role-tripleo-keystone/tasks/(upgrade.yml|fast_forward_upgrade.yml), and > > the > > t-h-t updated to > > include_role: ansible-role-tripleo-keystone > >   tasks_from: upgrade.yml > > without having to modify any puppet or heat directives. > > > > This would let us define some patterns for implementing these > > tripleo > > roles during Stein while looking at how we can make use of ansible > > for > > things like core config. > I like the idea of consolidating the Ansible stuff and getting out of > the practice of inlining it into t-h-t. Especially the "core config" > which I take to mean moving away from Puppet and towards Ansible for > service level configuration. But presumably we are going to rely on > the upstream OpenStack ansible-os_* projects to do the heavy config > lifting for us here though right? We won't have to do much on our side > to leverage that I hope other than translating old hiera to equivalent > settings for the config files to ensure some backwards compatibility.
> We'll hopefully be able to rely on the OSA roles for a lot of the config, yes, but there will still be a fair bit of TripleO specific stuff that will need to be handled, and that's what we plan to do in these ansible-role-tripleo-* repos.   > While I agree with the goals I do wonder if the sheer number of git > repos we've created here is needed. Like with puppet-tripleo we were > able to combine a set of "small lightweight" manifests in a way to > wrap them around the upstream Puppet modules. Why not do the same with > ansible-role-tripleo? My concern is that we've created so many cookie > cutter repos with boilerplate code in them that ends up being much > heavier than the files which will actually reside in many of these > repos. This in addition to the extra review work and RPM packages we > need to constantly maintain. > In theory it should be roughly the same amount of commits/review work, just a question of what repo they go to - service specific patches go to the appropriate role and shared plugins, libs, etc go to the tripleo-ansible project repo. We want the roles to be modular rather than monolithic so only the roles that are being used in a given environment need to be pulled in.  Also by having them separated, they should be easier to parse and contribute to.  Yes it's a higher number of repos that could be contributed to, but when doing so a person won't have to mentally frontload how all of the possible things work just to be able to add an upgrade task for service $foo like it is today with t-h-t.  Unless there's a different breakdown/layout you're thinking of beyond "dump everything in one place"? I'm interested in other options if we have some to reduce packaging or maintenance overhead.  With other deployers I've done stable branches checked out straight from git, but I doubt that would fly for downstream.  
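[Editor's note: as a concrete illustration of the include_role idea quoted at the top of the thread, the t-h-t side could carry a task like the following. This is a sketch; the role and file names follow the keystone example above, and note that the Ansible module takes the role via a `name` key.]

```yaml
# Sketch: a service template delegating its upgrade steps to a
# dedicated tripleo role, per the proposal above. Names are illustrative.
- name: Run keystone upgrade tasks from its tripleo role
  include_role:
    name: ansible-role-tripleo-keystone
    tasks_from: upgrade.yml
```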
We could push the roles to Ansible Galaxy but we would need to think about how that would work for offline deploys, and they still need to be maintained there; it's just painting the problem a different color. - Jill > Dan > > > > > > > t-h-t and config-download will still drive the vast majority of > > playbook > > creation for now, but for new playbooks (such as for operations > > tasks) > > tripleo-ansible[1] would be our project directory. > > > > So in addition to the larger conversation about how deployers can > > start > > to standardize how we're all using ansible, I'd like to also have a > > tripleo-specific conversation at PTG on how we can break out some of > > our > > ansible that's currently embedded in t-h-t into more modular and > > flexible roles. > > > > Cheers, > > Jill > > > > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133119.html > > [1] https://git.openstack.org/cgit/openstack/tripleo-ansible/tree/ > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From kennelson11 at gmail.com Fri Aug 24 18:15:26 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 24 Aug 2018 11:15:26 -0700 Subject: [openstack-dev] Berlin Community Contributor Awards Message-ID: Hello Everyone!
As we approach the Summit (still a ways away thankfully), I thought I would kick off the Community Contributor Award nominations early this round. For those of you that already know what they are, here is the form[1]. For those of you that have never heard of the CCA, I'll briefly explain what they are :) We all know people in the community that do the dirty jobs, we all know people that will bend over backwards trying to help someone new, we all know someone that is a savant in some area of the code we could never hope to understand. These people rarely get the thanks they deserve and the Community Contributor Awards are a chance to make sure they know that they are appreciated for the amazing work they do and skills they have. So go forth and nominate these amazing community members[1]! Nominations will close on October 21st at 7:00 UTC and winners will be announced at the OpenStack Summit in Berlin. -Kendall (diablo_rojo) [1] https://openstackfoundation.formstack.com/forms/berlin_stein_ccas -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Fri Aug 24 18:16:23 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 24 Aug 2018 13:16:23 -0500 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> <1535045097-sup-986@lrrr.local> <1caecc54-c681-cad6-9664-8281ab2d4323@nemebean.com> Message-ID: <13cd3f9f-0d2a-361b-7c75-5c07be2b1c17@fried.cc> So... Restore the PS of the oslo_utils version that exposed the global [1]? Or use the forced-singleton pattern from nova [2] to put it in its own importable module, e.g. oslo_utils.uuidutils.uuidsentinel? 
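[Editor's note: for readers following along, the nova helper referenced in [2] boils down to something like the sketch below. This illustrates the forced-singleton idea, not the eventual oslo API; the `uuids` name and module layout here are illustrative.]

```python
# Sketch of nova's uuidsentinel-style helper: attribute access returns a
# stable, randomly generated UUID string per name, like mock.sentinel but
# with real UUID values. Names here are illustrative, not the oslo API.
import uuid


class UUIDSentinels(object):
    def __init__(self):
        self._sentinels = {}

    def __getattr__(self, name):
        # __getattr__ only fires for unknown attributes, so _sentinels
        # itself (set in __init__) is looked up normally.
        if name.startswith('_'):
            raise AttributeError('Sentinel names may not start with "_"')
        if name not in self._sentinels:
            self._sentinels[name] = str(uuid.uuid4())
        return self._sentinels[name]


# The nova version "forces" a process-wide singleton by replacing the
# module itself in sys.modules (sys.modules[__name__] = UUIDSentinels()),
# so every importer shares one instance. A module-level instance is the
# less magical equivalent for this sketch:
uuids = UUIDSentinels()
```

A test can then write `do_a_thing_with(uuids.foo)` and later compare against `uuids.foo` and get the same value back, while `uuids.bar` yields a distinct UUID; a thread-safe variant would guard the dict update with a lock.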
(FTR, "import only modules" is a thing for me too, but I've noticed it doesn't seem to be a hard and fast rule in OpenStack; and in this case it seemed most important to emulate the existing syntax+behavior for consumers.) -efried [1] https://review.openstack.org/#/c/594179/2/oslo_utils/uuidutils.py [2] https://github.com/openstack/nova/blob/a421bd2a8c3b549c603df7860e6357738e79c7c3/nova/tests/uuidsentinel.py#L30 On 08/23/2018 11:23 PM, Doug Hellmann wrote: > > >> On Aug 23, 2018, at 4:01 PM, Ben Nemec wrote: >> >> >> >>> On 08/23/2018 12:25 PM, Doug Hellmann wrote: >>> Excerpts from Eric Fried's message of 2018-08-23 09:51:21 -0500: >>>> Do you mean an actual fixture, that would be used like: >>>> >>>> class MyTestCase(testtools.TestCase): >>>> def setUp(self): >>>> self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids >>>> >>>> def test_foo(self): >>>> do_a_thing_with(self.uuids.foo) >>>> >>>> ? >>>> >>>> That's... okay I guess, but the refactoring necessary to cut over to it >>>> will now entail adding 'self.' to every reference. Is there any way >>>> around that? >>> That is what I had envisioned, yes. In the absence of a global, >>> which we do not want, what other API would you propose? >> >> If we put it in oslotest instead, would the global still be a problem? Especially since mock has already established a pattern for this functionality? > > I guess all of the people who complained so loudly about the global in oslo.config are gone? > > If we don’t care about the global then we could just put the code from Eric’s threadsafe version in oslo.utils somewhere. 
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From cdent+os at anticdent.org Fri Aug 24 18:23:33 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 24 Aug 2018 19:23:33 +0100 (BST) Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> <1535045097-sup-986@lrrr.local> <1caecc54-c681-cad6-9664-8281ab2d4323@nemebean.com> Message-ID: On Fri, 24 Aug 2018, Doug Hellmann wrote: > I guess all of the people who complained so loudly about the global in oslo.config are gone? It's a different context. In a testing environment where there is already a well established pattern of use it's not a big deal. Global in oslo.config is still horrible, but again: a well established pattern of use. This is part of why I think it is better positioned in oslotest as that signals its limitations. However, like I said in my other message, copying nova's thing has proven fine. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sean.mcginnis at gmx.com Fri Aug 24 18:36:13 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 24 Aug 2018 13:36:13 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: Message-ID: <20180824183613.GB23577@sm-workstation> > > After some prompting from gibi, that code has now been adjusted so > that requirements.txt and tox.ini [1] make sure that the extract > placement branch is installed into the test virtualenvs. So in the > gate the unit and functional tests pass. Other jobs do not because > of [1].
> > In the intervening time I've taken that code, built a devstack that > uses a nova-placement-api wsgi script that uses nova.conf and the > extracted placement code. It runs against the nova-api database. > > Created a few servers. Worked. > Excellent! > Then I switched the devstack@placement-unit unit file to point to > the placement-api wsgi script, and configured > /etc/placement/placement.conf to have a > [placement_database]/connection of the nova-api db. > > Created a few servers. Worked. > > Thanks. > > [1] As far as I can tell a requirements.txt entry of > > -e git+https://github.com/cdent/placement-1.git@cd/make-it-work#egg=placement > > will install just fine with 'pip install -r requirements.txt', but > if I do 'pip install nova' and that line is in requirements.txt it > does not work. This means I had to change tox.ini to have a deps > setting of: > > deps = -r{toxinidir}/test-requirements.txt > -r{toxinidir}/requirements.txt > > to get the functional and unit tests to build working virtualenvs. > That this is not happening in the dsvm-based zuul jobs means that the > tests can't run or pass. What's going on here? Ideas? Just conjecture on my part, but I know we have it documented somewhere that URL paths to requirements are not allowed. Maybe we do something to actively prevent that? From emilien at redhat.com Fri Aug 24 18:40:01 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 24 Aug 2018 14:40:01 -0400 Subject: [openstack-dev] [tripleo] The Weekly Owl - 29th Edition Message-ID: Welcome to the twenty-ninthest edition of a weekly update in TripleO world! The goal is to provide a short reading (less than 5 minutes) to learn what's new this week. Any contributions and feedback are welcome.
Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-August/133094.html General announcements ===================================================================== +--> This week we released Rocky RC1, branched stable/rocky and unless there are critical bugs we'll call it our final stable release. +--> The team is preparing for the next PTG: https://etherpad.openstack.org/p/tripleo-ptg-stein CI status ===================================================================== +--> Sprint theme: Zuul v3 migration ( https://trello.com/b/U1ITy0cu/tripleo-and-rdo-ci?menu=filter&filter=label:Sprint%2018%20CI ) +--> The Ruck and Rover for this sprint are Marios and Wes. Please tell them any CI issue. +--> Promotion on master is 11 days, 1 day on Rocky, 3 days on Queens, 3 days on Pike and 1 day on Ocata. Upgrades ===================================================================== +--> Adding support for upgrades when OpenShift is deployed. Containers ===================================================================== +--> Efforts to support Podman tracked here: https://trello.com/b/S8TmOU0u/tripleo-podman config-download ===================================================================== +--> This squad is down and we move forward with the Edge squad. Edge ===================================================================== +--> New squad created by James: https://etherpad.openstack.org/p/tripleo-edge-squad-status (more to come) Integration ===================================================================== +--> No updates this week. UI/CLI ===================================================================== +--> No updates this week.
Validations ===================================================================== +--> No updates this week, reviews are needed: https://etherpad.openstack.org/p/tripleo-validations-squad-status Networking ===================================================================== +--> Good progress on Ansible ML2 driver Workflows ===================================================================== +--> Planning Stein: better Ansible integration, UI convergence, etc. Security ===================================================================== +--> Working on SElinux for containers (related to podman integration mainly) Owl fact ===================================================================== "One single Owl can go fast. Multiple owls, together, can go far." Source: a mix of an African proverb and my Friday-afternoon imagination. Thank you all for reading and stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Fri Aug 24 18:57:40 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 24 Aug 2018 18:57:40 -0000 Subject: [openstack-dev] nova 18.0.0.0rc3 (rocky) Message-ID: Hello everyone, A new release candidate for nova for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/nova/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/nova/log/?h=stable/rocky Release notes for nova can be found at: https://docs.openstack.org/releasenotes/nova/ From vdrok at mirantis.com Fri Aug 24 19:08:41 2018 From: vdrok at mirantis.com (Vladyslav Drok) Date: Fri, 24 Aug 2018 22:08:41 +0300 Subject: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes In-Reply-To: <29D3778C-1E7A-4156-A840-1C736FA74875@cisco.com> References: <29D3778C-1E7A-4156-A840-1C736FA74875@cisco.com> Message-ID: +1 to all the changes. On Fri, Aug 24, 2018 at 12:12 PM Sam Betts (sambetts) wrote: > +1 > > > > Sam > > > > On 23/08/2018, 21:38, "Mark Goddard" wrote: > > > > +1 > > > > On Thu, 23 Aug 2018, 20:43 Jim Rollenhagen, > wrote: > > ++ > > > > // jim > > > > On Thu, Aug 23, 2018 at 2:24 PM, Julia Kreger > wrote: > > Greetings everyone! > > In our team meeting this week we stumbled across the subject of > promoting contributors to be sub-project's core reviewers. > Traditionally it is something we've only addressed as needed or > desired by consensus with-in those sub-projects, but we were past due > time to take a look at the entire picture since not everything should > fall to ironic-core. > > And so, I've taken a look at our various repositories and I'm > proposing the following additions: > > For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya > Etingof[1]. Ilya has been actively involved with sushy, sushy-tools, > and virtualbmc this past cycle. I've found many of his reviews and > non-voting review comments insightful and willing to understand. He > has taken on some of the effort that is needed to maintain and keep > these tools usable for the community, and as such adding him to the > core group for these repositories makes lots of sense. > > For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2]. 
> Kaifeng has taken on some hard problems in ironic and > ironic-inspector, as well as brought up insightful feedback in > ironic-specs. They are demonstrating a solid understanding that I only > see growing as time goes on. > > For sushy-core: Debayan Ray[3]. Debayan has been involved with the > community for some time and has worked on sushy from early on in its > life. He has indicated it is near and dear to him, and he has been > actively reviewing and engaging in discussion on patchsets as his time > has permitted. > > With any addition it is good to look at inactivity as well. It saddens > me to say that we've had some contributors move on as priorities have > shifted to where they are no longer involved with the ironic > community. Each person listed below has been inactive for a year or > more and is no longer active in the ironic community. As such I've > removed their group membership from the sub-project core reviewer > groups. Should they return, we will welcome them back to the community > with open arms. 
> > bifrost-core: Stephanie Miller[4] > ironic-inspector-core: Anton Arefivev[5] > ironic-ui-core: Peter Peila[6], Beth Elwell[7] > > Thanks, > > -Julia > > [1]: http://stackalytics.com/?user_id=etingof&metric=marks > [2]: http://stackalytics.com/?user_id=kaifeng&metric=marks > [3]: http://stackalytics.com/?user_id=deray&metric=marks&release=all > [4]: http://stackalytics.com/?metric=marks&release=all&user_id=stephaneeee > [5]: http://stackalytics.com/?user_id=aarefiev&metric=marks > [6]: http://stackalytics.com/?metric=marks&release=all&user_id=ppiela > [7]: > http://stackalytics.com/?metric=marks&release=all&user_id=bethelwell&module=ironic-ui > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Fri Aug 24 20:01:07 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 15:01:07 -0500 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: Message-ID: <071940ca-bf30-30a7-7e51-fa1d8e3c0aa4@gmail.com> On 8/22/2018 9:14 PM, Sam Morrison wrote: > I think in our case we’d only migrate between cells if we know the network and storage is accessible and would never do it if not. > Thinking moving from old to new hardware at a cell level. If it's done via the resize API at the top, initiated by a non-admin user, how would you prevent it? We don't really know if we're going across cell boundaries until the scheduler picks a host, and today we restrict all move operations to within the same cell. But that's part of the problem that needs addressing - how to tell the scheduler when it's OK to get target hosts for a move from all cells rather than the cell that the server is currently in. > > If storage and network isn’t available ideally it would fail at the api request. Not sure this is something we can really tell beforehand in the API, but maybe possible depending on whatever we come up with regarding volumes and ports. I expect this is a whole new orchestrated task in the (super)conductor when it happens. So while I think about using shelve/unshelve from a compute operation standpoint, I don't want to try and shoehorn this into existing conductor tasks. > > There is also ceph backed instances and so this is also something to take into account which nova would be responsible for. Not everyone is using ceph and it's not really something the API is aware of...at least not today - but long-term with shared storage providers in placement we might be able to leverage this for non-volume-backed instances, i.e. if we know the source and target host are on the same shared storage, regardless of cell boundary, we could just move rather than use snapshots (shelve). 
But I think phase1 is easiest universally if we are using snapshots to get from cell 1 to cell 2. > > I'll be in Denver so we can discuss more there too. Awesome. -- Thanks, Matt From mriedemos at gmail.com Fri Aug 24 20:11:22 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 15:11:22 -0500 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <20180823170756.sz5qj2lxdy4i4od2@localhost> References: <20180823104210.kgctxfjiq47uru34@localhost> <20180823170756.sz5qj2lxdy4i4od2@localhost> Message-ID: <1c80dd40-a483-1159-d7b5-dacd9c7ab5f9@gmail.com> On 8/23/2018 12:07 PM, Gorka Eguileor wrote: > I haven't checked the PTG agenda yet, but is there a meeting on this? > Because we may want to have one to try to understand the requirements > and figure out if there's a way to do it with current Cinder > functionality or if we'd need something new. I don't see any set schedule yet for topics like we've done in the past, I'll ask Mel since time is getting short (~2 weeks out now). But I have this as an item for discussion in the etherpad [1]. In previous PTGs, we usually have 3 days for (mostly) vertical team stuff with Wednesday being our big topics days split into morning and afternoon, e.g. cells and placement, then Thursday is split into 1-2 hour cross-project sessions, e.g. nova/cinder, nova/neutron, etc, and then Friday is the miscellaneous everything else day for stuff on the etherpad. [1] https://etherpad.openstack.org/p/nova-ptg-stein -- Thanks, Matt From whayutin at redhat.com Fri Aug 24 20:16:39 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 24 Aug 2018 14:16:39 -0600 Subject: [openstack-dev] [tripleo] The Weekly Owl - 29th Edition In-Reply-To: References: Message-ID: On Fri, Aug 24, 2018 at 2:40 PM Emilien Macchi wrote: > Welcome to the twenty-ninthest edition of a weekly update in TripleO world! > The goal is to provide a short reading (less than 5 minutes) to > learn what's new this week.
> Any contributions and feedback are welcome. > Link to the previous version: > http://lists.openstack.org/pipermail/openstack-dev/2018-August/133094.html > > General announcements > ===================================================================== > +--> This week we released Rocky RC1, branched stable/rocky and unless > there are critical bugs we'll call it our final stable release. > +--> The team is preparing for the next PTG: > https://etherpad.openstack.org/p/tripleo-ptg-stein > > CI status > ===================================================================== > +--> Sprint theme: Zuul v3 migration ( > https://trello.com/b/U1ITy0cu/tripleo-and-rdo-ci?menu=filter&filter=label:Sprint%2018%20CI > ) > +--> The Ruck and Rover for this sprint are Marios and Wes. Please tell > them any CI issue. > It's actually Sorin and myself while Marios is on PTO. Might as well take the opportunity to welcome Sorin to the TripleO team :)) > +--> Promotion on master is 11 days, 1 day on Rocky, 3 days on Queens, 3 > days on Pike and 1 day on Ocata. > > Upgrades > ===================================================================== > +--> Adding support for upgrades when OpenShift is deployed. > > Containers > ===================================================================== > +--> Efforts to support Podman tracked here: > https://trello.com/b/S8TmOU0u/tripleo-podman > > config-download > ===================================================================== > +--> This squad is down and we move forward with the Edge squad. > > Edge > ===================================================================== > +--> New squad created by James: > https://etherpad.openstack.org/p/tripleo-edge-squad-status (more to come) > > Integration > ===================================================================== > +--> No updates this week. > > UI/CLI > ===================================================================== > +--> No updates this week.
> > Validations > ===================================================================== > +--> No updates this week, reviews are needed: > https://etherpad.openstack.org/p/tripleo-validations-squad-status > > Networking > ===================================================================== > +--> Good progress on Ansible ML2 driver > > Workflows > ===================================================================== > +--> Planning Stein: better Ansible integration, UI convergence, etc. > > Security > ===================================================================== > +--> Working on SElinux for containers (related to podman integration > mainly) > > Owl fact > ===================================================================== > "One single Owl can go fast. Multiple owls, together, can go far." > Source: a mix of an African proverb and my Friday-afternoon imagination. > > > Thank you all for reading and stay tuned! > -- > Your fellow reporter, Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ed at leafe.com Fri Aug 24 20:34:07 2018 From: ed at leafe.com (Ed Leafe) Date: Fri, 24 Aug 2018 15:34:07 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: Message-ID: <57DD8BFD-2B02-48D8-8DFF-A373010AD651@leafe.com> On Aug 24, 2018, at 7:36 AM, Chris Dent wrote: > Over the past few days a few of us have been experimenting with > extracting placement to its own repo, as has been discussed at > length on this list, and in some etherpads: > > https://etherpad.openstack.org/p/placement-extract-stein > https://etherpad.openstack.org/p/placement-extraction-file-notes > > As part of that, I've been doing some exploration to tease out the > issues we're going to hit as we do it. None of this is work that > will be merged, rather it is stuff to figure out what we need to > know to do the eventual merging correctly and efficiently. I’ve re-run the extraction, re-arranged the directories, and cleaned up most of the import pathing. The code is here: https://github.com/EdLeafe/placement. I did a forced push to remove the first attempt. -- Ed Leafe From mriedemos at gmail.com Fri Aug 24 21:08:34 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 16:08:34 -0500 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <20180823152242.GB23060@sm-workstation> References: <20180823152242.GB23060@sm-workstation> Message-ID: On 8/23/2018 10:22 AM, Sean McGinnis wrote: > I haven't gone through the workflow, but I thought shelve/unshelve could detach > the volume on shelving and reattach it on unshelve. In that workflow, assuming > the networking is in place to provide the connectivity, the nova compute host > would be connecting to the volume just like any other attach and should work > fine. The unknown or tricky part is making sure that there is the network > connectivity or routing in place for the compute host to be able to log in to > the storage target. 
Yeah that's also why I like shelve/unshelve as a start since it's doing volume detach from the source host in the source cell and volume attach to the target host in the target cell. Host aggregates in Nova, as a grouping concept, are not restricted to cells at all, so you could have hosts in the same aggregate which span cells, so I'd think that's what operators would be doing if they have network/storage spanning multiple cells. Having said that, host aggregates are not exposed to non-admin end users, so again, if we rely on a normal user to do this move operation via resize, the only way we can restrict the instance to another host in the same aggregate is via availability zones, which is the user-facing aggregate construct in nova. I know Sam would care about this because NeCTAR sets [cinder]/cross_az_attach=False in nova.conf so servers/volumes are restricted to the same AZ, but that's not the default, and specifying an AZ when you create a server is not required (although there is a config option in nova which allows operators to define a default AZ for the instance if the user didn't specify one). Anyway, my point is, there are a lot of "ifs" if it's not an operator/admin explicitly telling nova where to send the server if it's moving across cells. > > If it's the other scenario mentioned where the volume needs to be migrated from > one storage backend to another storage backend, then that may require a little > more work. The volume would need to be retype'd or migrated (storage migration) > from the original backend to the new backend. Yeah, the thing with retype/volume migration that isn't great is it triggers the swap_volume callback to the source host in nova, so if nova was orchestrating the volume retype/move, we'd need to wait for the swap volume to be done (not impossible) before proceeding, and only the libvirt driver implements the swap volume API. 
I've always wondered, what the hell do non-libvirt deployments do with respect to the volume retype/migration APIs in Cinder? Just disable them via policy? -- Thanks, Matt From mriedemos at gmail.com Fri Aug 24 21:10:07 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 16:10:07 -0500 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: <20180823152242.GB23060@sm-workstation> Message-ID: +operators On 8/24/2018 4:08 PM, Matt Riedemann wrote: > On 8/23/2018 10:22 AM, Sean McGinnis wrote: >> I haven't gone through the workflow, but I thought shelve/unshelve >> could detach >> the volume on shelving and reattach it on unshelve. In that workflow, >> assuming >> the networking is in place to provide the connectivity, the nova >> compute host >> would be connecting to the volume just like any other attach and >> should work >> fine. The unknown or tricky part is making sure that there is the network >> connectivity or routing in place for the compute host to be able to >> log in to >> the storage target. > > Yeah that's also why I like shelve/unshelve as a start since it's doing > volume detach from the source host in the source cell and volume attach > to the target host in the target cell. > > Host aggregates in Nova, as a grouping concept, are not restricted to > cells at all, so you could have hosts in the same aggregate which span > cells, so I'd think that's what operators would be doing if they have > network/storage spanning multiple cells. Having said that, host > aggregates are not exposed to non-admin end users, so again, if we rely > on a normal user to do this move operation via resize, the only way we > can restrict the instance to another host in the same aggregate is via > availability zones, which is the user-facing aggregate construct in > nova. 
I know Sam would care about this because NeCTAR sets > [cinder]/cross_az_attach=False in nova.conf so servers/volumes are > restricted to the same AZ, but that's not the default, and specifying an > AZ when you create a server is not required (although there is a config > option in nova which allows operators to define a default AZ for the > instance if the user didn't specify one). > > Anyway, my point is, there are a lot of "ifs" if it's not an > operator/admin explicitly telling nova where to send the server if it's > moving across cells. > >> >> If it's the other scenario mentioned where the volume needs to be >> migrated from >> one storage backend to another storage backend, then that may require >> a little >> more work. The volume would need to be retype'd or migrated (storage >> migration) >> from the original backend to the new backend. > > Yeah, the thing with retype/volume migration that isn't great is it > triggers the swap_volume callback to the source host in nova, so if nova > was orchestrating the volume retype/move, we'd need to wait for the swap > volume to be done (not impossible) before proceeding, and only the > libvirt driver implements the swap volume API. I've always wondered, > what the hell do non-libvirt deployments do with respect to the volume > retype/migration APIs in Cinder? Just disable them via policy? > -- Thanks, Matt From mriedemos at gmail.com Fri Aug 24 21:20:21 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 16:20:21 -0500 Subject: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: References: Message-ID: <8f84d6ce-dc05-d126-9309-b84a97625c8c@gmail.com> On 8/20/2018 10:29 AM, Matthew Booth wrote: > Secondly, is there any reason why we shouldn't just document then you > have to delete snapshots before doing a volume migration? 
Hopefully > some cinder folks or operators can chime in to let me know how to back > them up or somehow make them independent before doing this, at which > point the volume itself should be migratable? Coincidentally the volume migration API never had API reference documentation. I have that here now [1]. It clearly states the preconditions to migrate a volume based on code in the volume API. However, volume migration is admin-only by default and retype (essentially like resize) is admin-or-owner so non-admins can do it and specify to migrate. In general I think it's best to have preconditions for *any* API documented, so anything needed to perform a retype should be documented in the API, like that the volume can't have snapshots. [1] https://review.openstack.org/#/c/595379/ -- Thanks, Matt From mriedemos at gmail.com Fri Aug 24 21:23:06 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 16:23:06 -0500 Subject: [openstack-dev] [Openstack-operators] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: <20180821103628.dk3ok76fdruwsaut@lyarwood.usersys.redhat.com> References: <20180821103628.dk3ok76fdruwsaut@lyarwood.usersys.redhat.com> Message-ID: <3a1ffb06-6b8d-883f-f1dd-21921c3066e5@gmail.com> On 8/21/2018 5:36 AM, Lee Yarwood wrote: > I'm definitely in favor of hiding this from users eventually but > wouldn't this require some form of deprecation cycle? > > Warnings within the API documentation would also be useful and even > something we could backport to stable to highlight just how fragile this > API is ahead of any policy change. The swap volume API in nova defaults to admin-only policy rules by default, so for any users that are using it directly, they are (1) admins knowingly shooting themselves, or their users, in the foot or (2) operators have opened up the policy to non-admins (or some other role of user) to hit the API directly. I would ask why that is. 
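As a rough illustration of the precondition gating discussed above, a caller could run checks like these before attempting a volume migration; the specific rules are paraphrased for illustration, not copied from cinder's volume API:

```python
def volume_migration_blockers(volume):
    """Return the reasons a volume (as a plain dict) cannot be
    migrated, or an empty list when migration looks possible.

    The rules below paraphrase commonly documented preconditions
    (status, snapshots, group membership); they are illustrative,
    not cinder's authoritative rule set.
    """
    blockers = []
    if volume.get('status') not in ('available', 'in-use'):
        blockers.append('status must be available or in-use')
    if volume.get('snapshot_count', 0) > 0:
        blockers.append('volume must not have snapshots')
    if volume.get('group_id'):
        blockers.append('volume must not belong to a group')
    return blockers
```

Collecting every failed check in one pass also doubles as documentation of the preconditions, which is the gap the API reference change above is filling.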
-- Thanks, Matt From mriedemos at gmail.com Fri Aug 24 21:35:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 16:35:46 -0500 Subject: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: <20180822094620.kncry4ufbe6fwi5u@localhost> References: <20180822094620.kncry4ufbe6fwi5u@localhost> Message-ID: <374f9a6e-9bea-d047-8e99-c56b1612def9@gmail.com> On 8/22/2018 4:46 AM, Gorka Eguileor wrote: > The solution is conceptually simple. We add a new API microversion in > Cinder that adds an optional parameter called "generic_keep_source" > (defaults to False) to both migrate and retype operations. But if the problem is that users are not using the retype API and instead are hitting the compute swap volume API, they won't use this new parameter anyway. Again, retype is admin-or-owner but volume migration (in cinder) and swap volume (in nova) are both admin-only, so are admins calling swap volume directly or are people easing up the policy restrictions so non-admins can use these migration APIs? -- Thanks, Matt From lbragstad at gmail.com Fri Aug 24 21:45:09 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 24 Aug 2018 16:45:09 -0500 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 6 August 2018 In-Reply-To: <1ce4287a-e7b5-7640-855e-0207946bba0d@gmail.com> References: <1533915998.2993501.1470046096.3F011E8B@webmail.messagingengine.com> <672e0eee-1ccc-fc08-74fe-5468e5ee506b@catalyst.net.nz> <1ce4287a-e7b5-7640-855e-0207946bba0d@gmail.com> Message-ID: On 08/22/2018 07:49 AM, Lance Bragstad wrote: > > On 08/22/2018 03:23 AM, Adrian Turjak wrote: >> Bah! I saw this while on holiday and didn't get a chance to respond, >> sorry for being late to the conversation. >> >> On 11/08/18 3:46 AM, Colleen Murphy wrote: >>> ### Self-Service Keystone >>> >>> At the weekly meeting Adam suggested we make self-service keystone a focus point of the PTG[9].
Currently, policy limitations make it difficult for an unprivileged keystone user to get things done or to get information without the help of an administrator. There are some other projects that have been created to act as workflow proxies to mitigate keystone's limitations, such as Adjutant[10] (now an official OpenStack project) and Ksproj[11] (written by Kristi). The question is whether the primitives offered by keystone are sufficient building blocks for these external tools to leverage, or if we should be doing more of this logic within keystone. Certainly improving our RBAC model is going to be a major part of improving the self-service user experience. >>> >>> [9] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-121 >>> [10] https://adjutant.readthedocs.io/en/latest/ >>> [11] https://github.com/CCI-MOC/ksproj >> As you can probably expect, I'd love to be a part of any of these >> discussions. Anything I can nicely move to being logic directly >> supported in Keystone, the less I need to do in Adjutant. The majority >> of things though I think I can do reasonably well with the primitives >> Keystone gives me, and what I can't I tend to try and work with upstream >> to fill the gaps. >> >> System vs project scope helps a lot though, and I look forward to really >> playing with that. > Since it made sense to queue incorporating system scope after the flask > work, I just started working with that on the credentials API*. There is > a WIP series up for review that attempts to do a couple things [0]. > First it tries to incorporate system and project scope checking into the > API. Second it tries to be more explicit about protection test cases, > which I think is going to be important since we're adding another scope > type. We also support three different roles now and it would be nice to > clearly see who can do what in each case with tests. > > I'd be curious to get your feedback here if you have any. 
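The who-can-do-what matrix mentioned here (three roles across system and project scope) is a natural fit for a table-driven test. A minimal sketch in plain unittest, with a toy enforce() standing in for the real policy check and an illustrative truth table (not keystone's actual defaults):

```python
import itertools
import unittest

SCOPES = ('system', 'project')
ROLES = ('admin', 'member', 'reader')

# Expected outcome per persona for a hypothetical "list all credentials"
# check; the values are illustrative only.
CAN_LIST_ALL = {
    ('system', 'admin'): True,
    ('system', 'member'): True,
    ('system', 'reader'): True,
    ('project', 'admin'): False,
    ('project', 'member'): False,
    ('project', 'reader'): False,
}


def enforce(scope, role):
    """Toy policy check: only system-scoped tokens may list every
    user's credentials in this model."""
    return scope == 'system'


class TestCredentialPolicy(unittest.TestCase):
    def test_all_personas(self):
        # One loop covers all six personas instead of six copies of
        # the same test body; subTest labels each failure clearly.
        for scope, role in itertools.product(SCOPES, ROLES):
            with self.subTest(persona=(scope, role)):
                self.assertEqual(CAN_LIST_ALL[(scope, role)],
                                 enforce(scope, role))
```

The truth table makes the expected behavior of each persona explicit in one place, which keeps duplication from running rampant as more scope types are added.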
> > * Because the credentials API was already moved to flask and has room > for self-service improvements [1] > > [0] https://review.openstack.org/#/c/594547/ This should be passing tests at least now, but there are still some tests left to write. Most of what's in the patch is testing the new authorization scope (e.g. system). I'm currently taking advice on ways to extensively test six different personas without duplication running rampant across test cases (project admin, project member, project reader, system admin, system member, system reader). In summary, it does make the credential API much more self-service oriented, which is something we should try and do everywhere (I picked credentials first because it was already moved to flask). > [1] > https://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/credential.py#n21 > >> I sadly won't be at the PTG, but will be at the Berlin summit. Plus I >> have a lot of Adjutant work planned for Stein, a large chunk of which is >> refactors and reshuffling blueprints and writing up a roadmap, plus some >> better entry point tasks for new contributors. >> >>> ### Standalone Keystone >>> >>> Also at the meeting and during office hours, we revived the discussion of what it would take to have a standalone keystone be a useful identity provider for non-OpenStack projects[12][13]. First up we'd need to turn keystone into a fully-fledged SAML IdP, which it's not at the moment (which is a point of confusion in our documentation), or even add support for it to act as an OpenID Connect IdP. This would be relatively easy to do (or at least not impossible). Then the application would have to use keystonemiddleware or its own middleware to route requests to keystone to issue and validate tokens (this is one aspect where we've previously discussed whether JWT could benefit us). Then the question is what should a not-OpenStack application do with keystone's "scoped RBAC"? 
It would all depend on how the resources of the application are grouped and whether they care about multitenancy in some form. Likely each application would have different needs and it would be difficult to find a one-size-fits-all approach. We're interested to know whether anyone has a burning use case for something like this. >>> >>> [12] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-192 >>> [13] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-08-07.log.html#t2018-08-07T17:01:30 >> This one is interesting because another department at Catalyst is >> actually looking to use Keystone outside of the scope of OpenStack. They >> are building a SaaS platform, and they need authn, authz (with some >> basic RBAC), a service catalog (think API endpoint per software >> offering), and most of those things are useful outside of OpenStack. >> They can then use projects to signify a customer, and a project >> (customer) could have one or more users accessing the management GUIs, >> with roles giving them some RBAC. A large part of this is because they >> can then also piggy back on a lot of work our team has done with >> OpenStack and Keystone and even reuse some of our projects and tools for >> billing and other things (Adjutant maybe?). They could use KeystoneAuth >> for CLI and client tools, they can build their APIs using >> Keystonemiddleware. >> >> >> Then another reason why this actually interests the Catalyst Cloud team >> is because we actually use Keystone with an SQL backend for our public >> cloud, with the db in a multi-region galera cluster. Keystone is our >> Idp, we don't federate it, and we now have a reasonably passable 2FA >> option on it, with a better MFA option coming in Stein when I'm done >> with Auth Receipts. 
We actually kind of like Keystone for our authn, and >> because we didn't have any existing users when we first built our cloud >> so using vanilla Keystone seemed like a sensible solution. We had plans >> to migrate users and federate, or move to LDAP, but they never >> materialized because maintaining more systems didn't make sense and didn't >> add many useful benefits. Making Keystone a fully fledged Idp with SAML >> and OpenID support would be fantastic because we could then build a tiny >> single sign on around Keystone and use that for all our non-openstack >> services. >> >> In fact I had a prototype side project planned which would be a tiny >> Flask or Django app that would act as a single sign on for Keystone. It >> would have a login form that handles the new MFA process with auth >> receipts in Keystone, and on getting the token it would wrap that into >> an OpenID token which other systems could interpret. With the >> appropriate APIs for acting as a provider and most of those just doing >> user actions with that token in Keystone. In theory I could have made it >> a tiny entirely ephemeral app which only needs to know where keystone is >> (no admin creds). Basically a tiny Idp around Keystone. >> >> But if Keystone goes down the path of supporting SAML and OpenID then >> all we really need is a login GUI that supports auth receipts (and >> plugin support for different types of MFA to match ones in Keystone), >> which probably still should be a tiny side project rather than views in >> Keystone (should Keystone really serve HTML?), or requiring Horizon >> (Horizon could use it as a SSO). I would love to help with something >> like this if we do go down that path.
:) >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From mriedemos at gmail.com Fri Aug 24 21:45:57 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 16:45:57 -0500 Subject: [openstack-dev] [nova] [placement] compute nodes use of placement In-Reply-To: <0484851a-50af-cf28-137f-c967cc2b9b44@gmail.com> References: <0484851a-50af-cf28-137f-c967cc2b9b44@gmail.com> Message-ID: <80d18d7d-0007-73a0-b338-a95c0e6ca540@gmail.com> On 7/30/2018 1:55 PM, Jay Pipes wrote: > ack. will review shortly. thanks, Chris. For those on the edge of their seats at home, we have merged [1] in Stein and assuming things don't start failing in weird ways after some period of time, we'll probably backport it. OVH is already running with it. [1] https://review.openstack.org/#/c/520024/ -- Thanks, Matt From gouthampravi at gmail.com Fri Aug 24 22:28:02 2018 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 24 Aug 2018 15:28:02 -0700 Subject: [openstack-dev] [Tripleo] fluentd logging status In-Reply-To: References: Message-ID: On Fri, Aug 24, 2018 at 2:17 AM Juan Badia Payno wrote: > > Recently, I did a little test regarding fluentd logging on the gates master[1], queens[2], pike [3]. I don't like the status of it, I'm still working on them, but basically there are quite a lot of misconfigured logs and some services that are not configured at all. > > I think we need to put some effort into the logging. The purpose of this email is to point out that we need to put a little effort into the task.
> > First of all, I think we need to enable fluentd on all the scenarios, as it is on the tests [1][2][3] commented on the beginning of the email. Once everything is ok and some automatic test regarding logging is done they can be disabled. > > I'd love not to create a new bug for every misconfigured/unconfigured service, but if requested to grab more attention on it, I will open it. > > The plan I have in mind is something like: > * Make an initial picture of what the fluentd/log status is (from pike upwards). > * Fix all misconfigured services. (designate,...) > * Add the non-configured services. (manila,...) Awesome, I noticed this with manila just yesterday, and added it to my list of To-Do/cleanup. I'm glad you're taking note/working on it, please add me to review (gouthamr) / let me know if you'd like me to do something. > * Add an automated check to find a possible unconfigured/misconfigured problem. > > Any comments, doubts or questions are welcome > > Cheers, > Juan > > [1] https://review.openstack.org/594836 > [2] https://review.openstack.org/594838 > [3] https://review.openstack.org/594840 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Fri Aug 24 23:37:20 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 18:37:20 -0500 Subject: [openstack-dev] [nova] Deprecating Core/Disk/RamFilter Message-ID: <40ffccc5-4410-18cc-2862-77d528889ec3@gmail.com> This is just an FYI that I have proposed that we deprecate the core/ram/disk filters [1]. We should have probably done this back in Pike when we removed them from the default enabled_filters list and also deprecated the CachingScheduler, which is the only in-tree scheduler driver that benefits from enabling these filters. 
With the heal_allocations CLI, added in Rocky, we can probably drop the CachingScheduler in Stein so the pieces are falling into place. As we saw in a recent bug [2], having these enabled in Stein now causes blatantly incorrect filtering on ironic nodes. Comments are welcome here, the review, or in IRC. [1] https://review.openstack.org/#/c/596502/ [2] https://bugs.launchpad.net/tripleo/+bug/1787910 -- Thanks, Matt From mriedemos at gmail.com Fri Aug 24 23:51:08 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 18:51:08 -0500 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> <1535045097-sup-986@lrrr.local> <315eac7a-2fed-e2ae-538e-e589dea7cf93@gmail.com> <3f2131e5-6785-0429-e731-81c1287b39ff@fried.cc> Message-ID: On 8/23/2018 2:05 PM, Chris Dent wrote: > On Thu, 23 Aug 2018, Dan Smith wrote: > >> ...and it doesn't work like mock.sentinel does, which is part of the >> value. I really think we should put this wherever it needs to be so that >> it can continue to be as useful as is is today. Even if that means just >> copying it into another project -- it's not that complicated of a thing. > > Yeah, I agree. I had hoped that we could make something that was > generally useful, but its main value is its interface and if we > can't have that interface in a library, having it per codebase is no > biggie. For example it's been copied straight from nova into the > placement extractions experiments with no changes and, as one would > expect, works just fine. > > Unless people are wed to doing something else, Dan's right, let's > just do that. So just follow me here people, what if we had this common shared library where code could incubate and then we could write some tools to easily copy that common code into other projects... 
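For readers who haven't seen the helper under discussion: nova's uuidsentinel behaves like mock.sentinel except that each attribute is a stable, real UUID string, which matters when the value has to pass UUID validation. It is roughly this shape — a sketch of the interface, not a copy of nova's code:

```python
import uuid


class UUIDSentinels(object):
    """A mock.sentinel-style factory: each attribute name is minted as
    a random UUID string on first access, and every later access in the
    same process returns the identical value."""

    def __init__(self):
        self._sentinels = {}

    def __getattr__(self, name):
        # Only called for attributes not found normally; refuse
        # underscore-prefixed lookups so copy/pickle machinery behaves.
        if name.startswith('_'):
            raise AttributeError(name)
        return self._sentinels.setdefault(name, str(uuid.uuid4()))


uuids = UUIDSentinels()
```

Here uuids.instance1 always returns the same value while uuids.instance2 is a distinct valid UUID — the same property mock.sentinel gives you, which is also why simply copying these few lines into each consumer is a workable fallback.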
I'm pretty sure I could get said project approved as a top-level program under The Foundation and might even get a talk or two out of this idea. I can see the Intel money rolling in now... -- Thanks, Matt From fungi at yuggoth.org Sat Aug 25 00:01:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 25 Aug 2018 00:01:22 +0000 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> <1535045097-sup-986@lrrr.local> <315eac7a-2fed-e2ae-538e-e589dea7cf93@gmail.com> <3f2131e5-6785-0429-e731-81c1287b39ff@fried.cc> Message-ID: <20180825000122.m3rwtf4iv2t6buws@yuggoth.org> On 2018-08-24 18:51:08 -0500 (-0500), Matt Riedemann wrote: [...] > So just follow me here people, what if we had this common shared > library where code could incubate and then we could write some > tools to easily copy that common code into other projects... If we do this, can we at least put it in a consistent place in all projects? Maybe name the directory something like "openstack/common" just to make it obvious. > I'm pretty sure I could get said project approved as a top-level > program under The Foundation and might even get a talk or two out > of this idea. I can see the Intel money rolling in now... Seems like a sound idea. Can we call it "Nostalgia" for no particular reason? Though maybe "Recurring Nightmare" would be a more accurate choice. -- Jeremy Stanley From soulxu at gmail.com Sat Aug 25 00:08:11 2018 From: soulxu at gmail.com (Alex Xu) Date: Sat, 25 Aug 2018 08:08:11 +0800 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? 
In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: 2018-08-18 20:25 GMT+08:00 Chris Dent : > On Fri, 17 Aug 2018, Doug Hellmann wrote: > > If we ignore the political concerns in the short term, are there >> other projects actually interested in using placement? With what >> technical caveats? Perhaps with modifications of some sort to support >> the needs of those projects? >> > > I think ignoring the political concerns (in any term) is not > possible. We are a group of interacting humans, politics are always > present. Cordial but active debate to determine the best course of > action is warranted. > > (tl;dr: Let's have existing and potential placement contributors > decide its destiny.) > > Five topics I think are relevant here, in order of politics, least > to most: > > 1. Placement has been designed from the outset to have a hard > contract between it and the services that use it. Being embedded > and/or deeply associated with one other single service means that > that contract evolves in a way that is strongly coupled. We made > placement have an HTTP API, not use RPC, and not produce or consume > notifications because it is supposed to be bounded and independent. > Sharing code and human management doesn't enable that. As you'll > read below, placement's progress has been overly constrained by > compute. > > 2. There are other projects actively using placement, not merely > interested. If you search codesearch.o.o for terms like "resource > provider" you can find them. 
But to rattle off those that I'm aware > of (which I'm certain is an incomplete list): > > * Cyborg is actively working on using placement to track FPGA > e.g., https://review.openstack.org/#/c/577438/ > > * Blazar is working on using them for reservations: > https://review.openstack.org/#/q/status:open+project:openstack/blazar+branch:master+topic:bp/placement-api > > * Neutron has been reporting to placement for some time and has work > in progress on minimum bandwidth handling with the help of > placement: > https://review.openstack.org/#/q/status:open+project:openstack/neutron-lib+branch:master+topic:minimum-bandwidth-allocation-placement-api > > * Ironic uses resource classes to describe types of nodes > > * Mogan (which may or may not be dead, not clear) was intending to > track nodes with placement: > http://git.openstack.org/cgit/openstack/mogan-specs/tree/specs/pike/approved/track-resources-using-placement.rst > > * Zun is working to use placement for "unified resource management": > https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management > > * Cinder has had discussion about using placement to overcome race > conditions in its existing scheduling subsystem (a purpose to > which placement was explicitly designed). > > 3. Placement's direction and progress is heavily curtailed by the > choices and priorities that compute wants or needs to make. That > means that for the past year or more much of the effort in placement > has been devoted to eventually satisfying NFV use cases driven by > "enhanced platform awareness" to the detriment of the simple use > case of "get me some resource providers". Compute is under a lot of > pressure in this area, and is under-resourced, so placement's > progress is delayed by being in the (necessarily) narrow engine of > compute. Similarly, compute's overall progress is delayed because a > lot of attention is devoted to placement.
> > I think the relevance of that latter point has been under-estimated > by the voices that are hoping to keep placement near to nova. The > concern there has been that we need to continue iterating in concert > and quickly. I disagree with that from two angles. One is that we > _will_ continue to work in concert. We are OpenStack, and presumably > all the same people working on placement now will continue to do so, > and many of those are active contributors to nova. We will work > together. > > The other angle is that, actually, placement is several months ahead > of nova in terms of features and it would be to everyone's advantage if > placement, from a feature standpoint, took a time out (to extract) > while nova had a chance to catch up with fully implementing shared > providers, nested resource providers, consumer generations, resource > request groups, using the reshaper properly from the virt drivers, > having a fast forward upgrade script talking to PlacementDirect, and > other things that I'm not remembering right now. The placement side > for those things is in place. The work that it needs now is a > _diversity_ of callers (not just nova) so that the features can be > fully exercised and bugs and performance problems found. > > The projects above, which might like to--and at various times have > expressed desire to do so--work on features within placement that > would benefit their projects, are forced to compete with existing > priorities to get blueprint attention. Though runways seemed to help > a bit on that front this just-ending cycle, it's simply too dense a > competitive environment for good, clean progress. > > 4. While extracting the placement code into another repo within the > compute umbrella might help a small amount with some of the > competition described in item 3, it would be insufficient. The same > forces would apply.
Similarly, _if_ there are factors which are preventing some people > from being willing to participate with a compute-associated project, > a repo within compute is an insufficient break. > > Also, if we are going to go to the trouble of doing any kind of > disrupting transition of the placement code, we may as well take as > a big a step as possible in this one instance as these opportunities > are rare and our capacity for change is slow. I started working on > placement in early 2016, at that time we had plans to extract it to > "its own thing". We've passed the half-way point in 2018. > > 5. In OpenStack we have a tradition of the contributors having a > strong degree of self-determination. If that tradition is to be > upheld, then it would make sense that the people who designed and > wrote the code that is being extracted would get to choose what > happens with it. As much as Mel's and Dan's (only picking on them > here because they are the dissenting voices that have showed up so > far) input has been extremely important and helpful in the evolution > of placement, they are not those people. > > So my hope is that (in no particular order) Jay Pipes, Eric Fried, > Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, > Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to > placement whom I'm forgetting [1] would express their preference on > what they'd like to see happen. > Sorry, I didn't read all the replies; compared to 70 replies, I prefer to review some specs... English is heavy for me. I don't care much about the extraction. But in the current situation, I think placement contributors and nova contributors still need to work together; the reshape API is an example. So whether we extract placement or not, pretty sure nova and placement should work together.
And I really hope we won't have separate rooms at the PTG for placement and nova... I don't want to make a hard choice about which one to listen to... I'm already used to staying at one spot for the week now. > > At the same time, if people from neutron, cinder, blazar, zun, > mogan, ironic, and cyborg could express their preferences, we can get > through this by acclaim and get on with getting things done. > > Thank you. > > [1] My apologies if I have left you out. It's Saturday, I'm tired > from trying to make this happen for so long, and I'm using various > forms of git blame and git log to extract names from the git history > and there's some degree of magic and guessing going on. > > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davanum at gmail.com Sat Aug 25 01:09:10 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Fri, 24 Aug 2018 21:09:10 -0400 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: <20180825000122.m3rwtf4iv2t6buws@yuggoth.org> References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> <1535045097-sup-986@lrrr.local> <315eac7a-2fed-e2ae-538e-e589dea7cf93@gmail.com> <3f2131e5-6785-0429-e731-81c1287b39ff@fried.cc> <20180825000122.m3rwtf4iv2t6buws@yuggoth.org> Message-ID: On Fri, Aug 24, 2018 at 8:01 PM Jeremy Stanley wrote: > On 2018-08-24 18:51:08 -0500 (-0500), Matt Riedemann wrote: > [...]
> > So just follow me here people, what if we had this common shared > > library where code could incubate and then we could write some > > tools to easily copy that common code into other projects... > > If we do this, can we at least put it in a consistent place in all > projects? Maybe name the directory something like "openstack/common" > just to make it obvious. > > > I'm pretty sure I could get said project approved as a top-level > > program under The Foundation and might even get a talk or two out > > of this idea. I can see the Intel money rolling in now... > > Seems like a sound idea. Can we call it "Nostalgia" for no > particular reason? Though maybe "Recurring Nightmare" would be a > more accurate choice. > /me wakes up screaming!! > -- > Jeremy Stanley > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Sat Aug 25 04:11:05 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Sat, 25 Aug 2018 13:11:05 +0900 Subject: [openstack-dev] [Searchlight] Team meeting next week Message-ID: Dear team, I would like to organize a team meeting on Thursday next week: - Date: 30 August 2018 - Time: 15:00 UTC - Channel: #openstack-meeting-4 All existing core members and new contributors are welcome. Here is the Searchlight's Etherpad for Stein, all ideas are welcomed: https://etherpad.openstack.org/p/searchlight-stein-ptg Please reply or ping me on IRC (#openstack-searchlight, dangtrinhnt) if you want to join. 
Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Sat Aug 25 11:51:53 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sat, 25 Aug 2018 06:51:53 -0500 Subject: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: <8f84d6ce-dc05-d126-9309-b84a97625c8c@gmail.com> References: <8f84d6ce-dc05-d126-9309-b84a97625c8c@gmail.com> Message-ID: <20180825115153.GA3623@sm-workstation> On Fri, Aug 24, 2018 at 04:20:21PM -0500, Matt Riedemann wrote: > On 8/20/2018 10:29 AM, Matthew Booth wrote: > > Secondly, is there any reason why we shouldn't just document that you > > have to delete snapshots before doing a volume migration? Hopefully > > some cinder folks or operators can chime in to let me know how to back > > them up or somehow make them independent before doing this, at which > > point the volume itself should be migratable? > > Coincidentally the volume migration API never had API reference > documentation. I have that here now [1]. It clearly states the preconditions > to migrate a volume based on code in the volume API. However, volume > migration is admin-only by default and retype (essentially like resize) is > admin-or-owner so non-admins can do it and specify to migrate. In general I > think it's best to have preconditions for *any* API documented, so anything > needed to perform a retype should be documented in the API, like that the > volume can't have snapshots. That's where things get tricky though. There aren't really preconditions we can have as a blanket statement with the retype API. A retype can do a lot of different things, all dependent on what type you are coming from and trying to go to. There are some retypes where all it does is enable vendor flag ``foo`` on the volume with no change in any other state.
Then there are other retypes (using --migrate-policy on-demand) that completely move the volume from one backend to another one, copying every block along the way from the original to the new volume. It really depends on what types you are trying to retype to. > > [1] https://review.openstack.org/#/c/595379/ > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From stendulker at gmail.com Sat Aug 25 12:06:07 2018 From: stendulker at gmail.com (Shivanand Tendulker) Date: Sat, 25 Aug 2018 17:36:07 +0530 Subject: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes In-Reply-To: References: Message-ID: +1 to all proposed changes. Thanks and Regards Shivanand On Thu, Aug 23, 2018 at 11:54 PM, Julia Kreger wrote: > Greetings everyone! > > In our team meeting this week we stumbled across the subject of > promoting contributors to be sub-project's core reviewers. > Traditionally it is something we've only addressed as needed or > desired by consensus with-in those sub-projects, but we were past due > time to take a look at the entire picture since not everything should > fall to ironic-core. > > And so, I've taken a look at our various repositories and I'm > proposing the following additions: > > For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya > Etingof[1]. Ilya has been actively involved with sushy, sushy-tools, > and virtualbmc this past cycle. I've found many of his reviews and > non-voting review comments insightful and willing to understand. 
He > has taken on some of the effort that is needed to maintain and keep > these tools usable for the community, and as such adding him to the > core group for these repositories makes lots of sense. > > For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2]. > Kaifeng has taken on some hard problems in ironic and > ironic-inspector, as well as brought up insightful feedback in > ironic-specs. They are demonstrating a solid understanding that I only > see growing as time goes on. > > For sushy-core: Debayan Ray[3]. Debayan has been involved with the > community for some time and has worked on sushy from early on in its > life. He has indicated it is near and dear to him, and he has been > actively reviewing and engaging in discussion on patchsets as his time > has permitted. > > With any addition it is good to look at inactivity as well. It saddens > me to say that we've had some contributors move on as priorities have > shifted to where they are no longer involved with the ironic > community. Each person listed below has been inactive for a year or > more and is no longer active in the ironic community. As such I've > removed their group membership from the sub-project core reviewer > groups. Should they return, we will welcome them back to the community > with open arms. 
> > bifrost-core: Stephanie Miller[4] > ironic-inspector-core: Anton Arefivev[5] > ironic-ui-core: Peter Peila[6], Beth Elwell[7] > > Thanks, > > -Julia > > [1]: http://stackalytics.com/?user_id=etingof&metric=marks > [2]: http://stackalytics.com/?user_id=kaifeng&metric=marks > [3]: http://stackalytics.com/?user_id=deray&metric=marks&release=all > [4]: http://stackalytics.com/?metric=marks&release=all&user_id=stephaneeee > [5]: http://stackalytics.com/?user_id=aarefiev&metric=marks > [6]: http://stackalytics.com/?metric=marks&release=all&user_id=ppiela > [7]: http://stackalytics.com/?metric=marks&release=all&user_ > id=bethelwell&module=ironic-ui > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Sat Aug 25 12:20:33 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Sat, 25 Aug 2018 14:20:33 +0200 Subject: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes In-Reply-To: References: Message-ID: +1 to all On Thu, Aug 23, 2018, 20:25 Julia Kreger wrote: > Greetings everyone! > > In our team meeting this week we stumbled across the subject of > promoting contributors to be sub-project's core reviewers. > Traditionally it is something we've only addressed as needed or > desired by consensus with-in those sub-projects, but we were past due > time to take a look at the entire picture since not everything should > fall to ironic-core. > > And so, I've taken a look at our various repositories and I'm > proposing the following additions: > > For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya > Etingof[1]. 
Ilya has been actively involved with sushy, sushy-tools, > and virtualbmc this past cycle. I've found many of his reviews and > non-voting review comments insightful and willing to understand. He > has taken on some of the effort that is needed to maintain and keep > these tools usable for the community, and as such adding him to the > core group for these repositories makes lots of sense. > > For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2]. > Kaifeng has taken on some hard problems in ironic and > ironic-inspector, as well as brought up insightful feedback in > ironic-specs. They are demonstrating a solid understanding that I only > see growing as time goes on. > > For sushy-core: Debayan Ray[3]. Debayan has been involved with the > community for some time and has worked on sushy from early on in its > life. He has indicated it is near and dear to him, and he has been > actively reviewing and engaging in discussion on patchsets as his time > has permitted. > > With any addition it is good to look at inactivity as well. It saddens > me to say that we've had some contributors move on as priorities have > shifted to where they are no longer involved with the ironic > community. Each person listed below has been inactive for a year or > more and is no longer active in the ironic community. As such I've > removed their group membership from the sub-project core reviewer > groups. Should they return, we will welcome them back to the community > with open arms. 
> > bifrost-core: Stephanie Miller[4] > ironic-inspector-core: Anton Arefivev[5] > ironic-ui-core: Peter Peila[6], Beth Elwell[7] > > Thanks, > > -Julia > > [1]: http://stackalytics.com/?user_id=etingof&metric=marks > [2]: http://stackalytics.com/?user_id=kaifeng&metric=marks > [3]: http://stackalytics.com/?user_id=deray&metric=marks&release=all > [4]: http://stackalytics.com/?metric=marks&release=all&user_id=stephaneeee > [5]: http://stackalytics.com/?user_id=aarefiev&metric=marks > [6]: http://stackalytics.com/?metric=marks&release=all&user_id=ppiela > [7]: > http://stackalytics.com/?metric=marks&release=all&user_id=bethelwell&module=ironic-ui > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Sat Aug 25 15:33:20 2018 From: aj at suse.com (Andreas Jaeger) Date: Sat, 25 Aug 2018 17:33:20 +0200 Subject: [openstack-dev] [fuel] time to retire (parts of) fuel? Message-ID: <6120a9ae-8b57-03d5-065a-2bc468a54a3d@suse.com> I see that many repos have not seen any merges for fuel. Do you want to retire at least parts of it? We have many bitrot jobs for fuel set up and people proposing jobs against it that get no reaction - so, I suggest making its state clear. I see some changes in fuel-devops - but the rest looks really dead. What's your suggestion to move forward? Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr.
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From dharmendra.kushwaha at india.nec.com Mon Aug 27 01:38:19 2018 From: dharmendra.kushwaha at india.nec.com (Dharmendra Kushwaha) Date: Mon, 27 Aug 2018 01:38:19 +0000 Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team In-Reply-To: References: Message-ID: Thanks to all who responded. Phuoc, welcome to the tacker-core team. Thanks & Regards Dharmendra Kushwaha From: Kim Bao, Long [mailto:longkb at vn.fujitsu.com] Sent: 22 August 2018 17:45 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team +1 from me. First of all, I would like to thank Phuoc for his contribution to Tacker. As far as I know, Phuoc joined the Tacker project only a year ago, but his contributions to Tacker are highly appreciated. Besides, he is also one of the most active members on IRC, Gerrit, and bug reports. Hope that he can help Tacker keep growing in his new role. LongKB From: Dharmendra Kushwaha [mailto:dharmendra.kushwaha at india.nec.com] Sent: Wednesday, August 22, 2018 11:21 AM To: openstack-dev > Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team Hi Tacker members, To keep our Tacker project growing with new active members, I would like to propose pruning the +2 ability of our former member Kanagaraj Manickam, and propose Cong Phuoc Hoang (IRC: phuoc) to join the tacker core team. Kanagaraj has not been involved for the last couple of cycles. You made great contributions to the Tacker project, like the VNF scaling features, which are milestones for the project. Thanks for your contributions, and we wish to see you again. Phuoc has been contributing actively to Tacker since the Pike cycle, and he has grown into a key member of this project [1]. He delivered multiple features in each cycle.
Additionally, he does tons of other work like bug fixes and actively answering bug reports. He is also actively contributing to cross projects like tosca-parser and heat-translator, which is very helpful for Tacker. Please vote your +1/-1. [1]: http://stackalytics.com/?project_type=openstack&release=all&metric=commits&module=tacker-group&user_id=hoangphuoc Thanks & Regards Dharmendra Kushwaha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbaker at redhat.com Mon Aug 27 02:01:46 2018 From: sbaker at redhat.com (Steve Baker) Date: Mon, 27 Aug 2018 14:01:46 +1200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C183A4A@EX10MBOX03.pnnl.gov> References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C183A4A@EX10MBOX03.pnnl.gov> Message-ID: On 24/08/18 04:36, Fox, Kevin M wrote: > Or use kubelet in standalone mode. It can be configured for either Cri-o or Docker. You can drive the static manifests from heat/ansible per host as normal and it would be a step in the greater direction of getting to Kubernetes without needing the whole thing at once, if that is the goal. I was an advocate for using kubectl standalone for our container orchestration needs well before we started containerizing TripleO. After talking to a few kubernetes folk I cooled on the idea, because they had one of two responses: - cautious encouragement, but uncertainty about kubectl standalone interface support and consideration for those use cases - googly eyed incomprehension followed by "why would you do that??" This was a while ago now so this could be worth revisiting in the future. We'll be making gradual changes, the first of which is using podman to manage single containers.
However podman has native support for the pod format, so I'm hoping we can switch to that once this transition is complete. Then evaluating kubectl becomes much easier. > > Question. Rather than writing a middle layer to abstract both container engines, couldn't you just use CRI? CRI is CRI-O's native language, and there is support already for Docker as well. We're not writing a middle layer, we're leveraging one which is already there. CRI-O is a socket interface and podman is a CLI interface that both sit on top of the exact same Go libraries. At this point, switching to podman needs a much lower development effort because we're replacing docker CLI calls.
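To make that last point concrete, here is a minimal sketch of the kind of near drop-in CLI swap being described (the container name and image below are hypothetical examples for illustration, not actual TripleO calls):

```shell
# podman deliberately mirrors the docker CLI, so a call like this
# (hypothetical container name and image, for illustration only):
docker run --detach --name demo-svc --net host example.com/demo-svc:latest

# becomes this, with no change to any of the arguments:
podman run --detach --name demo-svc --net host example.com/demo-svc:latest
```

The same holds for the other common verbs (ps, stop, rm, inspect), which is why the swap is mostly a matter of replacing the binary name in the calling code.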
>>>> >>>> I guess my wording wasn't the best but Alex explained way better here: >>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >>>> >>>> If I may have a chance to rephrase, I guess our current intention is to >>>> continue our containerization and investigate how we can improve our >>>> tooling to better orchestrate the containers. >>>> We have a nice interface (openstack/paunch) that allows us to run >>>> multiple container backends, and we're currently looking outside of >>>> Docker to see how we could solve our current challenges with the new tools. >>>> We're looking at CRI-O because it happens to be a project with a great >>>> community, focusing on some problems that we, TripleO have been facing >>>> since we containerized our services. >>>> >>>> We're doing all of this in the open, so feel free to ask any question. >>> I appreciate your response, Emilien, thank you. Alex' responses to >>> Jeremy on the #openstack-tc channel were informative, thank you Alex. >>> >>> For now, it *seems* to me that all of the chosen tooling is very Red Hat >>> centric. Which makes sense to me, considering Triple-O is a Red Hat product. >> Perhaps a slight clarification here is needed. "Director" is a Red Hat >> product. TripleO is an upstream project that is now largely driven by >> Red Hat and is today marked as single vendor. We welcome others to >> contribute to the project upstream just like anybody else. >> >> And for those who don't know the history the TripleO project was once >> multi-vendor as well. So a lot of the abstractions we have in place >> could easily be extended to support distro specific implementation >> details. (Kind of what I view podman as in the scope of this thread). >> >>> I don't know how much of the current reinvention of container runtimes >>> and various tooling around containers is the result of politics. 
I don't >>> know how much is the result of certain companies wanting to "own" the >>> container stack from top to bottom. Or how much is a result of technical >>> disagreements that simply cannot (or will not) be resolved among >>> contributors in the container development ecosystem. >>> >>> Or is it some combination of the above? I don't know. >>> >>> What I *do* know is that the current "NIH du jour" mentality currently >>> playing itself out in the container ecosystem -- reminding me very much >>> of the Javascript ecosystem -- makes it difficult for any potential >>> *consumers* of container libraries, runtimes or applications to be >>> confident that any choice they make towards one of the other will be the >>> *right* choice or even a *possible* choice next year -- or next week. >>> Perhaps this is why things like openstack/paunch exist -- to give you >>> options if something doesn't pan out. >> This is exactly why paunch exists. >> >> Re, the podman thing I look at it as an implementation detail. The >> good news is that given it is almost a parity replacement for what we >> already use we'll still contribute to the OpenStack community in >> similar ways. Ultimately whether you run 'docker run' or 'podman run' >> you end up with the same thing as far as the existing TripleO >> architecture goes. >> >> Dan >> >>> You have a tough job. I wish you all the luck in the world in making >>> these decisions and hope politics and internal corporate management >>> decisions play as little a role in them as possible. 
>>> >>> Best, >>> -jay >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From geng.changcai2 at zte.com.cn Mon Aug 27 02:25:48 2018 From: geng.changcai2 at zte.com.cn (geng.changcai2 at zte.com.cn) Date: Mon, 27 Aug 2018 10:25:48 +0800 (CST) Subject: [openstack-dev] [Freezer] Reactivate the team Message-ID: <201808271025487809975@zte.com.cn> Hi, Kendall: I agree to migrate the freezer project from Launchpad to Storyboard, thanks. By the way, when will privileges be granted for gengchc2 on Launchpad and the project Gerrit repositories?
Best regards, gengchc2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruijing.guo at intel.com Mon Aug 27 02:45:29 2018 From: ruijing.guo at intel.com (Guo, Ruijing) Date: Mon, 27 Aug 2018 02:45:29 +0000 Subject: [openstack-dev] [nova][neutron] numa aware vswitch In-Reply-To: <492b65f562d3deb2f8fcb55b5c981f057b24cfa8.camel@redhat.com> References: <2EE296D083DF2940BF4EBB91D39BB89F3BBF05C0@shsmsx102.ccr.corp.intel.com> <492b65f562d3deb2f8fcb55b5c981f057b24cfa8.camel@redhat.com> Message-ID: <2EE296D083DF2940BF4EBB91D39BB89F3BBF0E3B@shsmsx102.ccr.corp.intel.com> Hi, Stephen, After setting the flavor, the VM was created on node 0 (expected on node 1). How to debug it?

Nova.conf:

[neutron]
physnets = physnet0,physnet1

[neutron_physnet_physnet1]
numa_nodes = 1

openstack network create net1 --external --provider-network-type=vlan --provider-physical-network=physnet1 --provider-segment=200
...
openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic net-id=net1 vm1 1024

available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 64412 MB
node 0 free: 47658 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 64502 MB
node 1 free: 44945 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

Thanks, -Ruijing -----Original Message----- From: Stephen Finucane [mailto:sfinucan at redhat.com] Sent: Saturday, August 25, 2018 12:15 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova][neutron] numa aware vswitch On Fri, 2018-08-24 at 09:13 -0500, Matt Riedemann wrote: > On 8/24/2018 8:58 AM, Stephen Finucane wrote: > > Using this won't add a NUMA topology - it'll just control how any > > topology present will be mapped to the guest. You need to enable > > dedicated CPUs or explicitly request a NUMA topology for this to > > work.
> > > > openstack flavor set --property hw:numa_nodes=1 1 > > > > > > openstack flavor set --property hw:cpu_policy=dedicated 1 > > > > > > This is perhaps something that we could change in the future, though > > I haven't given it much thought yet. > > Looks like the admin guide [1] should be updated to at least refer to > the flavor user guide on setting up these types of flavors? > > [1] > https://docs.openstack.org/nova/latest/admin/networking.html#numa-affi > nity Good idea. https://review.openstack.org/596393 Stephen __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dangtrinhnt at gmail.com Mon Aug 27 02:59:39 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Mon, 27 Aug 2018 11:59:39 +0900 Subject: [openstack-dev] [Freezer] Reactivate the team In-Reply-To: <201808271025487809975@zte.com.cn> References: <201808271025487809975@zte.com.cn> Message-ID: @Kendall: please help the Freezer team. Thanks. @gengchc2: I think you should send an email to the TC and ask for help. The Freezer core team seems to be inactive.
> > > > Best regards, > > gengchc2 > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Mon Aug 27 03:32:23 2018 From: ramishra at redhat.com (Rabi Mishra) Date: Mon, 27 Aug 2018 09:02:23 +0530 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C183A4A@EX10MBOX03.pnnl.gov> Message-ID: On Mon, Aug 27, 2018 at 7:31 AM, Steve Baker wrote: > > > On 24/08/18 04:36, Fox, Kevin M wrote: > >> Or use kubelet in standalone mode. It can be configured for either Cri-o >> or Docker. You can drive the static manifests from heat/ansible per host as >> normal and it would be a step in the greater direction of getting to >> Kubernetes without needing the whole thing at once, if that is the goal. >> > > I was an advocate for using kubectl standalone for our container > orchestration needs well before we started containerizing TripleO. After > talking to a few kubernetes folk I cooled on the idea, because they had one > of two responses: > - cautious encouragement, but uncertainty about kubectl standalone > interface support and consideration for those use cases > - googly eyed incomprehension followed by "why would you do that??" > > AFAIK, kubelet does not have a good set of REST API yet[1], but things like heapster do directly interface with kubelet. 
Last I've seen there was no general consensus for kubelet to provide a subset of api-server APIs. However, from a TripleO standpoint, providing a set of pod specs to kubelet generated by ansible may be sufficient? [1] https://github.com/kubernetes/kubernetes/issues/28138 > > This was a while ago now so this could be worth revisiting in the future. > We'll be making gradual changes, the first of which is using podman to > manage single containers. However podman has native support for the pod > format, so I'm hoping we can switch to that once this transition is > complete. Then evaluating kubectl becomes much easier. > > Question. Rather than writing a middle layer to abstract both container >> engines, couldn't you just use CRI? CRI is CRI-O's native language, and >> there is support already for Docker as well. >> > > We're not writing a middle layer, we're leveraging one which is already > there. > > CRI-O is a socket interface and podman is a CLI interface that both sit on > top of the exact same Go libraries. At this point, switching to podman > needs a much lower development effort because we're replacing docker CLI > calls. > > I see good value in evaluating kubelet standalone and leveraging its inbuilt gRPC interfaces with cri-o (rather than using podman) as a long-term strategy, unless we just want to provide an alternative to the docker container runtime with cri-o. > >> Thanks, >> Kevin >> ________________________________________ >> From: Jay Pipes [jaypipes at gmail.com] >> Sent: Thursday, August 23, 2018 8:36 AM >> To: openstack-dev at lists.openstack.org >> Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice >> API calls >> >> Dan, thanks for the details and answers. Appreciated.
>> >> Best, >> -jay >> >> On 08/23/2018 10:50 AM, Dan Prince wrote: >> >>> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes wrote: >>> >>>> On 08/15/2018 04:01 PM, Emilien Macchi wrote: >>>> >>>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi >>>> > wrote: >>>>> >>>>> More seriously here: there is an ongoing effort to converge the >>>>> tools around containerization within Red Hat, and we, TripleO are >>>>> interested to continue the containerization of our services >>>>> (which >>>>> was initially done with Docker & Docker-Distribution). >>>>> We're looking at how these containers could be managed by k8s one >>>>> day but way before that we plan to swap out Docker and join CRI-O >>>>> efforts, which seem to be using Podman + Buildah (among other >>>>> things). >>>>> >>>>> I guess my wording wasn't the best but Alex explained way better here: >>>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23op >>>>> enstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >>>>> >>>>> If I may have a chance to rephrase, I guess our current intention is to >>>>> continue our containerization and investigate how we can improve our >>>>> tooling to better orchestrate the containers. >>>>> We have a nice interface (openstack/paunch) that allows us to run >>>>> multiple container backends, and we're currently looking outside of >>>>> Docker to see how we could solve our current challenges with the new >>>>> tools. >>>>> We're looking at CRI-O because it happens to be a project with a great >>>>> community, focusing on some problems that we, TripleO have been facing >>>>> since we containerized our services. >>>>> >>>>> We're doing all of this in the open, so feel free to ask any question. >>>>> >>>> I appreciate your response, Emilien, thank you. Alex' responses to >>>> Jeremy on the #openstack-tc channel were informative, thank you Alex. >>>> >>>> For now, it *seems* to me that all of the chosen tooling is very Red Hat >>>> centric. 
Which makes sense to me, considering Triple-O is a Red Hat >>>> product. >>>> >>> Perhaps a slight clarification here is needed. "Director" is a Red Hat >>> product. TripleO is an upstream project that is now largely driven by >>> Red Hat and is today marked as single vendor. We welcome others to >>> contribute to the project upstream just like anybody else. >>> >>> And for those who don't know the history the TripleO project was once >>> multi-vendor as well. So a lot of the abstractions we have in place >>> could easily be extended to support distro specific implementation >>> details. (Kind of what I view podman as in the scope of this thread). >>> >>> I don't know how much of the current reinvention of container runtimes >>>> and various tooling around containers is the result of politics. I don't >>>> know how much is the result of certain companies wanting to "own" the >>>> container stack from top to bottom. Or how much is a result of technical >>>> disagreements that simply cannot (or will not) be resolved among >>>> contributors in the container development ecosystem. >>>> >>>> Or is it some combination of the above? I don't know. >>>> >>>> What I *do* know is that the current "NIH du jour" mentality currently >>>> playing itself out in the container ecosystem -- reminding me very much >>>> of the Javascript ecosystem -- makes it difficult for any potential >>>> *consumers* of container libraries, runtimes or applications to be >>>> confident that any choice they make towards one of the other will be the >>>> *right* choice or even a *possible* choice next year -- or next week. >>>> Perhaps this is why things like openstack/paunch exist -- to give you >>>> options if something doesn't pan out. >>>> >>> This is exactly why paunch exists. >>> >>> Re, the podman thing I look at it as an implementation detail. 
The >>> good news is that given it is almost a parity replacement for what we >>> already use we'll still contribute to the OpenStack community in >>> similar ways. Ultimately whether you run 'docker run' or 'podman run' >>> you end up with the same thing as far as the existing TripleO >>> architecture goes. >>> >>> Dan >>> >>> You have a tough job. I wish you all the luck in the world in making >>>> these decisions and hope politics and internal corporate management >>>> decisions play as little a role in them as possible. >>>> >>>> Best, >>>> -jay >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Regards, Rabi Mishra From shiina.hironori at jp.fujitsu.com Mon Aug 27 06:27:43 2018 From: shiina.hironori at jp.fujitsu.com (Shiina, Hironori) Date: Mon, 27 Aug 2018 06:27:43 +0000 Subject: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes In-Reply-To: References: Message-ID: +1 Hironori > -----Original Message----- > From: Julia Kreger [mailto:juliaashleykreger at gmail.com] > Sent: Friday, August 24, 2018 3:24 AM > To: OpenStack Development Mailing List (not for usage questions) > Subject: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] > sub-project/repository core reviewer changes > > Greetings everyone! > > In our team meeting this week we stumbled across the subject of > promoting contributors to be sub-project's core reviewers. > Traditionally it is something we've only addressed as needed or > desired by consensus with-in those sub-projects, but we were past due > time to take a look at the entire picture since not everything should > fall to ironic-core. > > And so, I've taken a look at our various repositories and I'm > proposing the following additions: > > For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya > Etingof[1]. Ilya has been actively involved with sushy, sushy-tools, > and virtualbmc this past cycle. I've found many of his reviews and > non-voting review comments insightful and willing to understand.
He > has taken on some of the effort that is needed to maintain and keep > these tools usable for the community, and as such adding him to the > core group for these repositories makes lots of sense. > > For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2]. > Kaifeng has taken on some hard problems in ironic and > ironic-inspector, as well as brought up insightful feedback in > ironic-specs. They are demonstrating a solid understanding that I only > see growing as time goes on. > > For sushy-core: Debayan Ray[3]. Debayan has been involved with the > community for some time and has worked on sushy from early on in its > life. He has indicated it is near and dear to him, and he has been > actively reviewing and engaging in discussion on patchsets as his time > has permitted. > > With any addition it is good to look at inactivity as well. It saddens > me to say that we've had some contributors move on as priorities have > shifted to where they are no longer involved with the ironic > community. Each person listed below has been inactive for a year or > more and is no longer active in the ironic community. As such I've > removed their group membership from the sub-project core reviewer > groups. Should they return, we will welcome them back to the community > with open arms. 
> > bifrost-core: Stephanie Miller[4] > ironic-inspector-core: Anton Arefiev[5] > ironic-ui-core: Peter Peila[6], Beth Elwell[7] > > Thanks, > > -Julia > > [1]: http://stackalytics.com/?user_id=etingof&metric=marks > [2]: http://stackalytics.com/?user_id=kaifeng&metric=marks > [3]: http://stackalytics.com/?user_id=deray&metric=marks&release=all > [4]: http://stackalytics.com/?metric=marks&release=all&user_id=stephaneeee > [5]: http://stackalytics.com/?user_id=aarefiev&metric=marks > [6]: http://stackalytics.com/?metric=marks&release=all&user_id=ppiela > [7]: http://stackalytics.com/?metric=marks&release=all&user_id=bethelwell&module=ironic-ui From chkumar246 at gmail.com Mon Aug 27 06:27:58 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Mon, 27 Aug 2018 11:57:58 +0530 Subject: [openstack-dev] [TripleO][kolla-ansible][DevStack][Tempest][openstack-ansible] Collaborate towards creating a unified ansible tempest role in openstack-ansible project Message-ID: Hello, A few days back, Alex initiated a conversation about sharing ansible roles [1] across different projects. It is a nice idea, and it encourages a lot of collaboration among different projects within the OpenStack community. Tempest provides the integration test suite for validating any deployed OpenStack cloud, and we use ansible roles for installing/configuring/running Tempest everywhere from the DevStack-based Tempest Zuul CI jobs to TripleO, OpenStack-Ansible and kolla-ansible. All of these deployment tools have their own roles for installing/configuring/running Tempest, and those roles do very similar tasks.
I think it's a good opportunity for us to collaborate towards creating a unified ansible role in the openstack-ansible project by re-using and modifying the existing roles and then aggregating the result into the openstack-ansible-os_tempest [2] project. I have summarized the problem statement and requirements on this etherpad [3]. Feel free to add your requirements and questions on the etherpad so that we can shape the unified ansible role in a better way. Links: 1. http://lists.openstack.org/pipermail/openstack-dev/2018-August/133119.html 2. https://github.com/openstack/openstack-ansible-os_tempest 3. https://etherpad.openstack.org/p/ansible-tempest-role Thanks, Chandan Kumar From mkr1481 at gmail.com Mon Aug 27 06:46:11 2018 From: mkr1481 at gmail.com (Kanagaraj Manickam) Date: Mon, 27 Aug 2018 12:16:11 +0530 Subject: [openstack-dev] [Tacker] Proposing changes in Tacker Core Team In-Reply-To: References: Message-ID: +1. Thanks to the Tacker community for providing the opportunity to serve as a committer/active contributor. On Wed, Aug 22, 2018 at 9:52 AM Dharmendra Kushwaha < dharmendra.kushwaha at india.nec.com> wrote: > Hi Tacker members, > > > > To keep our Tacker project growing with new active members, I would like > > to propose to prune the +2 ability of our former member Kanagaraj Manickam, > > and propose Cong Phuoc Hoang (IRC: phuoc) to join the tacker core team. > > > > Kanagaraj has not been involved for the last couple of cycles. You made great > > contributions to the Tacker project, like the VNF scaling features, which are > milestones > > for the project. Thanks for your contribution, and we wish to see you again. > > > > Phuoc has been contributing actively to Tacker since the Pike cycle, and > > he has grown into a key member of this project [1]. He delivered multiple > > features in each cycle. Additionally, he handles tons of other activities like bug > fixes and > > actively answering questions on bugs.
He is also actively contributing to cross-project > > work like tosca-parser and heat-translator, which is very helpful for Tacker. > > > > Please vote +1/-1. > > > > [1]: > http://stackalytics.com/?project_type=openstack&release=all&metric=commits&module=tacker-group&user_id=hoangphuoc > > > > Thanks & Regards > > Dharmendra Kushwaha From dangtrinhnt at gmail.com Mon Aug 27 07:10:11 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Mon, 27 Aug 2018 16:10:11 +0900 Subject: [openstack-dev] [Searchlight] Team meeting next week In-Reply-To: References: Message-ID: Hi team, This is a kind reminder of our meeting next Thursday, 15:00 UTC. Please see below for meeting details. Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Sat, Aug 25, 2018 at 1:11 PM Trinh Nguyen wrote: > Dear team, > > I would like to organize a team meeting on Thursday next week: > > - Date: 30 August 2018 > - Time: 15:00 UTC > - Channel: #openstack-meeting-4 > > All existing core members and new contributors are welcome. > > Here is the Searchlight's Etherpad for Stein; all ideas are welcome: > > https://etherpad.openstack.org/p/searchlight-stein-ptg > > Please reply or ping me on IRC (#openstack-searchlight, dangtrinhnt) if > you want to join. > > Bests, > > *Trinh Nguyen *| Founder & Chief Architect > > > > *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dtantsur at redhat.com Mon Aug 27 09:03:19 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 27 Aug 2018 11:03:19 +0200 Subject: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad) In-Reply-To: References: Message-ID: Hi, Some additions inline. On 08/20/2018 10:47 PM, James Slagle wrote: > As we start looking at how TripleO will address next generation deployment > needs such as Edge, multi-site, and multi-cloud, I'd like to kick off a > discussion around how TripleO can evolve and adapt to meet these new > challenges. > > What are these challenges? I think the OpenStack Edge Whitepaper does a good > job summarizing some of them: > > https://www.openstack.org/assets/edge/OpenStack-EdgeWhitepaper-v3-online.pdf > > They include: > > - management of distributed infrastructure > - massive scale (thousands instead of hundreds) > - limited network connectivity > - isolation of distributed sites > - orchestration of federated services across multiple sites > > We already have a lot of ongoing work that directly or indirectly starts to > address some of these challenges. That work includes things like > config-download, split-controlplane, metalsmith integration, validations, > all-in-one, and standalone. > > I laid out some initial ideas in a previous message: > > http://lists.openstack.org/pipermail/openstack-dev/2018-July/132398.html > > I'll be reviewing some of that here and going into a bit more detail. > > These are some of the high level ideas I'd like to see TripleO start to > address: > > - More separation between planning and deploying (likely to be further defined > in spec discussion). We've had these concepts for a while, but we need to do > a better job of surfacing them to users as deployments grow in size and > complexity. > > With config-download, we can more easily separate the phases of rendering, > downloading, validating, and applying the configuration. 
As we increase in > scale to managing many deployments, we should take advantage of what each of > those phases offer. > > The separation also makes the deployment more portable, as we should > eliminate any restrictions that force the undercloud to be the control node > applying the configuration. > > - Management of multiple deployments from a single undercloud. This is of > course already possible today, but we need better docs and polish and more > testing to flush out any bugs. > > - Plan and template management in git. > > This could be an iterative step towards eliminating Swift in the undercloud. > Swift seemed like a natural choice at the time because it was an existing > OpenStack service. However, I think git would do a better job at tracking > history and comparing changes and is much more lightweight than Swift. We've > been managing the config-download directory as a git repo, and I like this > direction. For now, we are just putting the whole git repo in Swift, but I > wonder if it makes sense to consider eliminating Swift entirely. We need to > consider the scale of managing thousands of plans for separate edge > deployments. > > I also think this would be a step towards undercloud simplification. > > - Orchestration between plans. I think there's general agreement around scaling > up the undercloud to be more effective at managing and deploying multiple > plans. > > The plans could be different OpenStack deployments potentially sharing some > resources. Or, they could be deployments of different software stacks > (Kubernetes/OpenShift, Ceph, etc). > > We'll need to develop some common interfaces for some basic orchestration > between plans. It could include dependencies, ordering, and sharing parameter > data (such as passwords or connection info). 
There is already some ongoing > discussion about some of this work: > > http://lists.openstack.org/pipermail/openstack-dev/2018-August/133247.html > > I would suspect this would start out as collecting specific use cases, and > then figuring out the right generic interfaces. > > - Multiple deployments of a single plan. This could be useful for doing many > deployments that are all the same. Of course some info might be different > such as network IP's, hostnames, and node specific details. We could have > some generic input interfaces for those sorts of things without having to > create new Heat stacks, which would allow re-using the same plan/stack for > multiple deployments. When scaling to hundreds/thousands of edge deployments > this could be really effective at side-stepping managing hundreds/thousands > of Heat stacks. > > We may also need further separation between a plan and its deployment state > to have this modularity. > > - Distributed management/application of configuration. Even though the > configuration is portable (config-download), we may still want some > automation around applying the deployment when not using the undercloud as a > control node. I think things like ansible-runner or Ansible AWX could help > here, or perhaps mistral-executor agents, or "mistral as a library". This > would also make our workflows more portable. > > - New documentation highlighting some or all of the above features and how to > take advantage of it for new use cases (thousands of edge deployments, etc). > I see this as a sort of "TripleO Edge Deployment Guide" that would highlight > how to take advantage of TripleO for Edge/multi-site use cases. I would like to also consider a distributed undercloud. For example, we have a central management node at Location0 where it all starts. Then we have more management nodes at Location1 and Location2. We deploy the undercloud on all three: 1. The one at Location0 is a typical undercloud. 2.
The two at Location{1,2} contain ironic-api, ironic-conductor, ironic-inspector and neutron-dhcp-agent. The conductors have their conductor_group [*] set to Location1 and Location2 accordingly. The conductor in Location0 is left with the default (empty string). Then we can install stuff at locations. We enroll nodes in ironic using the conductor_group matching their location. The TFTP, iPXE, DHCP and IPMI/Redfish traffic will thus be contained within a location. I think the routed ctlplane feature from Queens will allow us to do networking correctly otherwise. With the metalsmith switch (if we ever move forward with it, wink-wink) we will not have problems with explaining Nova the notion of locations. We can just extend metalsmith to understand conductor_group as a valid scheduling hint. Any thoughts? [*] Introduced in the Rocky cycle, the conductor group feature allows defining affinity between nodes and ironic-conductor instances, so that nodes with a conductor_group set are only managed by conductors with the same conductor_group. > > Obviously all the ideas are a lot of work, and not something I think we'll > complete in a single cycle. > > I'd like to pull a squad together focused on Edge/multi-site/multi-cloud and > TripleO. On that note, this squad could also work together with other > deployment projects that are looking at similar use cases and look to > collaborate. > > If you're interested in working on this squad, I'd see our first tasks as > being: > > - Brainstorming additional ideas to the above > - Breaking down ideas into actionable specs/blueprints for stein (and possibly > future releases). > - Coming up with a consistent message around direction and vision for solving > these deployment challenges. > - Bringing together ongoing work that relates to these use cases together so > that we're all collaborating with shared vision and purpose and we can help > prioritize reviews/ci/etc. 
> - Identifying any discussion items we need to work through in person at the > upcoming Denver PTG. Count me in (modulo the PTG). Dmitry > > I'm happy to help facilitate the squad. If you have any feedback on these ideas > or would like to join the squad, reply to the thread or sign up in the > etherpad: > > https://etherpad.openstack.org/p/tripleo-edge-squad-status > > I'm just referring to the squad as "Edge" for now, but we can also pick a > cooler owl themed name :). > From geguileo at redhat.com Mon Aug 27 09:06:18 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 27 Aug 2018 11:06:18 +0200 Subject: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: <20180825115153.GA3623@sm-workstation> References: <8f84d6ce-dc05-d126-9309-b84a97625c8c@gmail.com> <20180825115153.GA3623@sm-workstation> Message-ID: <20180827090618.atelvj4j3jcx3yyq@localhost> On 25/08, Sean McGinnis wrote: > On Fri, Aug 24, 2018 at 04:20:21PM -0500, Matt Riedemann wrote: > > On 8/20/2018 10:29 AM, Matthew Booth wrote: > > > Secondly, is there any reason why we shouldn't just document that you > > > have to delete snapshots before doing a volume migration? Hopefully > > > some cinder folks or operators can chime in to let me know how to back > > > them up or somehow make them independent before doing this, at which > > > point the volume itself should be migratable? > > > > Coincidentally the volume migration API never had API reference > > documentation. I have that here now [1]. It clearly states the preconditions > > to migrate a volume based on code in the volume API. However, volume > > migration is admin-only by default and retype (essentially like resize) is > > admin-or-owner so non-admins can do it and specify to migrate.
In general I > > think it's best to have preconditions for *any* API documented, so anything > > needed to perform a retype should be documented in the API, like that the > > volume can't have snapshots. > > That's where things get tricky though. There aren't really preconditions we can > have as a blanket statement with the retype API. > > A retype can do a lot of different things, all dependent on what type you are > coming from and trying to go to. There are some retypes where all it does is > enable vendor flag ``foo`` on the volume with no change in any other state. > Then there are other retypes (using --migrate-policy on-demand) that completely > move the volume from one backend to another one, copying every block along the > way from the original to the new volume. It really depends on what types you > are trying to retype to. > We can say that retypes that require migration between different vendor backends cannot be performed with snapshots, and between arrays from the same vendor will depend on the driver (though I don't know if any driver can actually pull this off). Cheers, Gorka.
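To make the constraint under discussion concrete, here is a toy sketch in plain Python (not Cinder's actual code; `Volume`, `check_retype` and the vendor strings are invented for illustration) of the precondition described above: a retype that implies a cross-backend migration between different vendors is rejected while the volume still has snapshots, while other retypes are allowed.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Volume:
    name: str
    backend_vendor: str                       # vendor of the backend currently holding the volume
    snapshots: List[str] = field(default_factory=list)

def retype_requires_migration(src_vendor: str, dst_vendor: str) -> bool:
    # Per the discussion above: a retype between different vendor backends
    # implies a full data migration; same-vendor retypes may be handled
    # array-side (driver dependent).
    return src_vendor != dst_vendor

def check_retype(volume: Volume, dst_vendor: str) -> bool:
    """Return True if the retype may proceed, False if it must be rejected."""
    if retype_requires_migration(volume.backend_vendor, dst_vendor):
        # Mirrors the documented precondition: a volume with snapshots
        # cannot be migrated to another backend.
        return not volume.snapshots
    return True

vol = Volume("vol1", "vendorA", snapshots=["snap1"])
print(check_retype(vol, "vendorB"))  # False: cross-vendor move with snapshots
print(check_retype(vol, "vendorA"))  # True: same-vendor retype
```

The point of the sketch is only that the check depends on *both* the source/destination pairing and the volume's snapshot state, which is why a single blanket precondition is hard to document.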
> > > > [1] https://review.openstack.org/#/c/595379/ > > > > -- > > > > Thanks, > > > > Matt From geguileo at redhat.com Mon Aug 27 09:16:22 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 27 Aug 2018 11:16:22 +0200 Subject: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: <374f9a6e-9bea-d047-8e99-c56b1612def9@gmail.com> References: <20180822094620.kncry4ufbe6fwi5u@localhost> <374f9a6e-9bea-d047-8e99-c56b1612def9@gmail.com> Message-ID: <20180827091622.irdotiyzticb6fsh@localhost> On 24/08, Matt Riedemann wrote: > On 8/22/2018 4:46 AM, Gorka Eguileor wrote: > > The solution is conceptually simple. We add a new API microversion in > > Cinder that adds an optional parameter called "generic_keep_source" > > (defaults to False) to both migrate and retype operations. > > But if the problem is that users are not using the retype API and instead > are hitting the compute swap volume API, they won't use this new > parameter anyway. Again, retype is admin-or-owner but volume migration (in > cinder) and swap volume (in nova) are both admin-only, so are admins calling > swap volume directly or are people easing up the policy restrictions so > non-admins can use these migration APIs? > > -- > > Thanks, > > Matt Hi,
Nova needs to fix that issue, a good option could be not allowing non Cinder callers to do that operation. As mbooth mentioned, these issues come from real user needs, and this is a different topic, and the topic I was talking about with the proposed solution. I agree with mbooth that we should find a way to address these customer needs, even if it's in a limited way. Cheers, Gorka. From work at seanmooney.info Mon Aug 27 09:24:37 2018 From: work at seanmooney.info (Sean Mooney) Date: Mon, 27 Aug 2018 10:24:37 +0100 Subject: [openstack-dev] [nova][neutron] numa aware vswitch In-Reply-To: <2EE296D083DF2940BF4EBB91D39BB89F3BBF0E3B@shsmsx102.ccr.corp.intel.com> References: <2EE296D083DF2940BF4EBB91D39BB89F3BBF05C0@shsmsx102.ccr.corp.intel.com> <492b65f562d3deb2f8fcb55b5c981f057b24cfa8.camel@redhat.com> <2EE296D083DF2940BF4EBB91D39BB89F3BBF0E3B@shsmsx102.ccr.corp.intel.com> Message-ID: On Mon 27 Aug 2018, 04:20 Guo, Ruijing, wrote: > Hi, Stephen, > > After setting flavor, VM was created in node 0 (expect in node1). How to > debug it? > > Nova.conf > [neutron] > physnets = physnet0,physnet1 > > [neutron_physnet_physnet1] > numa_nodes = 1 > Have you enabled the numa topology filter its off by default and without it the numa aware vswitch code is disabled. > > openstack network create net1 --external --provider-network-type=vlan > --provider-physical-network=physnet1 --provider-segment=200 > ... 
> openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic > net-id=net1 vm1 > > > 1024 > > > > > available: 2 nodes (0-1) > node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23 > node 0 size: 64412 MB > node 0 free: 47658 MB > node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31 > node 1 size: 64502 MB > node 1 free: 44945 MB > node distances: > node 0 1 > 0: 10 21 > 1: 21 10 > > Thanks, > -Ruijing > > -----Original Message----- > From: Stephen Finucane [mailto:sfinucan at redhat.com] > Sent: Saturday, August 25, 2018 12:15 AM > To: OpenStack Development Mailing List (not for usage questions) < > openstack-dev at lists.openstack.org> > Subject: Re: [openstack-dev] [nova][neutron] numa aware vswitch > > On Fri, 2018-08-24 at 09:13 -0500, Matt Riedemann wrote: > > On 8/24/2018 8:58 AM, Stephen Finucane wrote: > > > Using this won't add a NUMA topology - it'll just control how any > > > topology present will be mapped to the guest. You need to enable > > > dedicated CPUs or explicitly request a NUMA topology for this to > > > work. > > > > > > openstack flavor set --property hw:numa_nodes=1 1 > > > > > > > > > > > > openstack flavor set --property hw:cpu_policy=dedicated 1 > > > > > > > > > This is perhaps something that we could change in the future, though > > > I haven't given it much thought yet. > > > > Looks like the admin guide [1] should be updated to at least refer to > > the flavor user guide on setting up these types of flavors? > > > > [1] > > https://docs.openstack.org/nova/latest/admin/networking.html#numa-affinity > > Good idea.
> > https://review.openstack.org/596393 > > Stephen From geguileo at redhat.com Mon Aug 27 09:32:10 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 27 Aug 2018 11:32:10 +0200 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <880e2ff0-cf3a-7d6d-a805-816464858aee@gmail.com> References: <20180823104210.kgctxfjiq47uru34@localhost> <20180823170756.sz5qj2lxdy4i4od2@localhost> <880e2ff0-cf3a-7d6d-a805-816464858aee@gmail.com> Message-ID: <20180827093210.rgrgcrkggfims53j@localhost> On 24/08, Jay S Bryant wrote: > > > On 8/23/2018 12:07 PM, Gorka Eguileor wrote: > > On 23/08, Dan Smith wrote: > > > > I think Nova should never have to rely on Cinder's hosts/backends > > > > information to do migrations or any other operation. > > > > > > > > In this case even if Nova had that info, it wouldn't be the solution. > > > > Cinder would reject migrations if there's an incompatibility on the > > > > Volume Type (AZ, Referenced backend, capabilities...) > > > I think I'm missing a bunch of cinder knowledge required to fully grok > > > this situation and probably need to do some reading. Is there some > > > reason that a volume type can't exist in multiple backends or something?
> > > I guess I think of volume type as flavor, and the same definition in two > > > places would be interchangeable -- is that not the case? > > > > > Hi, > > > > I just know the basics of flavors, and they are kind of similar, though > > I'm sure there are quite a few differences. > > > > Sure, multiple storage arrays can meet the requirements of a Volume > > Type, but then when you create the volume you don't know where it's > > going to land. If your volume type is too generic your volume could land > > somewhere your cell cannot reach. > > > > > > > > I don't know anything about Nova cells, so I don't know the specifics of > > > > how we could do the mapping between them and Cinder backends, but > > > > considering the limited range of possibilities in Cinder I would say we > > > > only have Volume Types and AZs to work a solution. > > > I think the only mapping we need is affinity or distance. The point of > > > needing to migrate the volume would purely be because moving cells > > > likely means you moved physically farther away from where you were, > > > potentially with different storage connections and networking. It > > > doesn't *have* to mean that, but I think in reality it would. So the > > > question I think Matt is looking to answer here is "how do we move an > > > instance from a DC in building A to building C and make sure the > > > volume gets moved to some storage local in the new building so we're > > > not just transiting back to the original home for no reason?" > > > > > > Does that explanation help or are you saying that's fundamentally hard > > > to do/orchestrate? > > > > > > Fundamentally, the cells thing doesn't even need to be part of the > > > discussion, as the same rules would apply if we're just doing a normal > > > migration but need to make sure that storage remains affined to compute. > > > > > We could probably work something out using the affinity filter, but > > right now we don't have a way of doing what you need.
> > > > We could probably rework the migration to accept scheduler hints to be > > used with the affinity filter and to accept calls with the host or the > > hints, that way it could migrate a volume without knowing the > > destination host and decide it based on affinity. > > > > We may have to do more modifications, but it could be a way to do it. > > > > > > > > > > I don't know how the Nova Placement works, but it could hold an > > > > equivalency mapping of volume types to cells as in: > > > > > > > > Cell#1 Cell#2 > > > > > > > > VolTypeA <--> VolTypeD > > > > VolTypeB <--> VolTypeE > > > > VolTypeC <--> VolTypeF > > > > > > > > Then it could do volume retypes (allowing migration) and that would > > > > properly move the volumes from one backend to another. > > > The only way I can think that we could do this in placement would be if > > > volume types were resource providers and we assigned them traits that > > > had special meaning to nova indicating equivalence. Several of the words > > > in that sentence are likely to freak out placement people, myself > > > included :) > > > > > > So is the concern just that we need to know what volume types in one > > > backend map to those in another so that when we do the migration we know > > > what to ask for? Is "they are the same name" not enough? Going back to > > > the flavor analogy, you could kinda compare two flavor definitions and > > > have a good idea if they're equivalent or not... > > > > > > --Dan > > In Cinder you don't get that from Volume Types, unless all your backends > > have the same hardware and are configured exactly the same. > > > > There can be some storage specific information there, which doesn't > > correlate to anything on other hardware. Volume types may refer to a > > specific pool that has been configured in the array to use specific type > > of disks. But even the info on the type of disks is unknown to the > > volume type. 
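The equivalency mapping proposed in the quoted text above could be modelled as a simple symmetric lookup table. This is a toy illustration in plain Python mirroring the Cell#1/Cell#2 table from the thread, not an actual Placement or Cinder interface; all names are invented:

```python
# Hypothetical equivalence map between volume types in two cells,
# mirroring the table quoted in the message above.
EQUIVALENT_TYPES = {
    ("Cell#1", "VolTypeA"): ("Cell#2", "VolTypeD"),
    ("Cell#1", "VolTypeB"): ("Cell#2", "VolTypeE"),
    ("Cell#1", "VolTypeC"): ("Cell#2", "VolTypeF"),
}
# Make the mapping symmetric so lookups work in both directions.
EQUIVALENT_TYPES.update({v: k for k, v in list(EQUIVALENT_TYPES.items())})

def target_volume_type(src_cell: str, src_type: str, dst_cell: str) -> str:
    """Return the equivalent volume type in dst_cell, for use in a retype."""
    dst = EQUIVALENT_TYPES.get((src_cell, src_type))
    if dst is None or dst[0] != dst_cell:
        raise LookupError(f"no equivalent of {src_type} in {dst_cell}")
    return dst[1]

print(target_volume_type("Cell#1", "VolTypeB", "Cell#2"))  # VolTypeE
```

As the follow-up points out, the hard part is not the lookup itself but deciding what "equivalent" means when types carry backend-specific configuration, so a table like this would have to be maintained by the operator rather than derived automatically.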
> > > > I haven't checked the PTG agenda yet, but is there a meeting on this? > > Because we may want to have one to try to understand the requirements > > and figure out if there's a way to do it with current Cinder > > functionality or if we'd need something new. > Gorka, > > I don't think that this has been put on the agenda yet.  Might be good to > add.  I don't think we have a cross-project time officially planned with > Nova.  I will start that discussion with Melanie so that we can cover the > couple of cross-project subjects we have. > > Jay Thanks Jay! > > > Cheers, > > Gorka. > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sfinucan at redhat.com Mon Aug 27 09:36:54 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Mon, 27 Aug 2018 10:36:54 +0100 Subject: [openstack-dev] [nova][neutron] numa aware vswitch In-Reply-To: References: <2EE296D083DF2940BF4EBB91D39BB89F3BBF05C0@shsmsx102.ccr.corp.intel.com> <492b65f562d3deb2f8fcb55b5c981f057b24cfa8.camel@redhat.com> <2EE296D083DF2940BF4EBB91D39BB89F3BBF0E3B@shsmsx102.ccr.corp.intel.com> Message-ID: <880a748657d5208aba9e24b4ed3e44c0879add61.camel@redhat.com> On Mon, 2018-08-27 at 10:24 +0100, Sean Mooney wrote: > > > On Mon 27 Aug 2018, 04:20 Guo, Ruijing, wrote: > > Hi, Stephen, > > > > After setting the flavor, the VM was created in node 0 (expected in node 1). How to debug it? 
> > > > Nova.conf > > [neutron] > > physnets = physnet0,physnet1 > > > > [neutron_physnet_physnet1] > > numa_nodes = 1 > > Have you enabled the NUMA topology filter? It's off by default, and without it the NUMA-aware vswitch code is disabled. Yeah, make sure this is enabled. You should turn on debug-level logging as this will give you additional information about how things are being scheduled. Also, is this a new deployment? If not, you're going to need to upgrade and restart all the nova-* services since there are object changes which will need to be propagated. Stephen > > openstack network create net1 --external --provider-network-type=vlan --provider-physical-network=physnet1 --provider-segment=200 > > ... > > openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic net-id=net1 vm1 > > > > > > 1024 > > > > > > > > > > available: 2 nodes (0-1) > > node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23 > > node 0 size: 64412 MB > > node 0 free: 47658 MB > > node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31 > > node 1 size: 64502 MB > > node 1 free: 44945 MB > > node distances: > > node 0 1 > > 0: 10 21 > > 1: 21 10 > > > > Thanks, > > -Ruijing > > > > -----Original Message----- > > From: Stephen Finucane [mailto:sfinucan at redhat.com] > > Sent: Saturday, August 25, 2018 12:15 AM > > To: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [nova][neutron] numa aware vswitch > > > > On Fri, 2018-08-24 at 09:13 -0500, Matt Riedemann wrote: > > > On 8/24/2018 8:58 AM, Stephen Finucane wrote: > > > > Using this won't add a NUMA topology - it'll just control how any > > > > topology present will be mapped to the guest. You need to enable > > > > dedicated CPUs or explicitly request a NUMA topology for this to > > > > work. 
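Pulling the pieces of this sub-thread together, the controller-side nova.conf might look roughly like the fragment below. This is a sketch only: the enabled_filters list shown is illustrative (copy your deployment's existing list and append NUMATopologyFilter), and the physnet names are taken from Ruijing's config above.

```ini
[filter_scheduler]
# NUMATopologyFilter is not enabled by default; without it the
# NUMA-aware vswitch scheduling is effectively disabled. Append it
# to whatever filters your deployment already enables.
enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter

[neutron]
physnets = physnet0,physnet1

[neutron_physnet_physnet1]
# Pin ports on physnet1 to host NUMA node 1
numa_nodes = 1
```

The flavor also needs a NUMA topology (via hw:numa_nodes or hw:cpu_policy=dedicated, as Stephen shows), since the affinity only applies to guests that have one.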
> > > > > > > > openstack flavor set --property hw:numa_nodes=1 1 > > > > > > > > > > > > > > > > openstack flavor set --property hw:cpu_policy=dedicated 1 > > > > > > > > > > > > This is perhaps something that we could change in the future, though > > > > I haven't given it much thought yet. > > > > > > Looks like the admin guide [1] should be updated to at least refer to > > > the flavor user guide on setting up these types of flavors? > > > > > > [1] > > > https://docs.openstack.org/nova/latest/admin/networking.html#numa-affi > > > nity > > > > Good idea. > > > > https://review.openstack.org/596393 > > > > Stephen From sgolovat at redhat.com Mon Aug 27 09:55:16 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Mon, 27 Aug 2018 11:55:16 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C183A4A@EX10MBOX03.pnnl.gov> Message-ID: Hi, On Mon, Aug 27, 2018 at 5:32 AM, Rabi Mishra wrote: > On Mon, Aug 27, 2018 at 7:31 AM, Steve Baker wrote: >> >> >> >> On 24/08/18 04:36, Fox, Kevin M wrote: >>> >>> Or use kubelet in standalone mode. It can be configured for either Cri-o >>> or Docker. You can drive the static manifests from heat/ansible per host as >>> normal and it would be a step in the greater direction of getting to >>> Kubernetes without needing the whole thing at once, if that is the goal. >> >> >> I was an advocate for using kubectl standalone for our container >> orchestration needs well before we started containerizing TripleO. 
After >> talking to a few kubernetes folk I cooled on the idea, because they had one >> of two responses: >> - cautious encouragement, but uncertainty about kubectl standalone >> interface support and consideration for those use cases >> - googly eyed incomprehension followed by "why would you do that??" >> > AFAIK, kubelet does not have a good set of REST APIs yet[1], but things like > heapster do directly interface with kubelet. Last I've seen there was no > general consensus for kubelet to provide a subset of api-server APIs. > However, from a TripleO standpoint providing a set of pod specs to kubelet > generated by ansible may be sufficient? > > [1] https://github.com/kubernetes/kubernetes/issues/28138 Steve mentioned kubectl (the kubernetes CLI, which communicates with kube-api), not kubelet, which is only one component of kubernetes. All kubernetes components may be compiled as one binary (hyperkube) which can be used to minimize footprint. Generated ansible for kubelet is not enough as kubelet doesn't have any orchestration logic. >> >> This was a while ago now so this could be worth revisiting in the future. >> We'll be making gradual changes, the first of which is using podman to >> manage single containers. However podman has native support for the pod >> format, so I'm hoping we can switch to that once this transition is >> complete. Then evaluating kubectl becomes much easier. >> >>> Question. Rather than writing a middle layer to abstract both container >>> engines, couldn't you just use CRI? CRI is CRI-O's native language, and >>> there is support already for Docker as well. >> >> >> We're not writing a middle layer, we're leveraging one which is already >> there. >> >> CRI-O is a socket interface and podman is a CLI interface that both sit on >> top of the exact same Go libraries. At this point, switching to podman needs >> a much lower development effort because we're replacing docker CLI calls. 
>> > I see good value in evaluating kubelet standalone and leveraging its > inbuilt gRPC interfaces with cri-o (rather than using podman) as a long term > strategy, unless we just want to provide an alternative to docker container > runtime with cri-o. I see no value using kubelet without kubernetes IMHO. > >>> >>> >>> Thanks, >>> Kevin >>> ________________________________________ >>> From: Jay Pipes [jaypipes at gmail.com] >>> Sent: Thursday, August 23, 2018 8:36 AM >>> To: openstack-dev at lists.openstack.org >>> Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice >>> API calls >>> >>> Dan, thanks for the details and answers. Appreciated. >>> >>> Best, >>> -jay >>> >>> On 08/23/2018 10:50 AM, Dan Prince wrote: >>>> >>>> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes wrote: >>>>> >>>>> On 08/15/2018 04:01 PM, Emilien Macchi wrote: >>>>>> >>>>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi >>>>> > wrote: >>>>>> >>>>>> More seriously here: there is an ongoing effort to converge the >>>>>> tools around containerization within Red Hat, and we, TripleO >>>>>> are >>>>>> interested to continue the containerization of our services >>>>>> (which >>>>>> was initially done with Docker & Docker-Distribution). >>>>>> We're looking at how these containers could be managed by k8s >>>>>> one >>>>>> day but way before that we plan to swap out Docker and join >>>>>> CRI-O >>>>>> efforts, which seem to be using Podman + Buildah (among other >>>>>> things). >>>>>> >>>>>> I guess my wording wasn't the best but Alex explained way better here: >>>>>> >>>>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >>>>>> >>>>>> If I may have a chance to rephrase, I guess our current intention is >>>>>> to >>>>>> continue our containerization and investigate how we can improve our >>>>>> tooling to better orchestrate the containers. 
>>>>>> We have a nice interface (openstack/paunch) that allows us to run >>>>>> multiple container backends, and we're currently looking outside of >>>>>> Docker to see how we could solve our current challenges with the new >>>>>> tools. >>>>>> We're looking at CRI-O because it happens to be a project with a great >>>>>> community, focusing on some problems that we, TripleO have been facing >>>>>> since we containerized our services. >>>>>> >>>>>> We're doing all of this in the open, so feel free to ask any question. >>>>> >>>>> I appreciate your response, Emilien, thank you. Alex' responses to >>>>> Jeremy on the #openstack-tc channel were informative, thank you Alex. >>>>> >>>>> For now, it *seems* to me that all of the chosen tooling is very Red >>>>> Hat >>>>> centric. Which makes sense to me, considering Triple-O is a Red Hat >>>>> product. >>>> >>>> Perhaps a slight clarification here is needed. "Director" is a Red Hat >>>> product. TripleO is an upstream project that is now largely driven by >>>> Red Hat and is today marked as single vendor. We welcome others to >>>> contribute to the project upstream just like anybody else. >>>> >>>> And for those who don't know the history the TripleO project was once >>>> multi-vendor as well. So a lot of the abstractions we have in place >>>> could easily be extended to support distro specific implementation >>>> details. (Kind of what I view podman as in the scope of this thread). >>>> >>>>> I don't know how much of the current reinvention of container runtimes >>>>> and various tooling around containers is the result of politics. I >>>>> don't >>>>> know how much is the result of certain companies wanting to "own" the >>>>> container stack from top to bottom. Or how much is a result of >>>>> technical >>>>> disagreements that simply cannot (or will not) be resolved among >>>>> contributors in the container development ecosystem. >>>>> >>>>> Or is it some combination of the above? I don't know. 
>>>>> What I *do* know is that the current "NIH du jour" mentality currently >>>>> playing itself out in the container ecosystem -- reminding me very much >>>>> of the Javascript ecosystem -- makes it difficult for any potential >>>>> *consumers* of container libraries, runtimes or applications to be >>>>> confident that any choice they make towards one or the other will be >>>>> the >>>>> *right* choice or even a *possible* choice next year -- or next week. >>>>> Perhaps this is why things like openstack/paunch exist -- to give you >>>>> options if something doesn't pan out. >>>> >>>> This is exactly why paunch exists. >>>> >>>> Re, the podman thing I look at it as an implementation detail. The >>>> good news is that given it is almost a parity replacement for what we >>>> already use we'll still contribute to the OpenStack community in >>>> similar ways. Ultimately whether you run 'docker run' or 'podman run' >>>> you end up with the same thing as far as the existing TripleO >>>> architecture goes. >>>> >>>> Dan >>>> >>>>> You have a tough job. I wish you all the luck in the world in making >>>>> these decisions and hope politics and internal corporate management >>>>> decisions play as little a role in them as possible. 
>>>>> Best, >>>>> -jay [snip] > > > > -- > Regards, > Rabi Mishra > > > 
__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best Regards, Sergii Golovatiuk From eng.szaher at gmail.com Mon Aug 27 09:59:20 2018 From: eng.szaher at gmail.com (Saad Zaher) Date: Mon, 27 Aug 2018 10:59:20 +0100 Subject: [openstack-dev] [Freezer] Update freezer-core team Message-ID: Hello Freezer Team, We are going to do the following updates to the core team: Add - Trinh Nguyen - gengchc2 (New PTL) Remove the following members due to inactivity - yapeng Yang - Ruslan Aliev - Memo Garcia - Pierre Mathieu -------------------------- Best Regards, Saad! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Mon Aug 27 10:16:31 2018 From: ramishra at redhat.com (Rabi Mishra) Date: Mon, 27 Aug 2018 15:46:31 +0530 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C183A4A@EX10MBOX03.pnnl.gov> Message-ID: On Mon, Aug 27, 2018 at 3:25 PM, Sergii Golovatiuk wrote: > Hi, > > On Mon, Aug 27, 2018 at 5:32 AM, Rabi Mishra wrote: > > On Mon, Aug 27, 2018 at 7:31 AM, Steve Baker wrote: > Steve mentioned kubectl (kubernetes CLI which communicates with > Not sure what he meant. Maybe I'm missing something, but I haven't heard of 'kubectl > standalone', though he might have meant a standalone k8s cluster on every node > as you think. > >> >> kube-api) not kubelet which is only one component of kubernetes. All >> kubernetes components may be compiled as one binary (hyperkube) which >> can be used to minimize footprint. 
Generated ansible for kubelet is > not enough as kubelet doesn't have any orchestration logic. > What orchestration logic do we have with TripleO atm? AFAIK we provide > roles data for service placement across nodes, right? > I see standalone kubelet as a first step for scheduling openstack services > within a k8s cluster in the future (maybe). [snip] -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sgolovat at redhat.com Mon Aug 27 10:46:33 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Mon, 27 Aug 2018 12:46:33 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C183A4A@EX10MBOX03.pnnl.gov> Message-ID: Hi, On Mon, Aug 27, 2018 at 12:16 PM, Rabi Mishra wrote: > On Mon, Aug 27, 2018 at 3:25 PM, Sergii Golovatiuk > wrote: >> Hi, >> >> On Mon, Aug 27, 2018 at 5:32 AM, Rabi Mishra wrote: >> > On Mon, Aug 27, 2018 at 7:31 AM, Steve Baker wrote: >> Steve mentioned kubectl (kubernetes CLI which communicates with > > Not sure what he meant. May be I miss something, but not heard of 'kubectl > standalone', though he might have meant standalone k8s cluster on every node > as you think. > >> >> kube-api) not kubelet which is only one component of kubernetes. All >> kubernetes components may be compiled as one binary (hyperkube) which >> can be used to minimize footprint. Generated ansible for kubelet is >> not enough as kubelet doesn't have any orchestration logic. > What orchestration logic do we've with TripleO atm? AFAIK we've provide > roles data for service placement across nodes, right? > I see standalone kubelet as a first step for scheduling openstack services > with in k8s cluster in the future (may be). It's a half measure. I don't see any advantages to that move. We should either adopt kubernetes as a whole or not use its components at all, as the maintenance cost will be expensive. Using kubelet requires resolving networking communication, scale-up/down, sidecars, and inter-service dependencies. > >> >> >> >> This was a while ago now so this could be worth revisiting in the >> >> future. 
>> >> We'll be making gradual changes, the first of which is using podman to >> >> manage single containers. However podman has native support for the pod >> >> format, so I'm hoping we can switch to that once this transition is >> >> complete. Then evaluating kubectl becomes much easier. >> >> >> >>> Question. Rather then writing a middle layer to abstract both >> >>> container >> >>> engines, couldn't you just use CRI? CRI is CRI-O's native language, >> >>> and >> >>> there is support already for Docker as well. >> >> >> >> >> >> We're not writing a middle layer, we're leveraging one which is already >> >> there. >> >> >> >> CRI-O is a socket interface and podman is a CLI interface that both sit >> >> on >> >> top of the exact same Go libraries. At this point, switching to podman >> >> needs >> >> a much lower development effort because we're replacing docker CLI >> >> calls. >> >> >> > I see good value in evaluating kubelet standalone and leveraging it's >> > inbuilt grpc interfaces with cri-o (rather than using podman) as a long >> > term >> > strategy, unless we just want to provide an alternative to docker >> > container >> > runtime with cri-o. >> >> I see no value using kubelet without kubernetes IMHO. >> >> >> > >> >>> >> >>> >> >>> Thanks, >> >>> Kevin >> >>> ________________________________________ >> >>> From: Jay Pipes [jaypipes at gmail.com] >> >>> Sent: Thursday, August 23, 2018 8:36 AM >> >>> To: openstack-dev at lists.openstack.org >> >>> Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for >> >>> nice >> >>> API calls >> >>> >> >>> Dan, thanks for the details and answers. Appreciated. 
>> >>> >> >>> Best, >> >>> -jay >> >>> >> >>> On 08/23/2018 10:50 AM, Dan Prince wrote: >> >>>> >> >>>> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes wrote: >> >>>>> >> >>>>> On 08/15/2018 04:01 PM, Emilien Macchi wrote: >> >>>>>> >> >>>>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi > >>>>>> > wrote: >> >>>>>> >> >>>>>> More seriously here: there is an ongoing effort to converge >> >>>>>> the >> >>>>>> tools around containerization within Red Hat, and we, TripleO >> >>>>>> are >> >>>>>> interested to continue the containerization of our services >> >>>>>> (which >> >>>>>> was initially done with Docker & Docker-Distribution). >> >>>>>> We're looking at how these containers could be managed by k8s >> >>>>>> one >> >>>>>> day but way before that we plan to swap out Docker and join >> >>>>>> CRI-O >> >>>>>> efforts, which seem to be using Podman + Buildah (among other >> >>>>>> things). >> >>>>>> >> >>>>>> I guess my wording wasn't the best but Alex explained way better >> >>>>>> here: >> >>>>>> >> >>>>>> >> >>>>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >> >>>>>> >> >>>>>> If I may have a chance to rephrase, I guess our current intention >> >>>>>> is >> >>>>>> to >> >>>>>> continue our containerization and investigate how we can improve >> >>>>>> our >> >>>>>> tooling to better orchestrate the containers. >> >>>>>> We have a nice interface (openstack/paunch) that allows us to run >> >>>>>> multiple container backends, and we're currently looking outside of >> >>>>>> Docker to see how we could solve our current challenges with the >> >>>>>> new >> >>>>>> tools. >> >>>>>> We're looking at CRI-O because it happens to be a project with a >> >>>>>> great >> >>>>>> community, focusing on some problems that we, TripleO have been >> >>>>>> facing >> >>>>>> since we containerized our services. >> >>>>>> >> >>>>>> We're doing all of this in the open, so feel free to ask any >> >>>>>> question. 
>> >>>>> >> >>>>> I appreciate your response, Emilien, thank you. Alex' responses to >> >>>>> Jeremy on the #openstack-tc channel were informative, thank you >> >>>>> Alex. >> >>>>> >> >>>>> For now, it *seems* to me that all of the chosen tooling is very Red >> >>>>> Hat >> >>>>> centric. Which makes sense to me, considering Triple-O is a Red Hat >> >>>>> product. >> >>>> >> >>>> Perhaps a slight clarification here is needed. "Director" is a Red >> >>>> Hat >> >>>> product. TripleO is an upstream project that is now largely driven by >> >>>> Red Hat and is today marked as single vendor. We welcome others to >> >>>> contribute to the project upstream just like anybody else. >> >>>> >> >>>> And for those who don't know the history the TripleO project was once >> >>>> multi-vendor as well. So a lot of the abstractions we have in place >> >>>> could easily be extended to support distro specific implementation >> >>>> details. (Kind of what I view podman as in the scope of this thread). >> >>>> >> >>>>> I don't know how much of the current reinvention of container >> >>>>> runtimes >> >>>>> and various tooling around containers is the result of politics. I >> >>>>> don't >> >>>>> know how much is the result of certain companies wanting to "own" >> >>>>> the >> >>>>> container stack from top to bottom. Or how much is a result of >> >>>>> technical >> >>>>> disagreements that simply cannot (or will not) be resolved among >> >>>>> contributors in the container development ecosystem. >> >>>>> >> >>>>> Or is it some combination of the above? I don't know. 
> >>>>> >> >>>>> What I *do* know is that the current "NIH du jour" mentality >> >>>>> currently >> >>>>> playing itself out in the container ecosystem -- reminding me very >> >>>>> much >> >>>>> of the Javascript ecosystem -- makes it difficult for any potential >> >>>>> *consumers* of container libraries, runtimes or applications to be >> >>>>> confident that any choice they make towards one or the other will be >> >>>>> the >> >>>>> *right* choice or even a *possible* choice next year -- or next >> >>>>> week. >> >>>>> Perhaps this is why things like openstack/paunch exist -- to give >> >>>>> you >> >>>>> options if something doesn't pan out. >> >>>> >> >>>> This is exactly why paunch exists. >> >>>> >> >>>> Re, the podman thing I look at it as an implementation detail. The >> >>>> good news is that given it is almost a parity replacement for what we >> >>>> already use we'll still contribute to the OpenStack community in >> >>>> similar ways. Ultimately whether you run 'docker run' or 'podman run' >> >>>> you end up with the same thing as far as the existing TripleO >> >>>> architecture goes. >> >>>> >> >>>> Dan >> >>>> >> >>>>> You have a tough job. I wish you all the luck in the world in making >> >>>>> these decisions and hope politics and internal corporate management >> >>>>> decisions play as little a role in them as possible. 
> >>>>> >> >>>>> Best, >> >>>>> -jay >> >>>>> >> >>>>> >> >>>>> __________________________________________________________________________ >> >>>>> OpenStack Development Mailing List (not for usage questions) >> >>>>> Unsubscribe: >> >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- >> > Regards, >> > Rabi Mishra >> -- >> Best Regards, >> Sergii Golovatiuk > -- > Regards, > Rabi Mishra -- Best Regards, Sergii Golovatiuk From sfinucan at redhat.com Mon Aug 27 11:28:51 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Mon, 27 Aug 2018 12:28:51 +0100 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: References: <20180817160937.GB24275@sm-workstation> <1534528706-sup-7156@lrrr.local> Message-ID: <98a51c5966d02fe5981fa2255ec8d3c365868556.camel@redhat.com> On Sat, 2018-08-25 at 08:08 +0800, Alex Xu wrote: > > > 2018-08-18 20:25 GMT+08:00 Chris Dent : > > On Fri, 17 Aug 2018, Doug Hellmann wrote: > > > > > If we ignore the political concerns in the short term, are there > > > other projects actually interested in using placement? With what > > > technical caveats?
Perhaps with modifications of some sort to > > > support > > > the needs of those projects? > > > > > > > I think ignoring the political concerns (in any term) is not > > possible. We are a group of interacting humans, politics are always > > present. Cordial but active debate to determine the best course of > > action is warranted. > > > > (tl;dr: Let's have existing and potential placement contributors > > decide its destiny.) > > > > Five topics I think are relevant here, in order of politics, least > > to most: > > > > 1. Placement has been designed from the outset to have a hard > > contract between it and the services that use it. Being embedded > > and/or deeply associated with one other single service means that > > that contract evolves in a way that is strongly coupled. We made > > placement have an HTTP API, not use RPC, and not produce or consume > > notifications because it is supposed to be bounded and independent. > > Sharing code and human management doesn't enable that. As you'll > > read below, placement's progress has been overly constrained by > > compute. > > > > 2. There are other projects actively using placement, not merely > > interested. If you search codesearch.o.o for terms like "resource > > provider" you can find them. 
But to rattle off those that I'm aware > > of (which I'm certain is an incomplete list): > > > > * Cyborg is actively working on using placement to track FPGA > > e.g., https://review.openstack.org/#/c/577438/ > > > > * Blazar is working on using them for reservations: > > > > https://review.openstack.org/#/q/status:open+project:openstack/blazar+branch:master+topic:bp/placement-api > > > > * Neutron has been reporting to placement for some time and has > > work > > in progress on minimum bandwidth handling with the help of > > placement: > > > > https://review.openstack.org/#/q/status:open+project:openstack/neutron-lib+branch:master+topic:minimum-bandwidth-allocation-placement-api > > > > * Ironic uses resource classes to describe types of nodes > > > > * Mogan (which may or may not be dead, not clear) was intending to > > track nodes with placement: > > > > http://git.openstack.org/cgit/openstack/mogan-specs/tree/specs/pike/approved/track-resources-using-placement.rst > > > > * Zun is working to use placement for "unified resource > > management": > > > > https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management > > > > * Cinder has had discussion about using placement to overcome race > > conditions in its existing scheduling subsystem (a purpose to > > which placement was explicitly designed). > > > > 3. Placement's direction and progress is heavily curtailed by the > > choices and priorities that compute wants or needs to make. That > > means that for the past year or more much of the effort in > > placement > > has been devoted to eventually satisfying NFV use cases driven by > > "enhanced platform awareness" to the detriment of the simple use > > case of "get me some resource providers". Compute is under a lot of > > pressure in this area, and is under-resourced, so placement's > > progress is delayed by being in the (necessarily) narrow engine of > > compute. 
Similarly, compute's overall progress is delayed because > > a > > lot of attention is devoted to placement. > > > > I think the relevance of that latter point has been under-estimated > > by the voices that are hoping to keep placement near to nova. The > > concern there has been that we need to continue iterating in > > concert > > and quickly. I disagree with that from two angles. One is that we > > _will_ continue to work in concert. We are OpenStack, and > > presumably > > all the same people working on placement now will continue to do > > so, > > and many of those are active contributors to nova. We will work > > together. > > > > The other angle is that, actually, placement is several months > > ahead > > of nova in terms of features and it would be to everyone's > > advantage if > > placement, from a feature standpoint, took a time out (to extract) > > while nova had a chance to catch up with fully implementing shared > > providers, nested resource providers, consumer generations, > > resource > > request groups, using the reshaper properly from the virt drivers, > > having a fast forward upgrade script talking to PlacementDirect, > > and > > other things that I'm not remembering right now. The placement side > > for those things is in place. The work that it needs now is a > > _diversity_ of callers (not just nova) so that the features can > > be > > fully exercised and bugs and performance problems found. > > > > The projects above, which might like to--and at various times have > > expressed desire to do so--work on features within placement that > > would benefit their projects, are forced to compete with existing > > priorities to get blueprint attention. Though runways seemed to > > help > > a bit on that front this just-ending cycle, it's simply too dense a > > competitive environment for good, clean progress. > > > > 4. 
While extracting the placement code into another repo within the > > compute umbrella might help a small amount with some of the > > competition described in item 3, it would be insufficient. The same > > forces would apply. > > > > Similarly, _if_ there are factors which are preventing some people > > from being willing to participate with a compute-associated > > project, > > a repo within compute is an insufficient break. > > > > Also, if we are going to go to the trouble of doing any kind of > > disrupting transition of the placement code, we may as well take as > > big a step as possible in this one instance as these > > opportunities > > are rare and our capacity for change is slow. I started working on > > placement in early 2016, at that time we had plans to extract it to > > "its own thing". We've passed the half-way point in 2018. > > > > 5. In OpenStack we have a tradition of the contributors having a > > strong degree of self-determination. If that tradition is to be > > upheld, then it would make sense that the people who designed and > > wrote the code that is being extracted would get to choose what > > happens with it. As much as Mel's and Dan's (only picking on them > > here because they are the dissenting voices that have showed up so > > far) input has been extremely important and helpful in the > > evolution > > of placement, they are not those people. > > > > So my hope is that (in no particular order) Jay Pipes, Eric Fried, > > Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, > > Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to > > placement whom I'm forgetting [1] would express their preference on > > what they'd like to see happen. > > Sorry, I didn't read all the reply, compare to 70 replies, I prefer > to review some specs...English is heavy for me. > > I'm not very care about the extraction. 
But in the currently > situation, I think placement contributors and nova contributors still > need work to together, the resharp API is an example. So whatever we > extract the placement or not, pretty sure nova and placement should > work together. > > And really hope we won't have separate room in the PTG for placement > and nova..I don't want to make a hard choice to listen which one...I > already used to stay at one spot in a week now. What he said, minus the "English is heavy" bit. The only thing I care about is making sure the odd nova-placement'y thing I might care about (vGPU and generic "devices" at large, maybe a future version of NUMA-aware vSwitches) don't get significantly more difficult to implement post-whatever it is we end up doing. Once that constraint is satisfied, it's all good. Now, best get started on those spec reviews, I guess... Stephen > > At the same time, if people from neutron, cinder, blazar, zun, > > mogan, ironic, and cyborg could express their preferences, we can > > get > > through this by acclaim and get on with getting things done. > > > > Thank you. > > > > [1] My apologies if I have left you out. It's Saturday, I'm tired > > from trying to make this happen for so long, and I'm using various > > forms of git blame and git log to extract names from the git > > history > > and there's some degree of magic and guessing going on. 
From jaypipes at gmail.com Mon Aug 27 12:05:47 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 27 Aug 2018 08:05:47 -0400 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: References: <1535025580-sup-8617@lrrr.local> <756c9051-a605-4d34-6c15-99cbcbc4801d@gmail.com> <21463d02-2827-d4ce-915c-4c569aae7585@fried.cc> <1535045097-sup-986@lrrr.local> <315eac7a-2fed-e2ae-538e-e589dea7cf93@gmail.com> <3f2131e5-6785-0429-e731-81c1287b39ff@fried.cc> Message-ID: On 08/24/2018 07:51 PM, Matt Riedemann wrote: > On 8/23/2018 2:05 PM, Chris Dent wrote: >> On Thu, 23 Aug 2018, Dan Smith wrote: >> >>> ...and it doesn't work like mock.sentinel does, which is part of the >>> value. I really think we should put this wherever it needs to be so that >>> it can continue to be as useful as it is today. Even if that means just >>> copying it into another project -- it's not that complicated of a thing. >> >> Yeah, I agree. I had hoped that we could make something that was >> generally useful, but its main value is its interface and if we >> can't have that interface in a library, having it per codebase is no >> biggie. For example it's been copied straight from nova into the >> placement extraction experiments with no changes and, as one would >> expect, works just fine. >> >> Unless people are wed to doing something else, Dan's right, let's >> just do that. > > So just follow me here people, what if we had this common shared library > where code could incubate and then we could write some tools to easily > copy that common code into other projects... Sounds masterful. 
> I'm pretty sure I could get said project approved as a top-level program > under The Foundation and might even get a talk or two out of this idea. > I can see the Intel money rolling in now... Indeed, I'll open the commons bank account. Ciao, -jay From opensrloo at gmail.com Mon Aug 27 13:45:42 2018 From: opensrloo at gmail.com (Ruby Loo) Date: Mon, 27 Aug 2018 09:45:42 -0400 Subject: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes In-Reply-To: References: Message-ID: What is there to say? ++ :) and :(. --ruby On Thu, Aug 23, 2018 at 2:24 PM Julia Kreger wrote: > Greetings everyone! > > In our team meeting this week we stumbled across the subject of > promoting contributors to be sub-project's core reviewers. > Traditionally it is something we've only addressed as needed or > desired by consensus with-in those sub-projects, but we were past due > time to take a look at the entire picture since not everything should > fall to ironic-core. > > And so, I've taken a look at our various repositories and I'm > proposing the following additions: > > For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya > Etingof[1]. Ilya has been actively involved with sushy, sushy-tools, > and virtualbmc this past cycle. I've found many of his reviews and > non-voting review comments insightful and willing to understand. He > has taken on some of the effort that is needed to maintain and keep > these tools usable for the community, and as such adding him to the > core group for these repositories makes lots of sense. > > For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2]. > Kaifeng has taken on some hard problems in ironic and > ironic-inspector, as well as brought up insightful feedback in > ironic-specs. They are demonstrating a solid understanding that I only > see growing as time goes on. 
> > For sushy-core: Debayan Ray[3]. Debayan has been involved with the > community for some time and has worked on sushy from early on in its > life. He has indicated it is near and dear to him, and he has been > actively reviewing and engaging in discussion on patchsets as his time > has permitted. > > With any addition it is good to look at inactivity as well. It saddens > me to say that we've had some contributors move on as priorities have > shifted to where they are no longer involved with the ironic > community. Each person listed below has been inactive for a year or > more and is no longer active in the ironic community. As such I've > removed their group membership from the sub-project core reviewer > groups. Should they return, we will welcome them back to the community > with open arms. > > bifrost-core: Stephanie Miller[4] > ironic-inspector-core: Anton Arefiev[5] > ironic-ui-core: Peter Piela[6], Beth Elwell[7] > > Thanks, > > -Julia > > [1]: http://stackalytics.com/?user_id=etingof&metric=marks > [2]: http://stackalytics.com/?user_id=kaifeng&metric=marks > [3]: http://stackalytics.com/?user_id=deray&metric=marks&release=all > [4]: http://stackalytics.com/?metric=marks&release=all&user_id=stephaneeee > [5]: http://stackalytics.com/?user_id=aarefiev&metric=marks > [6]: http://stackalytics.com/?metric=marks&release=all&user_id=ppiela > [7]: > http://stackalytics.com/?metric=marks&release=all&user_id=bethelwell&module=ironic-ui > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kchamart at redhat.com Mon Aug 27 14:05:05 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 27 Aug 2018 16:05:05 +0200 Subject: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction? In-Reply-To: <775949fc-a058-a076-06a5-c42bb8d016ec@gmail.com> References: <25479d71-5adf-cab5-f02e-6c05cf22b1a2@gmail.com> <3ded53a5-aa02-b8de-79ec-c829e584fe1d@gmail.com> <8d033d80-042f-d6da-d267-d84eb285996e@gmail.com> <1534883437-sup-4403@lrrr.local> <06afaecc-158c-a6d2-2e4d-c586116eac73@gmail.com> <1534945106-sup-4359@lrrr.local> <775949fc-a058-a076-06a5-c42bb8d016ec@gmail.com> Message-ID: <20180827140505.GA2113@paraplu> On Wed, Aug 22, 2018 at 11:03:43AM -0700, melanie witt wrote: [...] [Randomly jumping in on one specific point.] > Aside from that, it has always been difficult to add folks to > nova-core because of the large scope and expertise needed to approve > code across all of Nova. The complexity of Nova, and the amount of context one needs to keep in their head will only _keep_ increasing. Thus, that "difficult to add folks" becomes a self-perpetuating problem. And as we know, not every Nova contributor would want to learn the _whole_ of Nova — so, for the vanishingly small portion of people who might want to learn "all of Nova", it will be an uphill battle where the hill is only going to get steeper. Some people spend all of their time on specific subsystems of Nova (scheduler, virt drivers, etc); yet others work on unrelated projects (that don't overlap with OpenStack, but are "critical dependencies" for Nova and OpenStack) and thus have limited time for Nova, and so forth. This reminds me of the highly articulate thread[1] from Dan Berrangé in 2014. It would be educating to see how we stand today, in relation to the points raised in that thread, after four years. [1] http://lists.openstack.org/pipermail/openstack-dev/2014-September/044872.html [...] 
-- /kashyap From mjturek at linux.vnet.ibm.com Mon Aug 27 14:15:58 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Mon, 27 Aug 2018 10:15:58 -0400 Subject: [openstack-dev] [ironic] Next bug day is Tuesday August 28th! Vote for timeslot! In-Reply-To: <8495ac13-5eb8-8d3a-04fd-0cd837c4c7bb@linux.vnet.ibm.com> References: <8495ac13-5eb8-8d3a-04fd-0cd837c4c7bb@linux.vnet.ibm.com> Message-ID: <395b74fd-31a0-12a2-6a48-3319c903b6ae@linux.vnet.ibm.com> Hello all, Tomorrow's bug day will be at 15:00 UTC. Hope to see you there! Thanks, Mike Turek On 8/21/18 11:56 AM, Michael Turek wrote: > Hello, > > With the next bug day coming in a week from today, I wanted to bring > up the timeslot poll we have going again. > https://doodle.com/poll/ef4m9zmacm2ey7ce > > I'd like to finalize a time slot for this on Thursday so if you want > to cast your vote, please do it soon! Hope to see you there! > > Thanks, > Mike Turek > > On 8/2/18 11:24 AM, Michael Turek wrote: >> Hey all! >> >> Bug day was pretty productive today and we decided to schedule >> another one for the end of this month, on Tuesday the 28th. For >> details see the etherpad for the event [0] >> >> Also since we're changing things up, we decided to also put up a vote >> for the timeslot [1] >> >> If you have any questions or suggestions on how to improve bug day, I >> am all ears! Hope to see you there! 
>> >> Thanks, >> Mike Turek >> >> [0] https://etherpad.openstack.org/p/ironic-bug-day-august-28-2018 >> [1] https://doodle.com/poll/ef4m9zmacm2ey7ce From jaypipes at gmail.com Mon Aug 27 14:18:29 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 27 Aug 2018 10:18:29 -0400 Subject: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict In-Reply-To: References: <1534419109.24276.3@smtp.office365.com> <1534419803.3149.0@smtp.office365.com> Message-ID: <80b05cd8-86fa-a335-2b0f-036c96b38430@gmail.com> Sorry for the delay in responding to this, Gibi and Eric. Comments inline. tl;dr: go with option a) On 08/16/2018 11:34 AM, Eric Fried wrote: > Thanks for this, gibi. > > TL;DR: a). > > I didn't look, but I'm pretty sure we're not caching allocations in the > report client. Today, nobody outside of nova (specifically the resource > tracker via the report client) is supposed to be mucking with instance > allocations, right? And given the global lock in the resource tracker, > it should be pretty difficult to race e.g. a resize and a delete in any > meaningful way. It's not a global (i.e. multi-node) lock. It's a semaphore for just that compute node. Migrations (mostly) involve more than one compute node, so the compute node semaphore is useless in that regard, thus the need to go with option a) and bail out if the generation of any of the consumers involved in the migration operation changes.
> Long term, I also can't come up with any scenario where it would be > appropriate to do a narrowly-focused GET+merge/replace+retry. But > implementing the above short-term plan shouldn't prevent us from adding > retries for individual scenarios later if we do uncover places where it > makes sense. Neither do I. Safety first, IMHO. Best, -jay From jaypipes at gmail.com Mon Aug 27 14:27:38 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 27 Aug 2018 10:27:38 -0400 Subject: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict In-Reply-To: <1534942527.7552.8@smtp.office365.com> References: <1534419109.24276.3@smtp.office365.com> <1534419803.3149.0@smtp.office365.com> <1534500637.29318.1@smtp.office365.com> <7b45da6c-c8d3-c54f-89c0-9798589dfdc4@fried.cc> <1534942527.7552.8@smtp.office365.com> Message-ID: <662fdad7-ddcd-3c68-d94a-d1b06218087c@gmail.com> On 08/22/2018 08:55 AM, Balázs Gibizer wrote: > On Fri, Aug 17, 2018 at 5:40 PM, Eric Fried wrote: >> gibi- >> >>>>  - On migration, when we transfer the allocations in either >>>> direction, a >>>>  conflict means someone managed to resize (or otherwise change >>>>  allocations?) since the last time we pulled data. Given the global >>>> lock >>>>  in the report client, this should have been tough to do. If it does >>>>  happen, I would think any retry would need to be done all the way back >>>>  at the claim, which I imagine is higher up than we should go. So >>>> again, >>>>  I think we should fail the migration and make the user retry. >>> >>>  Do we want to fail the whole migration or just the migration step (e.g. >>>  confirm, revert)? >>>  The latter means that failure during confirm or revert would put the >>>  instance back to VERIFY_RESIZE. While the former would mean that in >>> case >>>  of conflict at confirm we try an automatic revert. But for a >>> conflict at >>>  revert we can only put the instance to ERROR state.
>> This again should be "impossible" to come across. What would the >> behavior be if we hit, say, ValueError in this spot? > I might not totally follow you. I see two options to choose from for the > revert case: > > a) An allocation manipulation error during revert of a migration causes > the instance to go to ERROR. -> The end user cannot retry the revert; the > instance needs to be deleted. I would say this one is correct, but not because the user did anything wrong. Rather, *something inside Nova failed* because technically Nova shouldn't allow resource allocation to change while a server is in CONFIRMING_RESIZE task state. If we didn't make the server go to an ERROR state, I'm afraid we'd have no indication anywhere that this improper situation ever happened and we'd end up hiding some serious data corruption bugs. > b) An allocation manipulation error during revert of a migration causes > the instance to go back to VERIFY_RESIZE state. -> The end user can > retry the revert via the API. > > I see three options to choose from for the confirm case: > > a) An allocation manipulation error during confirm of a migration causes > the instance to go to ERROR. -> The end user cannot retry the confirm; the > instance needs to be deleted. For the same reasons outlined above, I think this is the only safe option. Best, -jay > b) An allocation manipulation error during confirm of a migration causes > the instance to go back to VERIFY_RESIZE state. -> The end user can > retry the confirm via the API. > > c) An allocation manipulation error during confirm of a migration causes > nova to automatically try to revert the migration. (For a failure > during this revert, the same options are available as for the generic revert > case; see above.) > > We also need to consider live migration. It is similar in the sense that > it also uses move_allocations. But it is different in that the end user > doesn't explicitly confirm or revert a live migration.
> > I'm looking for opinions about which option we should take in each case. > > gibi > >> >> -efried From mriedemos at gmail.com Mon Aug 27 15:31:50 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 27 Aug 2018 10:31:50 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: Message-ID: On 8/24/2018 7:36 AM, Chris Dent wrote: > > Over the past few days a few of us have been experimenting with > extracting placement to its own repo, as has been discussed at > length on this list, and in some etherpads: > > https://etherpad.openstack.org/p/placement-extract-stein > https://etherpad.openstack.org/p/placement-extraction-file-notes > > As part of that, I've been doing some exploration to tease out the > issues we're going to hit as we do it. None of this is work that > will be merged; rather, it is stuff to figure out what we need to > know to do the eventual merging correctly and efficiently. > > Please note that doing that is just the near edge of a large > collection of changes that will cascade in many ways to many > projects, tools, distros, etc. The people doing this are aware of > that, and the relative simplicity (and fairly immediate success) of > these experiments is not misleading people into thinking "hey, no > big deal". It's a big deal.
> There's a strategy now (described at the end of the first etherpad > listed above) for trimming the nova history to create a thing which > is placement. From the first run of that, Ed created a github repo > and I branched that to eventually create: > > https://github.com/EdLeafe/placement/pull/2 > > In that, all the placement unit and functional tests are now > passing, and my placecat [1] integration suite also passes. > > That work has highlighted some gaps in the process for trimming > history which will be refined to create another interim repo. We'll > repeat this until the process is smooth, eventually resulting in an > openstack/placement. We talked about the github strategy a bit in the placement meeting today [1]. Without being involved in this technical extraction work for the past few weeks, I came in with a different perspective on the end-game, and it was not aligned with what Chris/Ed thought as far as how we get to the official openstack/placement repo. At a high level, Ed's repo [2] is a fork of nova with large changes on top using pull requests to do things like remove the non-placement nova files, update import paths (because the import structure changes from nova.api.openstack.placement to just placement), and then changes from Chris [3] to get tests working. Then the idea was to just use that to seed the openstack/placement repo and, rather than review the changes along the way*, people who care about what changed (like myself) would see the tests passing and be happy enough. However, I disagree with this approach since it bypasses our community code review system of using Gerrit and relying on a core team to approve changes for the sake of expediency. What I would like to see are the changes that go into making the seed repo and what gets it to passing tests done in gerrit like we do for everything else. There are a couple of options on how this is done though: 1.
Seed the openstack/placement repo with the filter_git_history.sh script output as Ed has done here [4]. This would include moving the placement files to the root of the tree and dropping nova-specific files. Then make incremental changes in gerrit like with [5] and the individual changes which make up Chris's big pull request [3]. I am primarily interested in making sure there are not content changes happening, only mechanical tree-restructuring type changes, stuff like that. I'm asking for more changes in gerrit so they can be sanely reviewed (per normal). 2. Eric took a slightly different tack in that he's OK with just a couple of large changes (or even large patch sets within a single change) in gerrit rather than ~30 individual changes. So that would be more like at most 3 changes in gerrit for [4][5][3]. 3. The 3rd option is we just don't use gerrit at all and seed the official repo with the results of Chris and Ed's work in Ed's repo in github. Clearly this would be the fastest way to get us to a new repo (at the expense of bucking community code review and development process - is an exception worth it?). Option 1 would clearly be a drain on at least 2 nova cores to go through the changes. I think Eric is on board for reviewing options 1 or 2 in either case, but he prefers option 2. Since I'm throwing a wrench in the works, I also need to stand up and review the changes if we go with option 1 or 2. Jay said he'd review them but consider these reviews lower priority. I expect we could get some help from some other nova cores though, maybe not on all changes, but at least some (thinking gibi, alex_xu, sfinucan). Any CI jobs would be non-voting while going through options 1 or 2 until we get to a point that tests should finally be passing and we can make them voting (it should be possible to control this within the repo itself using zuul v3). 
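The history-trimming step Matt refers to can be sketched with stock git tooling. This toy example is illustrative only (the real filter_git_history.sh keeps several paths and runs over the full nova history, and its mechanics may differ): it builds a throwaway repo, then rewrites its history so that only the placement subtree survives, promoted to the repository root.

```shell
# Sketch of subtree-only history trimming (illustrative, not the actual
# filter_git_history.sh). Build a toy "nova" repo, then rewrite it.
set -eu

work=$(mktemp -d)
cd "$work"
git init -q nova && cd nova
git config user.email demo@example.com
git config user.name demo

# Seed one commit containing both placement and non-placement files.
mkdir -p nova/api/openstack/placement
echo 'placement code' > nova/api/openstack/placement/handler.py
echo 'nova-only file'  > README.rst
git add . && git commit -qm 'seed history'

# Rewrite every commit: keep only the placement subtree (as the new
# repository root) and drop commits that become empty.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch \
    --prune-empty --subdirectory-filter nova/api/openstack/placement HEAD

ls   # handler.py is now at the root; README.rst and its history are gone
```

Whichever gerrit option is chosen, a rewrite like this is what produces the seed commit history; reviews would then cover only the changes layered on top of it.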
I would like to know from others (nova core or otherwise) what they would prefer, and if you are a nova core that wants option 1 (or 2), are you willing to help review those incremental changes knowing it will be a drain - but also realizing that we can't really let option 1 drag on while we're doing stein feature development, so ideally this would be done before the PTG. * Yes, I realize I could be reviewing the github pull requests along the way, but that's not really how we do code review in openstack. [1] http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-08-27-14.00.log.html#l-74 [2] https://github.com/EdLeafe/placement [3] https://github.com/EdLeafe/placement/pull/3 [4] https://github.com/EdLeafe/placement/commit/e3173faf59bd1453c3800b2bf57c2af8cfde1697 [5] https://github.com/EdLeafe/placement/commit/e984bef8587009378ea430dd1c12ca3e40a3c901 -- Thanks, Matt From eumel at arcor.de Mon Aug 27 15:54:42 2018 From: eumel at arcor.de (Frank Kloeker) Date: Mon, 27 Aug 2018 17:54:42 +0200 Subject: [openstack-dev] [all] Berlin Hackathon: Hacking the Edge Message-ID: <53844c939fe81fb942b5eed2d8739985@arcor.de> Hello, For the weekend before the Berlin Summit we are planning an additional OpenStack community event: the "Hacking the Edge" Hackathon. The idea is to build a community cloud together, based on Edge technology. We're looking for volunteers: Developers Try out the newest software versions from your projects, like Nova, Cinder, and Neutron. What are the requirements for Edge, and what makes sense? Install different components on different devices and connect all of them. Operators Operating an Edge Cloud is also a challenge; the assumption changes from 'must be online' to 'maybe online'. Which measuring methods are available for monitoring? Where are my backups? Do we also need an operations center in the Edge? Architects General Edge Cloud architecture. What is the plan for connecting new devices with different connectivity?
Scalable application and life-cycle management. Bring your own devices, like laptops, Raspberry Pis, or WiFi routers, which you would connect to the Edge Cloud. We host the event location and provide infrastructure, maybe together with a couple of 5G devices, because the venue has one of the first 5G antennas in Germany. Everybody is welcome to join and have fun; we are limited only by the event space. More details are also in the event description. Don't be afraid to ask me directly, via e-mail or IRC. Kind regards Frank (eumel8) Registration: https://openstack-hackathon-berlin.eventbrite.com/ Collected ideas/workpad: https://etherpad.openstack.org/p/hacking_the_edge_hackathon_berlin From doug at doughellmann.com Mon Aug 27 15:59:30 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 27 Aug 2018 11:59:30 -0400 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: References: Message-ID: <1535385497-sup-5482@lrrr.local> Excerpts from Eric Fried's message of 2018-08-22 09:13:25 -0500: > For some time, nova has been using uuidsentinel [1] which conveniently > allows you to get a random UUID in a single LOC with a readable name > that's the same every time you reference it within that process (but not > across processes). Example usage: [2]. > > We would like other projects (notably the soon-to-be-split-out placement > project) to be able to use uuidsentinel without duplicating the code. So > we would like to stuff it in an oslo lib. > > The question is whether it should live in oslotest [3] or in > oslo_utils.uuidutils [4]. The proposed patches are (almost) the same. > The issues we've thought of so far: > > - If this thing is used only for test, oslotest makes sense. We haven't > thought of a non-test use, but somebody surely will. > - Conversely, if we put it in oslo_utils, we're kinda saying we support > it for non-test too. (This is why the oslo_utils version does some extra > work for thread safety and collision avoidance.)
> - In oslotest, awkwardness is necessary to avoid circular importing: > uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In > oslo_utils.uuidutils, everything is right there. > - It's a... UUID util. If I didn't know anything and I was looking for a > UUID util like uuidsentinel, I would look in a module called uuidutils > first. > > We hereby solicit your opinions, either by further discussion here or as > votes on the respective patches. > > Thanks, > efried > > [1] > https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py > [2] > https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115 > [3] https://review.openstack.org/594068 > [4] https://review.openstack.org/594179 > We discussed this during the Oslo team meeting today, and have settled on the idea of placing Eric's version of the code (with the thread-safe fix and the module-level global) in oslo_utils.fixture to allow it to easily reuse the oslo_utils.uuidutils module and still be clearly marked as test code. Doug From lbragstad at gmail.com Mon Aug 27 16:01:45 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 27 Aug 2018 11:01:45 -0500 Subject: [openstack-dev] [keystone] Stein PTG Schedule Message-ID: <5b1ae665-b4e5-3991-feb2-7941f667f407@gmail.com> I've worked through the list of topics and organized them into a rough schedule [0]. As it stands right now, Monday is going to be the main cross-project day (similar to the identity-integration track in Dublin). We don't have a room on Tuesday and Wednesday, but we will likely have continued cross-project discussions around federation. Thursday and Friday are currently staged for keystone-specific topics. If you see any conflicts or issues with what is proposed, please let me know. 
[0] https://etherpad.openstack.org/p/keystone-stein-ptg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From eng.szaher at gmail.com Mon Aug 27 16:02:24 2018 From: eng.szaher at gmail.com (Saad Zaher) Date: Mon, 27 Aug 2018 17:02:24 +0100 Subject: [openstack-dev] [Freezer] Update freezer-core team In-Reply-To: References: Message-ID: + Ruslan Aliev He is just back from sick leave and said he will be contributing to Freezer again, so he won't be removed. On Mon, Aug 27, 2018 at 10:59 AM Saad Zaher wrote: > Hello Freezer Team, > > We are going to make the following updates to the core team: > > Add > - Trinh Nguyen > - gengchc2 (New PTL) > > Remove the following members due to inactivity > - yapeng Yang > - Ruslan Aliev > - Memo Garcia > - Pierre Mathieu > > > -------------------------- > Best Regards, > Saad! > -- -------------------------- Best Regards, Saad! -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Mon Aug 27 16:04:22 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 27 Aug 2018 11:04:22 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: Message-ID: Thanks Matt, you summed it up nicely. Just one thing to point out... > Option 1 would clearly be a drain on at least 2 nova cores to go through > the changes. I think Eric is on board for reviewing options 1 or 2 in > either case, but he prefers option 2. Since I'm throwing a wrench in the > works, I also need to stand up and review the changes if we go with > option 1 or 2. Jay said he'd review them but consider these reviews > lower priority. I expect we could get some help from some other nova > cores though, maybe not on all changes, but at least some (thinking > gibi, alex_xu, sfinucan).
The placement-core team should be seeded and should be the ones on the hook for the reviews. Since we've agreed in the other thread to make placement-core a superset of nova-core, what you've said above is still applicable, but incomplete: I would expect there to be at least one or two additional non-nova-core placement cores willing to do these reviews. (Assuming Ed and/or Chris to be on that team, I would of course expect them to refrain from approving, regardless of who does the gerrit work, since they've both been developing the changes in github.) -efried From dtantsur at redhat.com Mon Aug 27 16:09:15 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 27 Aug 2018 18:09:15 +0200 Subject: [openstack-dev] [ironic] proposing metalsmith for inclusion into ironic governance Message-ID: Hi all, I would like to propose the metalsmith library [1][2] for inclusion into the bare metal project governance. What it is and is not --------------------- Metalsmith is a library and CLI tool that uses Ironic+Neutron to provision bare metal nodes. It can be seen as a lightweight replacement for Nova when Nova is too much. The primary use case is a single-tenant standalone installer. Metalsmith is not a new service; it does not maintain any state, except for state maintained by Ironic and Neutron. Metalsmith is not and will not be a replacement for Nova in any proper cloud scenario. Metalsmith does have some overlap with Bifrost, with one important feature difference: its primary feature is a mini-scheduler that picks a suitable bare metal node for deployment. I have a partial convergence plan as well! First, as part of this effort I'm working on missing features in openstacksdk, which is used in the OpenStack ansible modules, which are used in Bifrost. Second, I hope we can use it as a helper for letting Bifrost make scheduling decisions. Background ---------- Metalsmith was born with the goal of replacing Nova in the TripleO undercloud.
Indeed, the undercloud uses only a small subset of Nova features, while having features that conflict with Nova's design (for example, bypassing the scheduler [3]). We wanted to avoid putting a lot of provisioning logic into existing TripleO components. So I wrote a library that does not carry any TripleO-specific assumptions, but does allow addressing its needs. Why under Ironic ---------------- I believe the goal of Metalsmith is fully aligned with what the Ironic team is doing around standalone deployment. I think Metalsmith can provide a nice entry point into standalone deployments for people who (for any reason) will not use Bifrost. With this change I hope to get more exposure for it. The library itself is small, documented [2], follows OpenStack practices and does not have particular operating requirements. There is nothing in it that is not familiar to the Ironic team members. Please let me know if you have any questions or concerns. Dmitry [1] https://github.com/openstack/metalsmith [2] https://metalsmith.readthedocs.io/en/latest/ [3] http://tripleo.org/install/advanced_deployment/node_placement.html From jaypipes at gmail.com Mon Aug 27 16:35:03 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 27 Aug 2018 12:35:03 -0400 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: Message-ID: <22347bf6-b38e-d6e9-1de2-62823f6739be@gmail.com> On 08/27/2018 11:31 AM, Matt Riedemann wrote: > On 8/24/2018 7:36 AM, Chris Dent wrote: >> >> Over the past few days a few of us have been experimenting with >> extracting placement to its own repo, as has been discussed at >> length on this list, and in some etherpads: >> >> https://etherpad.openstack.org/p/placement-extract-stein >> https://etherpad.openstack.org/p/placement-extraction-file-notes >> >> As part of that, I've been doing some exploration to tease out the >> issues we're going to hit as we do it.
>> [...] > [...] As mentioned, I prefer to do the multiple patches in Gerrit with non-voting CI jobs approach. I can try and review the patches but they will be lower priority than reviews on reshaper and a number of nova-specs patches that need to be thoroughly reviewed before the inevitable debates in Denver.
-jay From Kevin.Fox at pnnl.gov Mon Aug 27 16:38:53 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Mon, 27 Aug 2018 16:38:53 +0000 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C183A4A@EX10MBOX03.pnnl.gov> , Message-ID: <1A3C52DFCD06494D8528644858247BF01C1847EA@EX10MBOX03.pnnl.gov> I think in this context, kubelet without all of kubernetes still has the value that it provides an abstraction layer that podman/paunch is being suggested to handle. It does not need the things you mention: network, sidecars, scale-up/down, etc. You can use as little as you want. For example, make a pod yaml per container with hostNetwork: true; it will then run just as if it were on the host. You can do just one container; no sidecars necessary. Without the apiserver, it can't do scale-up/down even if you wanted it to. It provides declarative yaml-based management of containers, similar to paunch, so you can skip needing that component. It also already provides crio and docker support via cri. It does provide a little bit of orchestration, in that you drive things with declarative yaml. You drop a yaml file in /etc/kubernetes/manifests, and it will create the container. You delete it, and it removes the container. If you change it, it will update the container. And if something goes wrong with the container, it will try to get it back to the requested state automatically. And it will recover the containers on reboot without help.
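Kevin's /etc/kubernetes/manifests workflow can be illustrated with a minimal static pod definition. This is a generic sketch, not anything TripleO ships (the memcached image and names are examples only); a standalone kubelet watching the manifest directory would start it, reconcile it on edit, restart it on failure, and bring it back after a reboot:

```yaml
# Hypothetical static pod manifest: /etc/kubernetes/manifests/memcached.yaml
# A standalone kubelet started with --pod-manifest-path=/etc/kubernetes/manifests
# creates this container and keeps it at the requested state.
apiVersion: v1
kind: Pod
metadata:
  name: memcached
spec:
  hostNetwork: true        # use the host network, like a plain host process
  restartPolicy: Always    # kubelet restarts the container on failure
  containers:
  - name: memcached
    image: docker.io/library/memcached:1.5-alpine
    args: ["-m", "64"]     # example flags only
```

Deleting the file removes the container and editing it triggers an in-place update, which is the small slice of orchestration Kevin describes.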
Thanks, Kevin ________________________________________ From: Sergii Golovatiuk [sgolovat at redhat.com] Sent: Monday, August 27, 2018 3:46 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls Hi, On Mon, Aug 27, 2018 at 12:16 PM, Rabi Mishra wrote: > On Mon, Aug 27, 2018 at 3:25 PM, Sergii Golovatiuk > wrote: >> >> Hi, >> >> On Mon, Aug 27, 2018 at 5:32 AM, Rabi Mishra wrote: >> > On Mon, Aug 27, 2018 at 7:31 AM, Steve Baker wrote: >> Steve mentioned kubectl (the kubernetes CLI, which communicates with > > > Not sure what he meant. Maybe I'm missing something, but I haven't heard of 'kubectl > standalone', though he might have meant a standalone k8s cluster on every node, > as you think. > >> >> kube-api), not kubelet, which is only one component of kubernetes. All >> kubernetes components may be compiled as one binary (hyperkube) which >> can be used to minimize the footprint. Generated ansible for kubelet is >> not enough, as kubelet doesn't have any orchestration logic. > > > What orchestration logic do we have with TripleO atm? AFAIK we provide > roles data for service placement across nodes, right? > I see standalone kubelet as a first step towards scheduling openstack services > within a k8s cluster in the future (maybe). It's a half measure. I don't see any advantages in that move. We should either adopt kubernetes as a whole or not use its components at all, as the maintenance cost will be expensive. Using kubelet requires resolving networking, communication, scale-up/down, sidecars, and inter-service dependencies. > >> >> >> >> This was a while ago now, so this could be worth revisiting in the >> >> future. >> >> We'll be making gradual changes, the first of which is using podman to >> >> manage single containers. However, podman has native support for the pod >> >> format, so I'm hoping we can switch to that once this transition is >> >> complete. Then evaluating kubectl becomes much easier.
>> >> >> >>> Question. Rather than writing a middle layer to abstract both >> >>> container >> >>> engines, couldn't you just use CRI? CRI is CRI-O's native language, >> >>> and >> >>> there is support already for Docker as well. >> >> >> >> >> >> We're not writing a middle layer, we're leveraging one which is already >> >> there. >> >> >> >> CRI-O is a socket interface and podman is a CLI interface that both sit >> >> on >> >> top of the exact same Go libraries. At this point, switching to podman >> >> needs >> >> a much lower development effort because we're replacing docker CLI >> >> calls. >> >> >> > I see good value in evaluating kubelet standalone and leveraging its >> > inbuilt grpc interfaces with cri-o (rather than using podman) as a long >> > term >> > strategy, unless we just want to provide an alternative to docker >> > container >> > runtime with cri-o. >> >> I see no value in using kubelet without kubernetes, IMHO. >> >> >> > >> >>> >> >>> >> >>> Thanks, >> >>> Kevin >> >>> ________________________________________ >> >>> From: Jay Pipes [jaypipes at gmail.com] >> >>> Sent: Thursday, August 23, 2018 8:36 AM >> >>> To: openstack-dev at lists.openstack.org >> >>> Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for >> >>> nice >> >>> API calls >> >>> >> >>> Dan, thanks for the details and answers. Appreciated. >> >>> >> >>> Best, >> >>> -jay >> >>> >> >>> On 08/23/2018 10:50 AM, Dan Prince wrote: >> >>>> >> >>>> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes wrote: >> >>>>> >> >>>>> On 08/15/2018 04:01 PM, Emilien Macchi wrote: >> >>>>>> >> >>>>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi > >>>>>> > wrote: >> >>>>>> >> >>>>>> More seriously here: there is an ongoing effort to converge >> >>>>>> the >> >>>>>> tools around containerization within Red Hat, and we, TripleO >> >>>>>> are >> >>>>>> interested to continue the containerization of our services >> >>>>>> (which >> >>>>>> was initially done with Docker & Docker-Distribution).
>> >>>>>> We're looking at how these containers could be managed by k8s >> >>>>>> one >> >>>>>> day but way before that we plan to swap out Docker and join >> >>>>>> CRI-O >> >>>>>> efforts, which seem to be using Podman + Buildah (among other >> >>>>>> things). >> >>>>>> >> >>>>>> I guess my wording wasn't the best but Alex explained way better >> >>>>>> here: >> >>>>>> >> >>>>>> >> >>>>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >> >>>>>> >> >>>>>> If I may have a chance to rephrase, I guess our current intention >> >>>>>> is >> >>>>>> to >> >>>>>> continue our containerization and investigate how we can improve >> >>>>>> our >> >>>>>> tooling to better orchestrate the containers. >> >>>>>> We have a nice interface (openstack/paunch) that allows us to run >> >>>>>> multiple container backends, and we're currently looking outside of >> >>>>>> Docker to see how we could solve our current challenges with the >> >>>>>> new >> >>>>>> tools. >> >>>>>> We're looking at CRI-O because it happens to be a project with a >> >>>>>> great >> >>>>>> community, focusing on some problems that we, TripleO have been >> >>>>>> facing >> >>>>>> since we containerized our services. >> >>>>>> >> >>>>>> We're doing all of this in the open, so feel free to ask any >> >>>>>> question. >> >>>>> >> >>>>> I appreciate your response, Emilien, thank you. Alex' responses to >> >>>>> Jeremy on the #openstack-tc channel were informative, thank you >> >>>>> Alex. >> >>>>> >> >>>>> For now, it *seems* to me that all of the chosen tooling is very Red >> >>>>> Hat >> >>>>> centric. Which makes sense to me, considering Triple-O is a Red Hat >> >>>>> product. >> >>>> >> >>>> Perhaps a slight clarification here is needed. "Director" is a Red >> >>>> Hat >> >>>> product. TripleO is an upstream project that is now largely driven by >> >>>> Red Hat and is today marked as single vendor. 
We welcome others to >> >>>> contribute to the project upstream just like anybody else. >> >>>> >> >>>> And for those who don't know the history the TripleO project was once >> >>>> multi-vendor as well. So a lot of the abstractions we have in place >> >>>> could easily be extended to support distro specific implementation >> >>>> details. (Kind of what I view podman as in the scope of this thread). >> >>>> >> >>>>> I don't know how much of the current reinvention of container >> >>>>> runtimes >> >>>>> and various tooling around containers is the result of politics. I >> >>>>> don't >> >>>>> know how much is the result of certain companies wanting to "own" >> >>>>> the >> >>>>> container stack from top to bottom. Or how much is a result of >> >>>>> technical >> >>>>> disagreements that simply cannot (or will not) be resolved among >> >>>>> contributors in the container development ecosystem. >> >>>>> >> >>>>> Or is it some combination of the above? I don't know. >> >>>>> >> >>>>> What I *do* know is that the current "NIH du jour" mentality >> >>>>> currently >> >>>>> playing itself out in the container ecosystem -- reminding me very >> >>>>> much >> >>>>> of the Javascript ecosystem -- makes it difficult for any potential >> >>>>> *consumers* of container libraries, runtimes or applications to be >> >>>>> confident that any choice they make towards one of the other will be >> >>>>> the >> >>>>> *right* choice or even a *possible* choice next year -- or next >> >>>>> week. >> >>>>> Perhaps this is why things like openstack/paunch exist -- to give >> >>>>> you >> >>>>> options if something doesn't pan out. >> >>>> >> >>>> This is exactly why paunch exists. >> >>>> >> >>>> Re, the podman thing I look at it as an implementation detail. The >> >>>> good news is that given it is almost a parity replacement for what we >> >>>> already use we'll still contribute to the OpenStack community in >> >>>> similar ways. 
Ultimately whether you run 'docker run' or 'podman run' >> >>>> you end up with the same thing as far as the existing TripleO >> >>>> architecture goes. >> >>>> >> >>>> Dan >> >>>> >> >>>>> You have a tough job. I wish you all the luck in the world in making >> >>>>> these decisions and hope politics and internal corporate management >> >>>>> decisions play as little a role in them as possible. >> >>>>> >> >>>>> Best, >> >>>>> -jay >> > >> > -- >> > Regards, >> > Rabi Mishra -- Best Regards, Sergii Golovatiuk __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Jesse.Pretorius at rackspace.co.uk Mon Aug 27 16:39:02 2018 From: Jesse.Pretorius at rackspace.co.uk (Jesse Pretorius) Date: Mon, 27 Aug
2018 16:39:02 +0000 Subject: [openstack-dev] [TripleO][kolla-ansible][DevStack][Tempest][openstack-ansible] Collaborate towards creating a unified ansible tempest role in openstack-ansible project In-Reply-To: References: Message-ID: <6F13BBD8-679B-4F3E-8585-2F90F6A5F077@rackspace.co.uk> >On 8/27/18, 7:33 AM, "Chandan kumar" wrote: > I have summarized the problem statement and requirements on this etherpad [3]. > Feel free to add your requirements and questions for the same on the > etherpad so that we can shape the unified ansible role in a better > way. > Links: > 1. http://lists.openstack.org/pipermail/openstack-dev/2018-August/133119.html > 2. https://github.com/openstack/openstack-ansible-os_tempest > 3. https://etherpad.openstack.org/p/ansible-tempest-role Thanks for compiling this Chandan. I've added the really base requirements from an OSA standpoint that come to mind and a question that's been hanging in the recesses of my mind for a while. ________________________________ Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. Your cooperation is appreciated. 
From miguel at mlavalle.com Mon Aug 27 16:42:47 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 27 Aug 2018 11:42:47 -0500 Subject: [openstack-dev] Appointing Slawek Kaplonski to the Neutron Drivers team Message-ID: Dear Neutron team, In order to help the Neutron Drivers team to perform its very important job of guiding the community to evolve the OpenStack Networking architecture to meet the needs of our current and future users [1], I have asked Slawek Kaplonski to join it. Over the past few years, he has gained very valuable experience with OpenStack Networking, both as a deployer and more recently working with one of our key packagers. He played a paramount role in implementing our QoS (Quality of Service) features, currently leading that sub-team. He also leads the CI sub-team, ensuring the prompt discovery and fixing of bugs in our software. On top of that, he is one of our most active reviewers, contributor of code to our reference implementation and fixer of bugs. I am very confident in Slawek making great contributions to the Neutron Drivers team. Best regards Miguel [1] https://docs.openstack.org/neutron/latest/contributor/policies/neutron-teams.html#drivers-team -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon Aug 27 16:42:50 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 27 Aug 2018 12:42:50 -0400 Subject: [openstack-dev] [TripleO][kolla-ansible][DevStack][Tempest][openstack-ansible] Collaborate towards creating a unified ansible tempest role in openstack-ansible project In-Reply-To: <6F13BBD8-679B-4F3E-8585-2F90F6A5F077@rackspace.co.uk> References: <6F13BBD8-679B-4F3E-8585-2F90F6A5F077@rackspace.co.uk> Message-ID: Hi Chandan, This is great, I added some more OSA-side comments, I'd love for us to find some time to sit down to discuss this at the PTG.
Thanks, Mohammed On Mon, Aug 27, 2018 at 12:39 PM, Jesse Pretorius wrote: >>On 8/27/18, 7:33 AM, "Chandan kumar" wrote: > >> I have summarized the problem statement and requirements on this etherpad [3]. >> Feel free to add your requirements and questions for the same on the >> etherpad so that we can shape the unified ansible role in a better >> way. > >> Links: >> 1. http://lists.openstack.org/pipermail/openstack-dev/2018-August/133119.html >> 2. https://github.com/openstack/openstack-ansible-os_tempest >> 3. https://etherpad.openstack.org/p/ansible-tempest-role > > Thanks for compiling this Chandan. I've added the really base requirements from an OSA standpoint that come to mind and a question that's been hanging in the recesses of my mind for a while. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W.
http://vexxhost.com From openstack at fried.cc Mon Aug 27 16:44:56 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 27 Aug 2018 11:44:56 -0500 Subject: [openstack-dev] [oslo] UUID sentinel needs a home In-Reply-To: <1535385497-sup-5482@lrrr.local> References: <1535385497-sup-5482@lrrr.local> Message-ID: Thanks Doug. I restored [4] and moved the code to the fixture module. Enjoy. -efried On 08/27/2018 10:59 AM, Doug Hellmann wrote: > Excerpts from Eric Fried's message of 2018-08-22 09:13:25 -0500: >> For some time, nova has been using uuidsentinel [1] which conveniently >> allows you to get a random UUID in a single LOC with a readable name >> that's the same every time you reference it within that process (but not >> across processes). Example usage: [2]. >> >> We would like other projects (notably the soon-to-be-split-out placement >> project) to be able to use uuidsentinel without duplicating the code. So >> we would like to stuff it in an oslo lib. >> >> The question is whether it should live in oslotest [3] or in >> oslo_utils.uuidutils [4]. The proposed patches are (almost) the same. >> The issues we've thought of so far: >> >> - If this thing is used only for test, oslotest makes sense. We haven't >> thought of a non-test use, but somebody surely will. >> - Conversely, if we put it in oslo_utils, we're kinda saying we support >> it for non-test too. (This is why the oslo_utils version does some extra >> work for thread safety and collision avoidance.) >> - In oslotest, awkwardness is necessary to avoid circular importing: >> uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In >> oslo_utils.uuidutils, everything is right there. >> - It's a... UUID util. If I didn't know anything and I was looking for a >> UUID util like uuidsentinel, I would look in a module called uuidutils >> first. >> >> We hereby solicit your opinions, either by further discussion here or as >> votes on the respective patches. 
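The uuidsentinel behavior described above — attribute access yields a random UUID that is stable per name within a process — can be captured in a few lines. A simplified sketch (the class name is invented here; the oslo version additionally handles thread safety and collision avoidance, as noted in the thread):

```python
# Minimal sketch of the uuidsentinel idea: the first access to
# uuids.<name> generates a random UUID; later accesses to the same
# name return the same value for the lifetime of the process.
import uuid


class UUIDSentinels(object):
    def __init__(self):
        self._sentinels = {}

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, i.e. for
        # sentinel names; ignore dunder/private lookups.
        if name.startswith('_'):
            raise AttributeError(name)
        if name not in self._sentinels:
            self._sentinels[name] = str(uuid.uuid4())
        return self._sentinels[name]


uuids = UUIDSentinels()

assert uuids.instance1 == uuids.instance1  # stable within the process
assert uuids.instance1 != uuids.instance2  # distinct per name
```

This is what makes test code like `obj.uuid = uuids.my_server` readable: the name documents intent while still being a valid, unique UUID.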
>> >> Thanks, >> efried >> >> [1] >> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py >> [2] >> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115 >> [3] https://review.openstack.org/594068 >> [4] https://review.openstack.org/594179 >> > > We discussed this during the Oslo team meeting today, and have settled > on the idea of placing Eric's version of the code (with the thread-safe > fix and the module-level global) in oslo_utils.fixture to allow it to > easily reuse the oslo_utils.uuidutils module and still be clearly marked > as test code. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From juliaashleykreger at gmail.com Mon Aug 27 16:53:49 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 27 Aug 2018 09:53:49 -0700 Subject: [openstack-dev] [ironic][tripleo][edge] Discussing ironic federation and distributed deployments Message-ID: Greetings everyone! We in Ironic land would like to go into the PTG with some additional thoughts, requirements, and ideas as it relates to distributed and geographically distributed deployments. As you may or may not know, we did take a first step towards supporting some of the architectures needed with conductor_groups this past cycle, but we have two very distinct needs that have been expressed with-in the community. 1) Need to federate and share baremetal resources between ironic deployments. A specification[1] was proposed to try and begin to capture what this would look like ironic wise. 
At a high level, this would look like an ironic node that actually consumes and remotely manages a node via another ironic deployment. Largely this would be for stand-alone user/admin deployment cases, where hardware inventory insight is needed. 2) Need to securely manage remote sites with different security postures, while not exposing control-plane components as an attack surface. Some early discussion of this would involve changing Conductor/IPA communication flow[2], or at least supporting a different model, and some sort of lightweight intermediate middle-man service that helps facilitate the local site management. With that in mind, we would like to schedule a call for sometime next week where we can kind of talk through and discuss these thoughts and needs in real time in advance of the PTG so we can be even better prepared. We are attempting to identify a time with a doodle[3]. Please select a time and date, so we can schedule something for next week. Thanks, -Julia [1]: https://review.openstack.org/#/c/560152/ [2]: https://review.openstack.org/212206 [3]: https://doodle.com/poll/y355wt97heffvp3m From juliaashleykreger at gmail.com Mon Aug 27 16:59:24 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 27 Aug 2018 09:59:24 -0700 Subject: [openstack-dev] [ironic] proposing metalsmith for inclusion into ironic governance In-Reply-To: References: Message-ID: On Mon, Aug 27, 2018 at 9:09 AM Dmitry Tantsur wrote: > I would like to propose the metalsmith library [1][2] for inclusion into the bare > metal project governance. I am +1 to this. I think this is a logical inclusion to Ironic's governance, and overall benefits the ecosystem by allowing greater choice and ability to leverage ironic. Thanks Dmitry!
From miguel at mlavalle.com Mon Aug 27 17:11:36 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 27 Aug 2018 12:11:36 -0500 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <20180827093210.rgrgcrkggfims53j@localhost> References: <20180823104210.kgctxfjiq47uru34@localhost> <20180823170756.sz5qj2lxdy4i4od2@localhost> <880e2ff0-cf3a-7d6d-a805-816464858aee@gmail.com> <20180827093210.rgrgcrkggfims53j@localhost> Message-ID: Hi Matt, Isn't multiple port binding what we need in the case of ports? In my mind, the big motivator for multiple port binding is the ability to change a port's backend Best regards Miguel On Mon, Aug 27, 2018 at 4:32 AM, Gorka Eguileor wrote: > On 24/08, Jay S Bryant wrote: > > > > > > On 8/23/2018 12:07 PM, Gorka Eguileor wrote: > > > On 23/08, Dan Smith wrote: > > > > > I think Nova should never have to rely on Cinder's hosts/backends > > > > > information to do migrations or any other operation. > > > > > > > > > > In this case even if Nova had that info, it wouldn't be the > solution. > > > > > Cinder would reject migrations if there's an incompatibility on the > > > > > Volume Type (AZ, Referenced backend, capabilities...) > > > > I think I'm missing a bunch of cinder knowledge required to fully > grok > > > > this situation and probably need to do some reading. Is there some > > > > reason that a volume type can't exist in multiple backends or > something? > > > > I guess I think of volume type as flavor, and the same definition in > two > > > > places would be interchangeable -- is that not the case? > > > > > > > Hi, > > > > > > I just know the basics of flavors, and they are kind of similar, though > > > I'm sure there are quite a few differences. > > > > > > Sure, multiple storage arrays can meet the requirements of a Volume > > > Type, but then when you create the volume you don't know where it's > > > going to land. 
If your volume type is too generic your volume could land > > > somewhere your cell cannot reach. > > > > > > > > > > I don't know anything about Nova cells, so I don't know the > specifics of > > > > > how we could do the mapping between them and Cinder backends, but > > > > > considering the limited range of possibilities in Cinder I would > say we > > > > > only have Volume Types and AZs to work out a solution. > > > > I think the only mapping we need is affinity or distance. The point > of > > > > needing to migrate the volume would purely be because moving cells > > > > likely means you moved physically farther away from where you were, > > > > potentially with different storage connections and networking. It > > > > doesn't *have* to mean that, but I think in reality it would. So the > > > > question I think Matt is looking to answer here is "how do we move an > > > > instance from a DC in building A to building C and make sure the > > > > volume gets moved to some storage local in the new building so we're > > > > not just transiting back to the original home for no reason?" > > > > > > > > Does that explanation help or are you saying that's fundamentally > hard > > > > to do/orchestrate? > > > > > > > > Fundamentally, the cells thing doesn't even need to be part of the > > > > discussion, as the same rules would apply if we're just doing a > normal > > > > migration but need to make sure that storage remains affined to > compute. > > > > > > > We could probably work something out using the affinity filter, but > > > right now we don't have a way of doing what you need. > > > > > > We could probably rework the migration to accept scheduler hints to be > > > used with the affinity filter and to accept calls with the host or the > > > hints, that way it could migrate a volume without knowing the > > > destination host and decide it based on affinity. > > > > > > We may have to do more modifications, but it could be a way to do it.
> > > > > > > > > > > > > I don't know how the Nova Placement works, but it could hold an > > > > > equivalency mapping of volume types to cells as in: > > > > > > > > > > Cell#1 Cell#2 > > > > > > > > > > VolTypeA <--> VolTypeD > > > > > VolTypeB <--> VolTypeE > > > > > VolTypeC <--> VolTypeF > > > > > > > > > > Then it could do volume retypes (allowing migration) and that would > > > > > properly move the volumes from one backend to another. > > > > The only way I can think that we could do this in placement would be > if > > > > volume types were resource providers and we assigned them traits that > > > > had special meaning to nova indicating equivalence. Several of the > words > > > > in that sentence are likely to freak out placement people, myself > > > > included :) > > > > > > > > So is the concern just that we need to know what volume types in one > > > > backend map to those in another so that when we do the migration we > know > > > > what to ask for? Is "they are the same name" not enough? Going back > to > > > > the flavor analogy, you could kinda compare two flavor definitions > and > > > > have a good idea if they're equivalent or not... > > > > > > > > --Dan > > > In Cinder you don't get that from Volume Types, unless all your > backends > > > have the same hardware and are configured exactly the same. > > > > > > There can be some storage-specific information there, which doesn't > > > correlate to anything on other hardware. Volume types may refer to a > > > specific pool that has been configured in the array to use a specific > type > > > of disks. But even the info on the type of disks is unknown to the > > > volume type. > > > > > > I haven't checked the PTG agenda yet, but is there a meeting on this? > > > Because we may want to have one to try to understand the requirements > > > and figure out if there's a way to do it with current Cinder > > > functionality or if we'd need something new.
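The equivalency-mapping idea floated in this exchange can be sketched as a plain lookup structure: given a volume type in the source cell, find the type to retype to in the destination cell, and reject the migration when no equivalent exists. This is purely illustrative data plumbing — neither Nova nor Cinder exposes such a mapping today, and the cell/type names come from the example above:

```python
# Hypothetical equivalency mapping of volume types between cells,
# mirroring the Cell#1 <--> Cell#2 table in the discussion above.
EQUIVALENT_TYPES = {
    ("cell1", "VolTypeA"): {"cell2": "VolTypeD"},
    ("cell1", "VolTypeB"): {"cell2": "VolTypeE"},
    ("cell1", "VolTypeC"): {"cell2": "VolTypeF"},
}


def equivalent_volume_type(src_cell, vol_type, dst_cell):
    """Return the volume type to retype to in dst_cell.

    Raises LookupError when no equivalent is known, which is where a
    cross-cell migration would have to be rejected -- analogous to
    Cinder rejecting a retype between incompatible volume types.
    """
    try:
        return EQUIVALENT_TYPES[(src_cell, vol_type)][dst_cell]
    except KeyError:
        raise LookupError("no equivalent of %s/%s in %s"
                          % (src_cell, vol_type, dst_cell))


assert equivalent_volume_type("cell1", "VolTypeB", "cell2") == "VolTypeE"
```

Whether the mapping lives in Placement, in configuration, or somewhere else entirely is exactly the open question in this thread.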
> > Gorka, > > > > I don't think that this has been put on the agenda yet. Might be good to > > add. I don't think we have a cross project time officially planned with > > Nova. I will start that discussion with Melanie so that we can cover the > > couple of cross projects subjects we have. > > > > Jay > > Thanks Jay! > > > > > > > Cheers, > > > Gorka. > > > > > > ____________________________________________________________ > ______________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From assaf at redhat.com Mon Aug 27 17:18:27 2018 From: assaf at redhat.com (Assaf Muller) Date: Mon, 27 Aug 2018 13:18:27 -0400 Subject: [openstack-dev] Appointing Slawek Kaplonski to the Neutron Drivers team In-Reply-To: References: Message-ID: On Mon, Aug 27, 2018 at 12:42 PM, Miguel Lavalle wrote: > Dear Neutron team, > > In order to help the Neutron Drivers team to perform its very important job > of guiding the community to evolve the OpenStack Networking architecture to > meet the needs of our current and future users [1], I have asked Slawek > Kaplonski to join it. 
Over the past few years, he has gained very valuable > experience with OpenStack Networking, both as a deployer and more recently > working with one of our key packagers. He played a paramount role in > implementing our QoS (Quality of Service) features, currently leading that > sub-team. He also leads the CI sub-team, making sure the prompt discovery > and fixing of bugs in our software. On top of that, he is one of our most > active reviewers, contributor of code to our reference implementation and > fixer of bugs. I am very confident in Slawek making great contributions to > the Neutron Drivers team. Congratulations Slawek, I think you'll do a great job :) > > Best regards > > Miguel > > [1] > https://docs.openstack.org/neutron/latest/contributor/policies/neutron-teams.html#drivers-team > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From kendall at openstack.org Mon Aug 27 18:27:06 2018 From: kendall at openstack.org (Kendall Waters) Date: Mon, 27 Aug 2018 13:27:06 -0500 Subject: [openstack-dev] Early Bird Pricing Ends Tomorrow - OpenStack Summit Berlin Message-ID: Hi everyone, Friendly reminder that the early bird ticket price deadline for the OpenStack Summit Berlin is tomorrow, August 28 at 11:59pm PT (August 29, 6:59 UTC). In Berlin, there will be sessions and workshops around open infrastructure use cases, including CI/CD, container infrastructure, edge computing, HPC / AI / GPUs, private & hybrid cloud, public cloud and NFV. In case you haven’t seen it, the agenda is now live and includes sessions and workshops from Ocado Technology, Metronom, Oerlikon, and more! In addition, make sure to check out the Edge Hackathon hosted by Open Telekom Cloud the weekend prior to the Summit. 
Register NOW before the price increases to $999 USD! Interested in sponsoring the Summit? Find out more here or email summit at openstack.org. Cheers, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Aug 27 18:53:38 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 27 Aug 2018 13:53:38 -0500 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: <20180823104210.kgctxfjiq47uru34@localhost> <20180823170756.sz5qj2lxdy4i4od2@localhost> <880e2ff0-cf3a-7d6d-a805-816464858aee@gmail.com> <20180827093210.rgrgcrkggfims53j@localhost> Message-ID: <71965ead-4f30-1709-1df6-176195809a55@gmail.com> On 8/27/2018 12:11 PM, Miguel Lavalle wrote: > Isn't multiple port binding what we need in the case of ports? In my > mind, the big motivator for multiple port binding is the ability to > change a port's backend Hmm, yes maybe. Nova's usage of multiple port bindings today is restricted to live migration which isn't what we're supporting with the initial cross-cell (cold) migration support, but it could be a dependency if that's what we need. What I was wondering is if there is a concept like a port spanning or migrating across networks? I'm assuming there isn't, and I'm not even sure if that would be required here. But it would mean there is an implicit requirement that for cross-cell migration to work, neutron networks need to span cells (similarly storage backends would need to span cells). 
-- Thanks, Matt From melwittt at gmail.com Mon Aug 27 18:55:18 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 27 Aug 2018 11:55:18 -0700 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <880e2ff0-cf3a-7d6d-a805-816464858aee@gmail.com> References: <20180823104210.kgctxfjiq47uru34@localhost> <20180823170756.sz5qj2lxdy4i4od2@localhost> <880e2ff0-cf3a-7d6d-a805-816464858aee@gmail.com> Message-ID: On Fri, 24 Aug 2018 10:44:16 -0500, Jay S Bryant wrote: >> I haven't checked the PTG agenda yet, but is there a meeting on this? >> Because we may want to have one to try to understand the requirements >> and figure out if there's a way to do it with current Cinder >> functionality of if we'd need something new. > Gorka, > > I don't think that this has been put on the agenda yet.  Might be good > to add.  I don't think we have a cross project time officially planned > with Nova.  I will start that discussion with Melanie so that we can > cover the couple of cross projects subjects we have. Just to update everyone, we've schedule Cinder/Nova cross project time for Thursday 9am-11am at the PTG, please add topics starting at L134 in the Cinder section: https://etherpad.openstack.org/p/nova-ptg-stein Cheers, -melanie From skaplons at redhat.com Mon Aug 27 19:36:51 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 27 Aug 2018 21:36:51 +0200 Subject: [openstack-dev] Appointing Slawek Kaplonski to the Neutron Drivers team In-Reply-To: References: Message-ID: <2D537092-41CB-4F58-B232-CA660C8021E9@redhat.com> Hi, Thanks a lot. I will do my best to help Neutron Drivers team :) > Wiadomość napisana przez Assaf Muller w dniu 27.08.2018, o godz. 
19:18: > > On Mon, Aug 27, 2018 at 12:42 PM, Miguel Lavalle wrote: >> Dear Neutron team, >> >> In order to help the Neutron Drivers team to perform its very important job >> of guiding the community to evolve the OpenStack Networking architecture to >> meet the needs of our current and future users [1], I have asked Slawek >> Kaplonski to join it. Over the past few years, he has gained very valuable >> experience with OpenStack Networking, both as a deployer and more recently >> working with one of our key packagers. He played a paramount role in >> implementing our QoS (Quality of Service) features, currently leading that >> sub-team. He also leads the CI sub-team, making sure the prompt discovery >> and fixing of bugs in our software. On top of that, he is one of our most >> active reviewers, contributor of code to our reference implementation and >> fixer of bugs. I am very confident in Slawek making great contributions to >> the Neutron Drivers team. > > Congratulations Slawek, I think you'll do a great job :) > >> >> Best regards >> >> Miguel >> >> [1] >> https://docs.openstack.org/neutron/latest/contributor/policies/neutron-teams.html#drivers-team >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From doug at doughellmann.com Mon Aug 27 19:37:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 27 Aug 2018 15:37:34 -0400 Subject: [openstack-dev] [goal][python3] week 3 
update Message-ID: <1535398507-sup-4428@lrrr.local> This is week 3 of the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). == What we learned last week == We have a few enthusiastic folks who want to contribute to the goal who have not been involved in the previous discussion with goal champions. If you are one of them, please get in touch with me BEFORE beginning any work. http://lists.openstack.org/pipermail/openstack-dev/2018-August/133610.html In the course of adding python 3.6 unit tests to Manila, a recursion bug in setting up the SSL context was reported. https://bugs.launchpad.net/manila/+bug/1788253 (We could use some help debugging it.) Several projects have their .gitignore files set up to ignore all '.' files. I'm not sure why this is the case. It has caused some issues with the migration, but I think we've worked around the problem in the scripts now. We extended the scripts for generating the migration patches to handle the neutron-specific versions of the unit test jobs for python 3.5 and 3.6. The Storyboard UI has some performance issues when a single story has several hundred comments. This is an unusual situation, which we don't expect to come up for "normal" stories, but the SB team discussed some ways to address it. Akihiro Motoki expressed some concern about the new release notes job being set up in horizon, and how to test it. The "new" job is the same as the "old" job except that it sets up sphinx using python3. The versions of sphinx and reno that we rely on for the release notes jobs all work under python3, and projects don't have any convenient way to install extra dependencies, so we are confident that the new version of the job works. If you find that not to be true for your project, we can help fix the problem. We have a few repos with unstable functional tests, and we seem to have some instability in the integrated gate as well.
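For teams getting ready for their migration, the generated in-repo settings typically declare the python 3 unit test job templates roughly like this (an illustrative sketch only — the exact template names and layout for any given project come from the migration scripts and may differ):

```yaml
# Illustrative .zuul.yaml fragment; template names are examples and may
# not match what the migration patches propose for your repository.
- project:
    templates:
      - openstack-python-jobs
      - openstack-python35-jobs
      - openstack-python36-jobs
```

Projects that need variants (for example the neutron-specific unit test jobs mentioned above) get those instead of the generic templates.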
== Ongoing and Completed Work == These teams have started or completed their Zuul migration work: +---------------------+------+-------+------+ | Team | Open | Total | Done | +---------------------+------+-------+------+ | Documentation | 0 | 12 | yes | | OpenStack-Helm | 5 | 5 | | | OpenStackAnsible | 70 | 270 | | | OpenStackClient | 10 | 19 | | | OpenStackSDK | 12 | 15 | | | PowerVMStackers | 0 | 15 | yes | | Technical Committee | 0 | 5 | yes | | blazar | 16 | 16 | | | congress | 1 | 16 | | | cyborg | 2 | 9 | | | designate | 10 | 17 | | | ec2-api | 4 | 7 | | | freezer | 26 | 30 | | | glance | 16 | 16 | | | horizon | 0 | 8 | yes | | ironic | 22 | 60 | | | karbor | 30 | 30 | | | keystone | 35 | 35 | | | kolla | 1 | 8 | | | kuryr | 26 | 29 | | | magnum | 24 | 29 | | | manila | 19 | 19 | | | masakari | 18 | 18 | | | mistral | 0 | 25 | yes | | monasca | 20 | 69 | | | murano | 25 | 25 | | | octavia | 5 | 23 | | | oslo | 3 | 157 | | | other | 3 | 7 | | | qinling | 1 | 6 | | | requirements | 0 | 5 | yes | | sahara | 0 | 27 | yes | | searchlight | 5 | 13 | | | solum | 0 | 17 | yes | | storlets | 5 | 5 | | | swift | 9 | 11 | | | tacker | 16 | 16 | | | tricircle | 5 | 9 | | | tripleo | 67 | 78 | | | vitrage | 0 | 17 | yes | | watcher | 12 | 17 | | | winstackers | 6 | 11 | | | zaqar | 12 | 17 | | | zun | 0 | 13 | yes | +---------------------+------+-------+------+ == Next Steps == If your team is ready to have your zuul settings migrated, please let us know by following up to this email. We will start with the volunteers, and then work our way through the other teams. After the Rocky cycle-trailing projects are released, I will propose the change to project-config to change all of the packaging jobs to use the new publish-to-pypi-python3 template. We should be able to have that change in place before the first milestone for Stein so that we have an opportunity to test it. == How can you help? == 1. Choose a patch that has failing tests and help fix it. 
https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) 2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects. 3. Work on adding functional test jobs that run under Python 3. == How can you ask for help? == If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list. == Reference Material == Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open Storyboard: https://storyboard.openstack.org/#!/board/104 Zuul migration notes: https://etherpad.openstack.org/p/python3-first Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 From melwittt at gmail.com Mon Aug 27 19:50:01 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 27 Aug 2018 12:50:01 -0700 Subject: [openstack-dev] [nova][cinder] cross project time at the PTG Message-ID: Howdy everyone, We've scheduled cross project time for Cinder/Nova at the PTG from 9am-11am on Thursday in the Nova room. 
Please add topics you'd like to discuss during our cross project time to the etherpad at L133: https://etherpad.openstack.org/p/nova-ptg-stein Cheers, -melanie From melwittt at gmail.com Mon Aug 27 19:51:06 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 27 Aug 2018 12:51:06 -0700 Subject: [openstack-dev] [nova][ironic] cross project time at the PTG Message-ID: Howdy everyone, We've scheduled cross project time for Ironic/Nova at the PTG from ~3:30pm-5pm on Thursday in the Nova room. Please add topics you'd like to discuss during our cross project time to the etherpad in the Ironic section at L139: https://etherpad.openstack.org/p/nova-ptg-stein Cheers, -melanie From melwittt at gmail.com Mon Aug 27 19:52:54 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 27 Aug 2018 12:52:54 -0700 Subject: [openstack-dev] [nova][neutron] cross project time at the PTG Message-ID: <2f9937d7-270d-6440-a64a-547e88a98270@gmail.com> Howdy everyone, We've scheduled cross project time for Neutron/Nova at the PTG from ~1:30pm-3pm after lunch on Thursday in the Nova room. Please add topics you'd like to discuss during our cross project time to the etherpad in the Neutron section at L136: https://etherpad.openstack.org/p/nova-ptg-stein Based on the number of topics added, we can add more time to the session before lunch (~11:20am - lunch) if needed and do part 1 before lunch and part 2 after lunch. 
Cheers, -melanie From nate.johnston at redhat.com Mon Aug 27 20:07:53 2018 From: nate.johnston at redhat.com (Nate Johnston) Date: Mon, 27 Aug 2018 16:07:53 -0400 Subject: [openstack-dev] Appointing Slawek Kaplonski to the Neutron Drivers team In-Reply-To: References: Message-ID: <20180827200753.luytwqgjekzepbv2@bishop> On Mon, Aug 27, 2018 at 11:42:47AM -0500, Miguel Lavalle wrote: > Dear Neutron team, > > In order to help the Neutron Drivers team to perform its very important job > of guiding the community to evolve the OpenStack Networking architecture to > meet the needs of our current and future users [1], I have asked Slawek > Kaplonski to join it. Over the past few years, he has gained very valuable > experience with OpenStack Networking, both as a deployer and more recently > working with one of our key packagers. He played a paramount role in > implementing our QoS (Quality of Service) features, currently leading that > sub-team. He also leads the CI sub-team, making sure the prompt discovery > and fixing of bugs in our software. On top of that, he is one of our most > active reviewers, contributor of code to our reference implementation and > fixer of bugs. I am very confident in Slawek making great contributions to > the Neutron Drivers team. Congratulations Slawek, and thanks for your tireless work! Nate From dangtrinhnt at gmail.com Mon Aug 27 20:15:00 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 28 Aug 2018 05:15:00 +0900 Subject: [openstack-dev] [Freezer] Update freezer-core team In-Reply-To: References: Message-ID: Thanks Saad for coordinating this.
On Tue, Aug 28, 2018, 01:02 Saad Zaher wrote: > + Ruslan Aliev Just back from sick leave and he said he will be > contributing again to Freezer so he won't be deleted > > On Mon, Aug 27, 2018 at 10:59 AM Saad Zaher wrote: > >> Hello Freezer Team, >> >> We are going to do the following updates to the core team: >> >> Add >> - Trinh Nguyen >> - gengchc2 (New PTL) >> >> Remove the following members due to inactivity >> - yapeng Yang >> - Ruslan Aliev >> - Memo Garcia >> - Pierre Mathieu >> >> >> -------------------------- >> Best Regards, >> Saad! >> > > > -- > -------------------------- > Best Regards, > Saad! > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin.lu at huawei.com Mon Aug 27 20:16:41 2018 From: hongbin.lu at huawei.com (Hongbin Lu) Date: Mon, 27 Aug 2018 20:16:41 +0000 Subject: [openstack-dev] [neutron] Bug deputy report week August 20th - August 26th Message-ID: <0957CD8F4B55C0418161614FEC580D6B2FA25FDD@YYZEML701-CHM.china.huawei.com> Hi all, I was the bugs deputy for the week of August 20th - August 26th. Here's the summary of the bugs that were filed: Critical: * https://bugs.launchpad.net/neutron/+bug/1788185 Functional tests neutron.tests.functional.agent.l3.test_ha_router failing 100% times. Miguel Lavalle is assigned. High: * https://bugs.launchpad.net/neutron/+bug/1787919 Upgrade router to L3 HA broke IPv6. This bug occurs when creating a HA router or migrating a normal router to HA with dual-stack network. * https://bugs.launchpad.net/neutron/+bug/1788006 neutron_tempest_plugin DNS integration tests fail. Miguel Lavalle is assigned. 
Medium: * https://bugs.launchpad.net/neutron/+bug/1788556 Dhcp agent error reading lease file. Proposed fix: https://review.openstack.org/#/c/595235/ * https://bugs.launchpad.net/neutron/+bug/1788759 Firewall Logging does not work when changing port state to UP after restarting q-l3.service. Confirmed by @LongKB. * https://bugs.launchpad.net/neutron/+bug/1788865 neutron-openvswitch-agent interface monitor does not work if ovsdb-client generates warnings. Proposed fix: https://review.openstack.org/#/c/596717/ Low: * https://bugs.launchpad.net/neutron/+bug/1788023 Neutron does not form a mesh tunnel overlay between different ml2 drivers. This bug occurs in a mixed environment with OVS and linuxbridge with l2 population disabled. * https://bugs.launchpad.net/neutron/+bug/1788900 pci_passthrough_whitelist deprecated. A doc issue. * https://bugs.launchpad.net/neutron/+bug/1788936 Network address translation in Neutron references the wrong RFC in documentation. A doc issue. New: * https://bugs.launchpad.net/neutron/+bug/1788978 DPDK vxlan does not work. I failed to triage this bug. Escalating to neutron lieutenants to triage. RFEs: * https://bugs.launchpad.net/neutron/+bug/1788009 Neutron bridge name is not always set for ml2/ovs. * https://bugs.launchpad.net/neutron/+bug/1788012 Bridge name not set in vif:binding-details by ml2/linux-bridge. Invalid: * https://bugs.launchpad.net/devstack/+bug/1788184 Failed to bind port on host in case DEVSTACK_GATE_VIRT_DRIVER=fake. This is not a neutron bug but a configuration problem. * https://bugs.launchpad.net/neutron/+bug/1788045 Cannot delete security group rules with unicode chars in their description. The error happens on the client side; directly using the neutron REST API works fine. Incomplete: * https://bugs.launchpad.net/neutron/+bug/1787908 Cannot turn off arp_spoofing on linuxbridge ml2. Bug reporter was prompted to confirm if he/she had the correct configuration.
* https://bugs.launchpad.net/neutron/+bug/1788745 Resource ACCEPT LOG can only print once then get CookieNotFound. Bug reporter was prompted for reproducing steps. Duplicated: * https://bugs.launchpad.net/neutron/+bug/1788614 dvr floating IP not work. Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Mon Aug 27 21:23:07 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 27 Aug 2018 14:23:07 -0700 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: Message-ID: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> On Mon, 27 Aug 2018 10:31:50 -0500, Matt Riedemann wrote: > On 8/24/2018 7:36 AM, Chris Dent wrote: >> >> Over the past few days a few of us have been experimenting with >> extracting placement to its own repo, as has been discussed at >> length on this list, and in some etherpads: >> >> https://etherpad.openstack.org/p/placement-extract-stein >> https://etherpad.openstack.org/p/placement-extraction-file-notes >> >> As part of that, I've been doing some exploration to tease out the >> issues we're going to hit as we do it. None of this is work that >> will be merged, rather it is stuff to figure out what we need to >> know to do the eventual merging correctly and efficiently. >> >> Please note that doing that is just the near edge of a large >> collection of changes that will cascade in many ways to many >> projects, tools, distros, etc. The people doing this are aware of >> that, and the relative simplicity (and fairly immediate success) of >> these experiments is not misleading people into thinking "hey, no >> big deal". It's a big deal. >> >> There's a strategy now (described at the end of the first etherpad >> listed above) for trimming the nova history to create a thing which >> is placement. 
From the first run of that Ed created a github repo >> and I branched that to eventually create: >> >> https://github.com/EdLeafe/placement/pull/2 >> >> In that, all the placement unit and functional tests are now >> passing, and my placecat [1] integration suite also passes. >> >> That work has highlighted some gaps in the process for trimming >> history which will be refined to create another interim repo. We'll >> repeat this until the process is smooth, eventually resulting in an >> openstack/placement. > > We talked about the github strategy a bit in the placement meeting today > [1]. Without being involved in this technical extraction work for the > past few weeks, I came in with a different perspective on the end-game, > and it was not aligned with what Chris/Ed thought as far as how we get > to the official openstack/placement repo. > > At a high level, Ed's repo [2] is a fork of nova with large changes on > top using pull requests to do things like remove the non-placement nova > files, update import paths (because the import structure changes from > nova.api.openstack.placement to just placement), and then changes from > Chris [3] to get tests working. Then the idea was to just use that to > seed the openstack/placement repo and rather than review the changes > along the way*, people that care about what changed (like myself) would > see the tests passing and be happy enough. > > However, I disagree with this approach since it bypasses our community > code review system of using Gerrit and relying on a core team to approve > changes for the sake of expediency. > > What I would like to see are the changes that go into making the seed > repo and what gets it to passing tests done in gerrit like we do for > everything else. There are a couple of options on how this is done though: > > 1. Seed the openstack/placement repo with the filter_git_history.sh > script output as Ed has done here [4].
This would include moving the > placement files to the root of the tree and dropping nova-specific > files. Then make incremental changes in gerrit like with [5] and the > individual changes which make up Chris's big pull request [3]. I am > primarily interested in making sure there are not content changes > happening, only mechanical tree-restructuring type changes, stuff like > that. I'm asking for more changes in gerrit so they can be sanely > reviewed (per normal). > > 2. Eric took a slightly different tack in that he's OK with just a > couple of large changes (or even large patch sets within a single > change) in gerrit rather than ~30 individual changes. So that would be > more like at most 3 changes in gerrit for [4][5][3]. > > 3. The 3rd option is we just don't use gerrit at all and seed the > official repo with the results of Chris and Ed's work in Ed's repo in > github. Clearly this would be the fastest way to get us to a new repo > (at the expense of bucking community code review and development process > - is an exception worth it?). > > Option 1 would clearly be a drain on at least 2 nova cores to go through > the changes. I think Eric is on board for reviewing options 1 or 2 in > either case, but he prefers option 2. Since I'm throwing a wrench in the > works, I also need to stand up and review the changes if we go with > option 1 or 2. Jay said he'd review them but consider these reviews > lower priority. I expect we could get some help from some other nova > cores though, maybe not on all changes, but at least some (thinking > gibi, alex_xu, sfinucan). > > Any CI jobs would be non-voting while going through options 1 or 2 until > we get to a point that tests should finally be passing and we can make > them voting (it should be possible to control this within the repo > itself using zuul v3). 
> > I would like to know from others (nova core or otherwise) what they > would prefer, and if you are a nova core that wants option 1 (or 2) are > you willing to help review those incremental changes knowing it will be > a drain - but also realizing that we can't really let option 1 drag on > while we're doing stein feature development, so ideally this would be > done before the PTG. > > * Yes I realize I could be reviewing the github pull requests along the > way, but that's not really how we do code review in openstack. I think we should use the openstack review system (gerrit) for moving the code. We're moving a critical piece of nova to its own repo and I think it's worth having the review and history contained in the openstack review system. Using smaller changes that make it easy to see import vs content changes might make review faster than fewer, larger changes. The most important bit of all of this is making sure we don't break anything in the process for operators and users consuming nova and placement, and ensure the upgrade path from rocky => stein is tested in grenade. The steps I think we should take are: 1. We copy the placement code into the openstack/placement repo and have it passing all of its own unit and functional tests. 2. We have a stack of changes to zuul jobs that show nova working but deploying placement in devstack from the new repo instead of nova's repo. This includes the grenade job, ensuring that upgrade works. 3. When those pass, we merge them, effectively orphaning nova's copy of placement. Switch those jobs to voting. 4. Finally, we delete the orphaned code from nova (without needing to make any changes to non-placement-only test code -- code is truly orphaned). 
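To make the history-trimming part of step 1 concrete, it can be sketched with plain git on a throwaway repo — this is only an illustration of the mechanics (the real filter_git_history.sh handles many more paths and edge cases):

```shell
# Build a tiny throwaway repo that stands in for nova: one file inside
# the placement subtree, one outside it.
export FILTER_BRANCH_SQUELCH_WARNING=1
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir -p nova/api/openstack/placement
echo "handler" > nova/api/openstack/placement/handler.py
echo "compute" > nova/compute.py
git add -A
git commit -qm "seed history"

# Rewrite every commit, dropping paths outside the placement subtree;
# --prune-empty discards commits that end up with no changes at all.
git filter-branch -f --prune-empty --index-filter '
  git ls-files -z |
    grep -zv "^nova/api/openstack/placement/" |
    xargs -0 -r git rm -q --cached --ignore-unmatch
' HEAD

# Only the placement subtree survives in the rewritten history.
git ls-tree -r --name-only HEAD
```

A follow-up step (not shown) would then move the surviving files to the root of the tree and fix up import paths — the part being proposed as reviewable changes in gerrit.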
-melanie > [1] > http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-08-27-14.00.log.html#l-74 > [2] https://github.com/EdLeafe/placement > [3] https://github.com/EdLeafe/placement/pull/3 > [4] > https://github.com/EdLeafe/placement/commit/e3173faf59bd1453c3800b2bf57c2af8cfde1697 > [5] > https://github.com/EdLeafe/placement/commit/e984bef8587009378ea430dd1c12ca3e40a3c901 > From mriedemos at gmail.com Mon Aug 27 21:30:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 27 Aug 2018 16:30:46 -0500 Subject: [openstack-dev] [all] upgrade-checkers community wide goal - update Message-ID: <7e29b969-fa16-eee1-293f-53c761f79a74@gmail.com> I now at least have a board created for tracking this work [1]. I also started an etherpad [2] while I was working out some kinks with using StoryBoard. I wrote a docs patch [3] which will hopefully serve as a contributor guide for these checks. Please comment here or in the review if there are things missing from that document which you'd like to see added. It's hard to strike the right balance on the level of detail in docs like this, and I'm also not sure if it will be the best way to on-board people in other projects to how this is done. In the coming days I will flesh out the stories/tasks in storyboard for other projects to track their progress on the goal. After that I hope to start skimming through some other projects' upgrade release notes looking for examples of things that could have been added to automated upgrade check tooling. Again, let me know if you have any questions.
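For projects wondering what such an upgrade check looks like at a high level, the pattern is small: run each check, report, and exit with the worst result (0 success, 1 warning, 2 failure, the convention `nova-status upgrade check` established). The sketch below is purely illustrative — the check names are invented placeholders, and the real contributor guidance lives in the docs patch:

```shell
# Hypothetical "<project>-status upgrade check" skeleton; the individual
# checks here are invented for illustration.
worst=0

report() {
  # report <check-name> <detail> <code>; remember the most severe code seen.
  echo "$1: $2 (code $3)"
  if [ "$3" -gt "$worst" ]; then
    worst=$3
  fi
}

check_config() { report "config" "no removed options in use" 0; }
check_data()   { report "data migrations" "3 records still to migrate" 1; }

check_config
check_data
echo "overall status: $worst"
```

A real implementation would exit with `$worst` so deployment tooling can gate an upgrade on the result.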
[1] https://storyboard.openstack.org/#!/board/107 [2] https://etherpad.openstack.org/p/goal-support-pre-upgrade-checks [3] https://review.openstack.org/#/c/596902/ -- Thanks, Matt From haleyb.dev at gmail.com Mon Aug 27 21:38:09 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 27 Aug 2018 17:38:09 -0400 Subject: [openstack-dev] Appointing Slawek Kaplonski to the Neutron Drivers team In-Reply-To: References: Message-ID: Congrats Slawek! On 08/27/2018 12:42 PM, Miguel Lavalle wrote: > Dear Neutron team, > > In order to help the Neutron Drivers team to perform its very important > job of guiding the community to evolve the OpenStack Networking > architecture to meet the needs of our current and future users [1], I > have asked Slawek Kaplonski to join it. Over the past few years, he has > gained very valuable experience with OpenStack Networking, both as a > deployer  and more recently working with one of our key packagers. He > played a paramount role in implementing our QoS (Quality of Service) > features, currently leading that sub-team. He also leads the CI > sub-team, making sure the prompt discovery and fixing of bugs in our > software. On top of that, he is one of our most active reviewers, > contributor of code to our reference implementation and fixer of bugs. I > am very confident in Slawek making great contributions to the Neutron > Drivers team. 
> > Best regards > > Miguel > > [1] > https://docs.openstack.org/neutron/latest/contributor/policies/neutron-teams.html#drivers-team > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From kennelson11 at gmail.com Mon Aug 27 22:46:13 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 27 Aug 2018 15:46:13 -0700 Subject: [openstack-dev] [Freezer] Reactivate the team In-Reply-To: References: <201808271025487809975@zte.com.cn> Message-ID: Hello, Here is the change that adds Freezer to StoryBoard[1]. If we can get the PTL's +1, we can move forward with the migration. Does Friday work for you all? -Kendall (diablo_rojo) [1] https://review.openstack.org/#/c/596918/ On Sun, Aug 26, 2018 at 7:59 PM Trinh Nguyen wrote: > @Kendall: please help the Freezer team. Thanks. > > @gengchc2: I think you should send an email to TC and ask for help. The > Freezer core seems to be inactive. > > > *Trinh Nguyen *| Founder & Chief Architect > > > > *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * > > > > On Mon, Aug 27, 2018 at 11:26 AM wrote: > >> Hi, Kendall: >> >> I agree to migrate the freezer project from Launchpad to Storyboard, Thanks. >> >> By the way, when will privileges be granted for gengchc2 on Launchpad and >> the project Gerrit repositories? >> >> >> >> Best regards, >> >> gengchc2 >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From longkb at vn.fujitsu.com Tue Aug 28 00:36:58 2018 From: longkb at vn.fujitsu.com (Kim Bao, Long) Date: Tue, 28 Aug 2018 00:36:58 +0000 Subject: [openstack-dev] Appointing Slawek Kaplonski to the Neutron Drivers team In-Reply-To: References: Message-ID: <5046e490f79a4db18fe6d09bcc012ef1@G07SGEXCMSGPS05.g07.fujitsu.local> Congrats Slawek!!!! Thanks for the great work. Cheer!!!!!! LongKB From: Miguel Lavalle [mailto:miguel at mlavalle.com] Sent: Monday, August 27, 2018 11:43 PM To: OpenStack Development Mailing List Subject: [openstack-dev] Appointing Slawek Kaplonski to the Neutron Drivers team Dear Neutron team, In order to help the Neutron Drivers team to perform its very important job of guiding the community to evolve the OpenStack Networking architecture to meet the needs of our current and future users [1], I have asked Slawek Kaplonski to join it. Over the past few years, he has gained very valuable experience with OpenStack Networking, both as a deployer and more recently working with one of our key packagers. He played a paramount role in implementing our QoS (Quality of Service) features, currently leading that sub-team. He also leads the CI sub-team, making sure the prompt discovery and fixing of bugs in our software. On top of that, he is one of our most active reviewers, contributor of code to our reference implementation and fixer of bugs. I am very confident in Slawek making great contributions to the Neutron Drivers team. Best regards Miguel [1] https://docs.openstack.org/neutron/latest/contributor/policies/neutron-teams.html#drivers-team -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From liuyulong.xa at gmail.com Tue Aug 28 00:57:50 2018 From: liuyulong.xa at gmail.com (LIU Yulong) Date: Tue, 28 Aug 2018 08:57:50 +0800 Subject: [openstack-dev] Appointing Slawek Kaplonski to the Neutron Drivers team In-Reply-To: <5046e490f79a4db18fe6d09bcc012ef1@G07SGEXCMSGPS05.g07.fujitsu.local> References: <5046e490f79a4db18fe6d09bcc012ef1@G07SGEXCMSGPS05.g07.fujitsu.local> Message-ID: Congratulations Slawek! Thanks for the remarkable work for the neutron team and community. On Tue, Aug 28, 2018 at 8:38 AM Kim Bao, Long wrote: > Congrats Slawek!!!! > Thanks for the great work. Cheer!!!!!! > > LongKB > > > > *From:* Miguel Lavalle [mailto:miguel at mlavalle.com] > *Sent:* Monday, August 27, 2018 11:43 PM > *To:* OpenStack Development Mailing List < > openstack-dev at lists.openstack.org> > *Subject:* [openstack-dev] Appointing Slawek Kaplonski to the Neutron > Drivers team > > > > Dear Neutron team, > > > > In order to help the Neutron Drivers team to perform its very important > job of guiding the community to evolve the OpenStack Networking > architecture to meet the needs of our current and future users [1], I have > asked Slawek Kaplonski to join it. Over the past few years, he has gained > very valuable experience with OpenStack Networking, both as a deployer and > more recently working with one of our key packagers. He played a paramount > role in implementing our QoS (Quality of Service) features, currently > leading that sub-team. He also leads the CI sub-team, making sure the > prompt discovery and fixing of bugs in our software. On top of that, he is > one of our most active reviewers, contributor of code to our reference > implementation and fixer of bugs. I am very confident in Slawek making > great contributions to the Neutron Drivers team. 
> > > > Best regards > > > > Miguel > > > > [1] > https://docs.openstack.org/neutron/latest/contributor/policies/neutron-teams.html#drivers-team > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Tue Aug 28 05:15:00 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 28 Aug 2018 15:15:00 +1000 Subject: [openstack-dev] [all][election] TC Election Season Message-ID: <20180828051500.GM26778@thor.bakeyournoodle.com> Election details: https://governance.openstack.org/election/ Please read the stipulations and timelines for candidates and electorate contained in this governance documentation. There will be further announcements posted to the mailing list as action is required from the electorate or candidates. This email is for information purposes only. If you have any questions which you feel affect others please reply to this email thread. If you have any questions that you which to discuss in private please email any of the election officials[1] so that we may address your concerns. Thank you, [1] https://governance.openstack.org/election/#election-officials Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jlibosva at redhat.com Tue Aug 28 07:11:53 2018 From: jlibosva at redhat.com (Jakub Libosvar) Date: Tue, 28 Aug 2018 09:11:53 +0200 Subject: [openstack-dev] Appointing Slawek Kaplonski to the Neutron Drivers team In-Reply-To: References: Message-ID: <15a96cd4-670f-c4fe-e757-1a6de3cb4217@redhat.com> Congrats Slawek! I'm sure you'll rock as a driver too! 
Kuba On 27/08/2018 18:42, Miguel Lavalle wrote: > Dear Neutron team, > > In order to help the Neutron Drivers team to perform its very important job > of guiding the community to evolve the OpenStack Networking architecture to > meet the needs of our current and future users [1], I have asked Slawek > Kaplonski to join it. Over the past few years, he has gained very valuable > experience with OpenStack Networking, both as a deployer and more recently > working with one of our key packagers. He played a paramount role in > implementing our QoS (Quality of Service) features, currently leading that > sub-team. He also leads the CI sub-team, making sure the prompt discovery > and fixing of bugs in our software. On top of that, he is one of our most > active reviewers, contributor of code to our reference implementation and > fixer of bugs. I am very confident in Slawek making great contributions to > the Neutron Drivers team. > > Best regards > > Miguel > > [1] > https://docs.openstack.org/neutron/latest/contributor/policies/neutron-teams.html#drivers-team > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dangtrinhnt at gmail.com Tue Aug 28 08:03:44 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 28 Aug 2018 17:03:44 +0900 Subject: [openstack-dev] [Searchlight] Team meeting next week In-Reply-To: References: Message-ID: For those who want to follow up previous meetings, please refer to this Etherpad: https://etherpad.openstack.org/p/search-team-meeting-agenda Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Mon, Aug 27, 2018 at 4:10 PM Trinh Nguyen wrote: > Hi team, > > This is a kind reminder of our meeting next Thursday, 15:00 UTC. 
Please > see below for meeting details. > > Bests, > > *Trinh Nguyen *| Founder & Chief Architect > > > > *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * > > > > On Sat, Aug 25, 2018 at 1:11 PM Trinh Nguyen > wrote: > >> Dear team, >> >> I would like to organize a team meeting on Thursday next week: >> >> - Date: 30 August 2018 >> - Time: 15:00 UTC >> - Channel: #openstack-meeting-4 >> >> All existing core members and new contributors are welcome. >> >> Here is the Searchlight's Etherpad for Stein, all ideas are welcomed: >> >> https://etherpad.openstack.org/p/searchlight-stein-ptg >> >> Please reply or ping me on IRC (#openstack-searchlight, dangtrinhnt) if >> you want to join. >> >> Bests, >> >> *Trinh Nguyen *| Founder & Chief Architect >> >> >> >> *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From naichuan.sun at citrix.com Tue Aug 28 08:17:28 2018 From: naichuan.sun at citrix.com (Naichuan Sun) Date: Tue, 28 Aug 2018 08:17:28 +0000 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update Message-ID: Hi, experts, XenServer CI failed frequently with an error "No valid host was found. " for more than a week. I think it is cause by placement update. 
It looks like `_get_provider_ids_matching` returns an empty result when gathering allocation candidates, but the filter statements look good (vcpu/memory/disk):

coalesce(usage_vcpu.used, :coalesce_1) + :coalesce_2
    <= (inv_vcpu.total - inv_vcpu.reserved) * inv_vcpu.allocation_ratio
    AND inv_vcpu.min_unit <= :min_unit_1
    AND inv_vcpu.max_unit >= :max_unit_1
    AND :step_size_1 % inv_vcpu.step_size = :param_1,

coalesce(usage_memory_mb.used, :coalesce_1) + :coalesce_2
    <= (inv_memory_mb.total - inv_memory_mb.reserved) * inv_memory_mb.allocation_ratio
    AND inv_memory_mb.min_unit <= :min_unit_1
    AND inv_memory_mb.max_unit >= :max_unit_1
    AND :step_size_1 % inv_memory_mb.step_size = :param_1,

coalesce(usage_disk_gb.used, :coalesce_1) + :coalesce_2
    <= (inv_disk_gb.total - inv_disk_gb.reserved) * inv_disk_gb.allocation_ratio
    AND inv_disk_gb.min_unit <= :min_unit_1
    AND inv_disk_gb.max_unit >= :max_unit_1
    AND :step_size_1 % inv_disk_gb.step_size = :param_1

Also, the database looks good:

mysql> select * from inventories;
+---------------------+---------------------+----+----------------------+-------------------+-------+----------+----------+----------+-----------+------------------+
| created_at          | updated_at          | id | resource_provider_id | resource_class_id | total | reserved | min_unit | max_unit | step_size | allocation_ratio |
+---------------------+---------------------+----+----------------------+-------------------+-------+----------+----------+----------+-----------+------------------+
| 2018-08-27 10:14:12 | 2018-08-27 10:16:11 |  1 |                    1 |                 0 |    24 |        0 |        1 |       24 |         1 |                0 |
| 2018-08-27 10:14:12 | 2018-08-27 10:16:11 |  2 |                    1 |                 1 | 98293 |      512 |        1 |    98293 |         1 |                0 |
| 2018-08-27 10:14:12 | 2018-08-27 10:16:11 |  3 |                    1 |                 2 |   450 |        0 |        1 |      450 |         1 |                2 |
+---------------------+---------------------+----+----------------------+-------------------+-------+----------+----------+----------+-----------+------------------+
3 rows in set (0.00 sec)

mysql> select * from resource_providers;
+---------------------+---------------------+----+--------------------------------------+--------------+------------+----------+------------------+--------------------+
| created_at          | updated_at          | id | uuid                                 | name         | generation | can_host | root_provider_id | parent_provider_id |
+---------------------+---------------------+----+--------------------------------------+--------------+------------+----------+------------------+--------------------+
| 2018-08-27 10:14:11 | 2018-08-27 10:16:11 |  1 | cb831119-c68f-47ac-92ba-0f19c1a56b31 | xrtmia-03-11 |          2 | NULL     |                1 | NULL               |
+---------------------+---------------------+----+--------------------------------------+--------------+------------+----------+------------------+--------------------+
1 row in set (0.00 sec)

It is a alo environment deployed by devstack. Does anyone have suggestions about this? Thank you very much.

BR.
Naichuan Sun
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dtantsur at redhat.com Tue Aug 28 08:30:11 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Tue, 28 Aug 2018 10:30:11 +0200
Subject: [openstack-dev] [TripleO] Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad)
In-Reply-To: References: Message-ID: <49b9bee9-687a-1e9e-d9e4-f8f15d21aa83@redhat.com>

On 08/22/2018 03:26 PM, Jiri Tomasek wrote:
> Hi,
>
> - Plan and template management in git.
>
>   This could be an iterative step towards eliminating Swift in the undercloud.
>   Swift seemed like a natural choice at the time because it was an existing
>   OpenStack service. However, I think git would do a better job at tracking
>   history and comparing changes and is much more lightweight than Swift. We've
>   been managing the config-download directory as a git repo, and I like this
>   direction. For now, we are just putting the whole git repo in Swift, but I
>   wonder if it makes sense to consider eliminating Swift entirely.
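Returning to the XenServer CI report above: one detail in the pasted inventories table stands out — allocation_ratio is 0 for the first two rows (VCPU and MEMORY_MB), which makes the quoted capacity predicate unsatisfiable no matter how empty the host is. A minimal replay of that predicate in Python (the request sizes below are illustrative, not the CI's actual flavor):

```python
# Replay of the SQL capacity predicate from the quoted query:
#   coalesce(used, 0) + requested <= (total - reserved) * allocation_ratio
#   AND min_unit <= requested <= max_unit AND requested % step_size == 0
def capacity_ok(total, reserved, allocation_ratio, used, requested,
                min_unit, max_unit, step_size):
    return (used + requested <= (total - reserved) * allocation_ratio
            and min_unit <= requested <= max_unit
            and requested % step_size == 0)

# (total, reserved, allocation_ratio) taken from the pasted inventories rows.
vcpu   = dict(total=24,    reserved=0,   allocation_ratio=0)   # id 1
memory = dict(total=98293, reserved=512, allocation_ratio=0)   # id 2
disk   = dict(total=450,   reserved=0,   allocation_ratio=2)   # id 3

print(capacity_ok(used=0, requested=1,   min_unit=1, max_unit=24,    step_size=1, **vcpu))    # -> False
print(capacity_ok(used=0, requested=512, min_unit=1, max_unit=98293, step_size=1, **memory))  # -> False
print(capacity_ok(used=0, requested=10,  min_unit=1, max_unit=450,   step_size=1, **disk))    # -> True
```

If the placement update is somehow resetting those ratios to 0, that alone would produce "No valid host was found" for every request that needs VCPU or MEMORY_MB.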
We need to >   consider the scale of managing thousands of plans for separate edge >   deployments. > >   I also think this would be a step towards undercloud simplification. > > > +1, we need to identify how much this affects the existing API and overall user > experience > for managing deployment plans. Currentl plan management options we support are: > - create plan from default files (/usr/share/tht...) > - create/update plan from local directory > - create/update plan by providing tarball > - create/update plan from remote git repository > > Ian has been working on similar efforts towards performance improvements [2], It > would be good to take this a step further and evaluate possibility to eliminate > Swift entirely. > > [2] https://review.openstack.org/#/c/581153/ We need to do something about ironic-inspector then: it currently depends on swift for storing collected data. Fortunately, there is a spec to fix it, but it hasn't been our team's priority. Reviews are welcome: https://review.openstack.org/#/c/587698/ Dmitry > > -- Jirka > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dangtrinhnt at gmail.com Tue Aug 28 08:36:52 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 28 Aug 2018 17:36:52 +0900 Subject: [openstack-dev] [Searchlight] Team meeting next week In-Reply-To: References: Message-ID: Hi Manh, 15:00 UTC is 22:00 in your time zone. Bests, On Tue, Aug 28, 2018, 17:26 Dinh Manh wrote: > Dear All. > I'm very sorry for my lately reply, but the time of 15:00 UTC is my office > work time so i'm afraid that i can't join with team in that time. > Very sorry. > > Best regards. 
> > > *-----------------------------------------------------------------------------------------------------------------------* > > *Đinh Văn Mạnh* > > Phone: 0167 6513 816 > > Mail: *manhdinh1994 at gmail.com * > > > Vào Th 2, 27 thg 8, 2018 vào lúc 14:10 Trinh Nguyen < > dangtrinhnt at gmail.com> đã viết: > >> Hi team, >> >> This is a kind reminder of our meeting next Thursday, 15:00 UTC. Please >> see below for meeting details. >> >> Bests, >> >> *Trinh Nguyen *| Founder & Chief Architect >> >> >> >> *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * >> >> >> >> On Sat, Aug 25, 2018 at 1:11 PM Trinh Nguyen >> wrote: >> >>> Dear team, >>> >>> I would like to organize a team meeting on Thursday next week: >>> >>> - Date: 30 August 2018 >>> - Time: 15:00 UTC >>> - Channel: #openstack-meeting-4 >>> >>> All existing core members and new contributors are welcome. >>> >>> Here is the Searchlight's Etherpad for Stein, all ideas are welcomed: >>> >>> https://etherpad.openstack.org/p/searchlight-stein-ptg >>> >>> Please reply or ping me on IRC (#openstack-searchlight, dangtrinhnt) if >>> you want to join. >>> >>> Bests, >>> >>> *Trinh Nguyen *| Founder & Chief Architect >>> >>> >>> >>> *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz >>> * >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sfinucan at redhat.com Tue Aug 28 08:45:36 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 28 Aug 2018 09:45:36 +0100 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> Message-ID: <4be5a401eba05408fb68ca08985514161382f318.camel@redhat.com> On Mon, 2018-08-27 at 14:23 -0700, melanie witt wrote: > On Mon, 27 Aug 2018 10:31:50 -0500, Matt Riedemann wrote: > > On 8/24/2018 7:36 AM, Chris Dent wrote: > > > > > > Over the past few days a few of us have been experimenting with > > > extracting placement to its own repo, as has been discussed at > > > length on this list, and in some etherpads: > > > > > > https://etherpad.openstack.org/p/placement-extract-stein > > > https://etherpad.openstack.org/p/placement-extraction-file-notes > > > > > > As part of that, I've been doing some exploration to tease out the > > > issues we're going to hit as we do it. None of this is work that > > > will be merged, rather it is stuff to figure out what we need to > > > know to do the eventual merging correctly and efficiently. > > > > > > Please note that doing that is just the near edge of a large > > > collection of changes that will cascade in many ways to many > > > projects, tools, distros, etc. The people doing this are aware of > > > that, and the relative simplicity (and fairly immediate success) of > > > these experiments is not misleading people into thinking "hey, no > > > big deal". It's a big deal. > > > > > > There's a strategy now (described at the end of the first etherpad > > > listed above) for trimming the nova history to create a thing which > > > is placement. 
From the first run of that Ed created a github repo > > > and I branched that to eventually create: > > > > > > https://github.com/EdLeafe/placement/pull/2 > > > > > > In that, all the placement unit and functional tests are now > > > passing, and my placecat [1] integration suite also passes. > > > > > > That work has highlighted some gaps in the process for trimming > > > history which will be refined to create another interim repo. We'll > > > repeat this until the process is smooth, eventually resulting in an > > > openstack/placement. > > > > We talked about the github strategy a bit in the placement meeting today > > [1]. Without being involved in this technical extraction work for the > > past few weeks, I came in with a different perspective on the end-game, > > and it was not aligned with what Chris/Ed thought as far as how we get > > to the official openstack/placement repo. > > > > At a high level, Ed's repo [2] is a fork of nova with large changes on > > top using pull requests to do things like remove the non-placement nova > > files, update import paths (because the import structure changes from > > nova.api.openstack.placement to just placement), and then changes from > > Chris [3] to get tests working. Then the idea was to just use that to > > seed the openstack/placement repo and rather than review the changes > > along the way*, people that care about what changed (like myself) would > > see the tests passing and be happy enough. > > > > However, I disagree with this approach since it bypasses our community > > code review system of using Gerrit and relying on a core team to approve > > changes at the sake of expediency. > > > > What I would like to see are the changes that go into making the seed > > repo and what gets it to passing tests done in gerrit like we do for > > everything else. There are a couple of options on how this is done though: > > > > 1. 
Seed the openstack/placement repo with the filter_git_history.sh > > script output as Ed has done here [4]. This would include moving the > > placement files to the root of the tree and dropping nova-specific > > files. Then make incremental changes in gerrit like with [5] and the > > individual changes which make up Chris's big pull request [3]. I am > > primarily interested in making sure there are not content changes > > happening, only mechanical tree-restructuring type changes, stuff like > > that. I'm asking for more changes in gerrit so they can be sanely > > reviewed (per normal). > > > > 2. Eric took a slightly different tack in that he's OK with just a > > couple of large changes (or even large patch sets within a single > > change) in gerrit rather than ~30 individual changes. So that would be > > more like at most 3 changes in gerrit for [4][5][3]. > > > > 3. The 3rd option is we just don't use gerrit at all and seed the > > official repo with the results of Chris and Ed's work in Ed's repo in > > github. Clearly this would be the fastest way to get us to a new repo > > (at the expense of bucking community code review and development process > > - is an exception worth it?). > > > > Option 1 would clearly be a drain on at least 2 nova cores to go through > > the changes. I think Eric is on board for reviewing options 1 or 2 in > > either case, but he prefers option 2. Since I'm throwing a wrench in the > > works, I also need to stand up and review the changes if we go with > > option 1 or 2. Jay said he'd review them but consider these reviews > > lower priority. I expect we could get some help from some other nova > > cores though, maybe not on all changes, but at least some (thinking > > gibi, alex_xu, sfinucan). I'm still figuring out what I'll be focusing on this cycle so I do have time to review this if necessary. 
That being said, I do think there is merit in having the future placement-core team "own" this initiative from start to finish and would prefer this approach. That being said, regardless of whether I'm reviewing this or not, I would much rather option 1. Reviewing many small patches is almost always easier than reviewing one big one (Mox removal jumps to mind) and this applies both at the time of review and later on, when you're trying to figure out what on earth changed in a given commit. We don't want a return to the bad old days [1][2], even temporarily :) > > Any CI jobs would be non-voting while going through options 1 or 2 until > > we get to a point that tests should finally be passing and we can make > > them voting (it should be possible to control this within the repo > > itself using zuul v3). > > > > I would like to know from others (nova core or otherwise) what they > > would prefer, and if you are a nova core that wants option 1 (or 2) are > > you willing to help review those incremental changes knowing it will be > > a drain - but also realizing that we can't really let option 1 drag on > > while we're doing stein feature development, so ideally this would be > > done before the PTG. > > > > * Yes I realize I could be reviewing the github pull requests along the > > way, but that's not really how we do code review in openstack. > > I think we should use the openstack review system (gerrit) for moving > the code. We're moving a critical piece of nova to its own repo and I > think it's worth having the review and history contained in the > openstack review system. > > Using smaller changes that make it easy to see import vs content changes > might make review faster than fewer, larger changes. > > The most important bit of all of this is making sure we don't break > anything in the process for operators and users consuming nova and > placement, and ensure the upgrade path from rocky => stein is tested in > grenade. 
> > The steps I think we should take are: > > 1. We copy the placement code into the openstack/placement repo and have > it passing all of its own unit and functional tests. > > 2. We have a stack of changes to zuul jobs that show nova working but > deploying placement in devstack from the new repo instead of nova's > repo. This includes the grenade job, ensuring that upgrade works. I'm guessing there would need to be changes to Devstack itself, outside of the zuul jobs? > 3. When those pass, we merge them, effectively orphaning nova's copy of > placement. Switch those jobs to voting. > > 4. Finally, we delete the orphaned code from nova (without needing to > make any changes to non-placement-only test code -- code is truly orphaned). The one point above aside, ++ to all of this. Stephen [1] https://github.com/openstack/nova/commit/d940fa4619584dac967176d045407f0919da0a74 [2] [1] is in jest. Not picking on anyone :) > -melanie > > > [1] http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-08-27-14.00.log.html#l-74 > > [2] https://github.com/EdLeafe/placement > > [3] https://github.com/EdLeafe/placement/pull/3 > > [4] https://github.com/EdLeafe/placement/commit/e3173faf59bd1453c3800b2bf57c2af8cfde1697 > > [5] https://github.com/EdLeafe/placement/commit/e984bef8587009378ea430dd1c12ca3e40a3c901 From dirk at dmllr.de Tue Aug 28 08:47:10 2018 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Tue, 28 Aug 2018 10:47:10 +0200 Subject: [openstack-dev] [rpm-packaging] Step down as a reviewer In-Reply-To: <0a42dac71ee047ff9f4b1ef87114f019c617d6b8.camel@suse.de> References: <0a42dac71ee047ff9f4b1ef87114f019c617d6b8.camel@suse.de> Message-ID: Hi Alberto, Am Mo., 13. Aug. 2018 um 11:08 Uhr schrieb Alberto Planas Dominguez : > I will change my role at SUSE at the end of the month (August 2018), so > I request to be removed from the core position on those projects. 
Sad to see you go, but I appreciate the heads up and wish you all the best at the new position. I've removed you from the list of core's as request. Greetings, Dirk From soulxu at gmail.com Tue Aug 28 10:22:01 2018 From: soulxu at gmail.com (Alex Xu) Date: Tue, 28 Aug 2018 18:22:01 +0800 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: Message-ID: 2018-08-27 23:31 GMT+08:00 Matt Riedemann : > On 8/24/2018 7:36 AM, Chris Dent wrote: > >> >> Over the past few days a few of us have been experimenting with >> extracting placement to its own repo, as has been discussed at >> length on this list, and in some etherpads: >> >> https://etherpad.openstack.org/p/placement-extract-stein >> https://etherpad.openstack.org/p/placement-extraction-file-notes >> >> As part of that, I've been doing some exploration to tease out the >> issues we're going to hit as we do it. None of this is work that >> will be merged, rather it is stuff to figure out what we need to >> know to do the eventual merging correctly and efficiently. >> >> Please note that doing that is just the near edge of a large >> collection of changes that will cascade in many ways to many >> projects, tools, distros, etc. The people doing this are aware of >> that, and the relative simplicity (and fairly immediate success) of >> these experiments is not misleading people into thinking "hey, no >> big deal". It's a big deal. >> >> There's a strategy now (described at the end of the first etherpad >> listed above) for trimming the nova history to create a thing which >> is placement. From the first run of that Ed created a github repo >> and I branched that to eventually create: >> >> https://github.com/EdLeafe/placement/pull/2 >> >> In that, all the placement unit and functional tests are now >> passing, and my placecat [1] integration suite also passes. 
>> >> That work has highlighted some gaps in the process for trimming >> history which will be refined to create another interim repo. We'll >> repeat this until the process is smooth, eventually resulting in an >> openstack/placement. >> > > We talked about the github strategy a bit in the placement meeting today > [1]. Without being involved in this technical extraction work for the past > few weeks, I came in with a different perspective on the end-game, and it > was not aligned with what Chris/Ed thought as far as how we get to the > official openstack/placement repo. > > At a high level, Ed's repo [2] is a fork of nova with large changes on top > using pull requests to do things like remove the non-placement nova files, > update import paths (because the import structure changes from > nova.api.openstack.placement to just placement), and then changes from > Chris [3] to get tests working. Then the idea was to just use that to seed > the openstack/placement repo and rather than review the changes along the > way*, people that care about what changed (like myself) would see the tests > passing and be happy enough. > > However, I disagree with this approach since it bypasses our community > code review system of using Gerrit and relying on a core team to approve > changes at the sake of expediency. > > What I would like to see are the changes that go into making the seed repo > and what gets it to passing tests done in gerrit like we do for everything > else. There are a couple of options on how this is done though: > > 1. Seed the openstack/placement repo with the filter_git_history.sh script > output as Ed has done here [4]. This would include moving the placement > files to the root of the tree and dropping nova-specific files. Then make > incremental changes in gerrit like with [5] and the individual changes > which make up Chris's big pull request [3]. 
I am primarily interested in > making sure there are not content changes happening, only mechanical > tree-restructuring type changes, stuff like that. I'm asking for more > changes in gerrit so they can be sanely reviewed (per normal). > > 2. Eric took a slightly different tack in that he's OK with just a couple > of large changes (or even large patch sets within a single change) in > gerrit rather than ~30 individual changes. So that would be more like at > most 3 changes in gerrit for [4][5][3]. > > 3. The 3rd option is we just don't use gerrit at all and seed the official > repo with the results of Chris and Ed's work in Ed's repo in github. > Clearly this would be the fastest way to get us to a new repo (at the > expense of bucking community code review and development process - is an > exception worth it?). > > Option 1 would clearly be a drain on at least 2 nova cores to go through > the changes. I think Eric is on board for reviewing options 1 or 2 in > either case, but he prefers option 2. Since I'm throwing a wrench in the > works, I also need to stand up and review the changes if we go with option > 1 or 2. Jay said he'd review them but consider these reviews lower > priority. I expect we could get some help from some other nova cores > though, maybe not on all changes, but at least some (thinking gibi, > alex_xu, sfinucan). > I can help some. And yes, small change is good than huge change. > > Any CI jobs would be non-voting while going through options 1 or 2 until > we get to a point that tests should finally be passing and we can make them > voting (it should be possible to control this within the repo itself using > zuul v3). 
> > I would like to know from others (nova core or otherwise) what they would > prefer, and if you are a nova core that wants option 1 (or 2) are you > willing to help review those incremental changes knowing it will be a drain > - but also realizing that we can't really let option 1 drag on while we're > doing stein feature development, so ideally this would be done before the > PTG. > > * Yes I realize I could be reviewing the github pull requests along the > way, but that's not really how we do code review in openstack. > > [1] http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/ > nova_scheduler.2018-08-27-14.00.log.html#l-74 > [2] https://github.com/EdLeafe/placement > [3] https://github.com/EdLeafe/placement/pull/3 > [4] https://github.com/EdLeafe/placement/commit/e3173faf59bd1453 > c3800b2bf57c2af8cfde1697 > [5] https://github.com/EdLeafe/placement/commit/e984bef858700937 > 8ea430dd1c12ca3e40a3c901 > > -- > > Thanks, > > Matt > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Aug 28 11:20:37 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 28 Aug 2018 12:20:37 +0100 (BST) Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> Message-ID: On Mon, 27 Aug 2018, melanie witt wrote: > I think we should use the openstack review system (gerrit) for moving the > code. We're moving a critical piece of nova to its own repo and I think it's > worth having the review and history contained in the openstack review system. 
This seems a reasonable enough strategy, in broad strokes. I want to be sure that we're all actually in agreement on the details, as we've had a few false starts and I think some of the details are getting confused in the shuffle and the general busy-ness in progress. Is anyone aware of anyone who hasn't commented yet that should? If you are, please poke them so we don't surprise them.

> Using smaller changes that make it easy to see import vs content changes
> might make review faster than fewer, larger changes.

I _think_ we ought to be able to use the existing commits from the run-throughs-to-passing-tests already done, but if we use the strategy described below it doesn't really matter: the TDD approach (after fixing paths and test config) is pretty fast.

> The most important bit of all of this is making sure we don't break anything
> in the process for operators and users consuming nova and placement, and
> ensure the upgrade path from rocky => stein is tested in grenade.

This is one of the areas where pretty active support from all of nova will be required: getting zuul, upgrade paths, and the like clearly defined and executed.

> The steps I think we should take are:
>
> 1. We copy the placement code into the openstack/placement repo and have it
> passing all of its own unit and functional tests.

To break that down to more detail, how does this look? (note the ALL CAPS where more than acknowledgement is requested)

1.1  Run the git filter-branch on a copy of nova
     1.1.1  Add missing files to the file list:
            1.1.1.1  .gitignore
            1.1.1.2  # ANYTHING ELSE?
1.2  Push -f that thing, acknowledged to be broken, to a seed repo on github
     (Ed's repo should be fine)
1.3  Do the repo creation bits described in
     https://docs.openstack.org/infra/manual/creators.html
     to seed openstack/placement
     1.3.1  Set zuul jobs: either noop jobs, or non-voting basic functional
            and unit # INPUT DESIRED HERE
1.4  Once the repo exists with some content, incrementally bring it to working:
     1.4.1   Update tox.ini to be placement oriented
     1.4.2   Update setup.cfg to be placement oriented
     1.4.3   Correct .stestr.conf
     1.4.4   Move the base of placement to the "right" place
     1.4.5   Move unit and functional tests to the right place
     1.4.6   Do automated path fixings
     1.4.7   Set up the translation domain and i18n.py correctly
     1.4.8   Trim placement/conf to just the conf settings required (api,
             base, database, keystone, paths, placement)
     1.4.9   Remove database files that are not relevant (the db api is not
             used by placement)
     1.4.10  Fix the Database Fixture to be just one database
     1.4.11  Disable migrations that can't work (because of dependencies on
             nova code; 014 and 030 are examples) # INPUT DESIRED HERE AND
             ON SCHEMA MIGRATIONS IN GENERAL
     1.4.12  Incrementally get tests working
     1.4.13  Fix pep8
1.5  Make the zuul pep8, unit and functional jobs voting
1.6  Create tools for db table sync/create
1.7  Concurrently go to step 2, where the harder magic happens.
1.8  Find and remove dead code (there will be some).
1.9  Tune up and confirm docs
1.10 Grep for remaining "nova" (as string and spirit) and fix

Item 1.4.12 may deserve some discussion. The several times I've done this before, the strategy I've used is to be test driven: run either functional or unit tests, find and fix one of the errors revealed, commit, move on. This strategy has worked very well for me because of the "test driven" part, but I'm hesitant to use it if reviewers are going to get to a patch and say "why didn't you also change X?" The answer to that question is "because this is incremental and test driven and the tests didn't demand that change (yet)". Sometimes that will mean that things of the same class of change land in different commits. Are people okay with that, and willing to commit to being okay with that answer in reviews?
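The "automated path fixings" item (1.4.6) — rewriting nova.api.openstack.placement references to the new top-level placement package — can be sketched as a simple tree-walk. This is an illustration only; the exact substitution and file set used by the real extraction tooling may differ:

```python
# Sketch of step 1.4.6: rewrite the old import path to the new package name
# in every .py file under a tree. Illustrative, not the actual tooling.
import re
from pathlib import Path

OLD_IMPORT = re.compile(r"\bnova\.api\.openstack\.placement\b")

def fix_paths(root: Path) -> int:
    """Rewrite old import paths under ``root``; return number of files changed."""
    changed = 0
    for path in sorted(root.rglob("*.py")):
        text = path.read_text()
        new = OLD_IMPORT.sub("placement", text)
        if new != text:
            path.write_text(new)
            changed += 1
    return changed
```

Running this turns e.g. `from nova.api.openstack.placement import wsgi` into `from placement import wsgi`, and `nova.api.openstack.placement.util` into `placement.util`, which is what lets the TDD loop in 1.4.12 start from mostly-importable code.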
To some extent we need to have some faith on the end result: the tests work. If people are not okay with that, we need the people who are not to determine and prove the alternate strategy. I've had this one work and work well. Please help to refine the above, thank you. > 2. We have a stack of changes to zuul jobs that show nova working but > deploying placement in devstack from the new repo instead of nova's repo. > This includes the grenade job, ensuring that upgrade works. If we can make a list for this (and the subsequent) major items that is as detailed as I've made for step 1 above, I think that will help us avoid some of the confusion and frustration that comes up. I'm neither able nor willing to be responsible for creating those lists for all these points, but very happy to help. > 3. When those pass, we merge them, effectively orphaning nova's copy of > placement. Switch those jobs to voting. > > 4. Finally, we delete the orphaned code from nova (without needing to make > any changes to non-placement-only test code -- code is truly orphaned). In case you missed it, one of the things I did earlier in the discussion was make it so that the wsgi script for placement defined in nova's setup.cfg [1] could: * continue to exist * with the same name * using the nova.conf file * running the extracted placement code That was easy to do because of the work over the last year or so that has been hardening the boundary between placement and nova, in place. I've been assuming that maintaining the option to use original conf file is a helpful trick for people. Is that the case? Thanks. 
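The wsgi-script trick described above (same script name, same nova.conf, running the extracted code) can be pictured as a tiny loader that prefers the extracted package and falls back to the in-tree copy. The module paths and the fallback behaviour here are assumptions for illustration, not the actual nova/placement change:

```python
# Hypothetical compatibility shim: the script installed by nova's setup.cfg
# keeps its name and its nova.conf, but delegates to whichever copy of the
# placement WSGI code is importable. Module paths are assumptions.
import importlib

CANDIDATES = (
    "placement.wsgi",                     # extracted openstack/placement repo
    "nova.api.openstack.placement.wsgi",  # in-tree fallback
)

def find_init_application():
    """Return the first available init_application callable, or None."""
    for name in CANDIDATES:
        try:
            module = importlib.import_module(name)
        except ImportError:
            continue
        return getattr(module, "init_application", None)
    return None
```

The point of the pattern is that deployments keep one stable entry point across the extraction, which matters for the rocky => stein upgrade path discussed earlier in the thread.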
[1] https://review.openstack.org/#/c/596291/3/nova/api/openstack/placement/wsgi.py -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From eng.szaher at gmail.com Tue Aug 28 11:48:09 2018 From: eng.szaher at gmail.com (Saad Zaher) Date: Tue, 28 Aug 2018 12:48:09 +0100 Subject: [openstack-dev] [Freezer] Reactivate the team In-Reply-To: References: <201808271025487809975@zte.com.cn> Message-ID: Hello Kendall, Can we get the old meeting slot which is Thursday @ 14:00 UTC if this is Ok with everyone ? Thanks, Saad! On Mon, Aug 27, 2018 at 11:46 PM Kendall Nelson wrote: > Hello, > > Here is the change that adds Freezer to StoryBoard[1]. If we can get the > PTL's +1, we can move forward with the migration. Does Friday work for you > all? > > -Kendall (diablo_rojo) > > [1] https://review.openstack.org/#/c/596918/ > > On Sun, Aug 26, 2018 at 7:59 PM Trinh Nguyen > wrote: > >> @Kendall: please help the Freezer team. Thanks. >> >> @gengchc2: I think you should send an email to TC and ask for help. The >> Freezer core seems to inactive. >> >> >> *Trinh Nguyen *| Founder & Chief Architect >> >> >> >> *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * >> >> >> >> On Mon, Aug 27, 2018 at 11:26 AM wrote: >> >>> Hi,Kendall: >>> >>> I agree to migrate freezer project from Launchpad to Storyboard, Thanks. >>> >>> By the way, When will grant privileges for gengchc2 on Launchpad and >>> Project Gerrit repositories? 
>>> >>> >>> >>> Best regards, >>> >>> gengchc2 >>> >>> >>> >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- -------------------------- Best Regards, Saad! -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Tue Aug 28 12:46:26 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 28 Aug 2018 14:46:26 +0200 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: Message-ID: <1535460386.23583.0@smtp.office365.com> On Mon, Aug 27, 2018 at 5:31 PM, Matt Riedemann wrote: > On 8/24/2018 7:36 AM, Chris Dent wrote: >> >> Over the past few days a few of us have been experimenting with >> extracting placement to its own repo, as has been discussed at >> length on this list, and in some etherpads: >> >> https://etherpad.openstack.org/p/placement-extract-stein >> https://etherpad.openstack.org/p/placement-extraction-file-notes >> >> As part of that, I've been doing some exploration to tease out the >> issues we're going to hit as we do it. None of this is work that >> will be merged, rather it is stuff to figure out what we need to >> know to do the eventual merging correctly and efficiently. >> >> Please note that doing that is just the near edge of a large >> collection of changes that will cascade in many ways to many >> projects, tools, distros, etc. 
The people doing this are aware of >> that, and the relative simplicity (and fairly immediate success) of >> these experiments is not misleading people into thinking "hey, no >> big deal". It's a big deal. >> >> There's a strategy now (described at the end of the first etherpad >> listed above) for trimming the nova history to create a thing which >> is placement. From the first run of that Ed created a github repo >> and I branched that to eventually create: >> >> https://github.com/EdLeafe/placement/pull/2 >> >> In that, all the placement unit and functional tests are now >> passing, and my placecat [1] integration suite also passes. >> >> That work has highlighted some gaps in the process for trimming >> history which will be refined to create another interim repo. We'll >> repeat this until the process is smooth, eventually resulting in an >> openstack/placement. > > We talked about the github strategy a bit in the placement meeting > today [1]. Without being involved in this technical extraction work > for the past few weeks, I came in with a different perspective on the > end-game, and it was not aligned with what Chris/Ed thought as far as > how we get to the official openstack/placement repo. > > At a high level, Ed's repo [2] is a fork of nova with large changes > on top using pull requests to do things like remove the non-placement > nova files, update import paths (because the import structure changes > from nova.api.openstack.placement to just placement), and then > changes from Chris [3] to get tests working. Then the idea was to > just use that to seed the openstack/placement repo and rather than > review the changes along the way*, people who care about what > changed (like myself) would see the tests passing and be happy enough. > > However, I disagree with this approach since it bypasses our > community code review system of using Gerrit and relying on a core > team to approve changes for the sake of expediency.
> > What I would like to see are the changes that go into making the seed > repo and what gets it to passing tests done in gerrit like we do for > everything else. There are a couple of options on how this is done > though: > > 1. Seed the openstack/placement repo with the filter_git_history.sh > script output as Ed has done here [4]. This would include moving the > placement files to the root of the tree and dropping nova-specific > files. Then make incremental changes in gerrit like with [5] and the > individual changes which make up Chris's big pull request [3]. I am > primarily interested in making sure there are not content changes > happening, only mechanical tree-restructuring type changes, stuff > like that. I'm asking for more changes in gerrit so they can be > sanely reviewed (per normal). > > 2. Eric took a slightly different tack in that he's OK with just a > couple of large changes (or even large patch sets within a single > change) in gerrit rather than ~30 individual changes. So that would > be more like at most 3 changes in gerrit for [4][5][3]. > > 3. The 3rd option is we just don't use gerrit at all and seed the > official repo with the results of Chris and Ed's work in Ed's repo in > github. Clearly this would be the fastest way to get us to a new repo > (at the expense of bucking community code review and development > process - is an exception worth it?). > I assumed that the work on github was done to _discover_ what steps needs to be done later to populate the new repo and make the tests pass. So I more like the #1 approach. > Option 1 would clearly be a drain on at least 2 nova cores to go > through the changes. I think Eric is on board for reviewing options 1 > or 2 in either case, but he prefers option 2. Since I'm throwing a > wrench in the works, I also need to stand up and review the changes > if we go with option 1 or 2. Jay said he'd review them but consider > these reviews lower priority. 
I expect we could get some help from > some other nova cores though, maybe not on all changes, but at least > some (thinking gibi, alex_xu, sfinucan). I will spend time reviewing the patches coming for the new placement repo. Cheers, gibi > > Any CI jobs would be non-voting while going through options 1 or 2 > until we get to a point that tests should finally be passing and we > can make them voting (it should be possible to control this within > the repo itself using zuul v3). > > I would like to know from others (nova core or otherwise) what they > would prefer, and if you are a nova core that wants option 1 (or 2) > are you willing to help review those incremental changes knowing it > will be a drain - but also realizing that we can't really let option > 1 drag on while we're doing stein feature development, so ideally > this would be done before the PTG. > > * Yes I realize I could be reviewing the github pull requests along > the way, but that's not really how we do code review in openstack. 
> > [1] > http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-08-27-14.00.log.html#l-74 > [2] https://github.com/EdLeafe/placement > [3] https://github.com/EdLeafe/placement/pull/3 > [4] > https://github.com/EdLeafe/placement/commit/e3173faf59bd1453c3800b2bf57c2af8cfde1697 > [5] > https://github.com/EdLeafe/placement/commit/e984bef8587009378ea430dd1c12ca3e40a3c901 > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaosorior at redhat.com Tue Aug 28 12:50:07 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Tue, 28 Aug 2018 15:50:07 +0300 Subject: [openstack-dev] [tripleo] PTG topics and agenda Message-ID: <0c407d93-8809-8c1c-4d1b-11a9e797cb90@redhat.com> Hello folks! With the PTG being quite soon, I just wanted to remind folks to add your topics on the etherpad: https://etherpad.openstack.org/p/tripleo-ptg-stein Also, please vote for the topics you're the most interested in, so we can add them to the agenda. I'll submit a potential agenda by the end of the week. Best Regards From jaypipes at gmail.com Tue Aug 28 12:54:51 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 28 Aug 2018 08:54:51 -0400 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: References: Message-ID: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> On 08/28/2018 04:17 AM, Naichuan Sun wrote: > Hi, experts, > > XenServer CI failed frequently with an error "No valid host was found. " > for more than a week. I think it is cause by placement update. Hi Naichuan, Can you give us a link to the logs a patchset's Citrix XenServer CI that has failed? 
Also, a timestamp for the failure you refer to would be useful so we can correlate across service logs. Thanks, -jay From dangtrinhnt at gmail.com Tue Aug 28 13:12:30 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 28 Aug 2018 22:12:30 +0900 Subject: [openstack-dev] [Freezer] Reactivate the team In-Reply-To: References: <201808271025487809975@zte.com.cn> Message-ID: Hi Saad, That is the time to migrate Freezer to Storyboard, not the meeting time. :) Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Tue, Aug 28, 2018 at 8:48 PM Saad Zaher wrote: > Hello Kendall, > > Can we get the old meeting slot which is Thursday @ 14:00 UTC if this is > Ok with everyone ? > > Thanks, > Saad! > > On Mon, Aug 27, 2018 at 11:46 PM Kendall Nelson > wrote: > >> Hello, >> >> Here is the change that adds Freezer to StoryBoard[1]. If we can get the >> PTL's +1, we can move forward with the migration. Does Friday work for you >> all? >> >> -Kendall (diablo_rojo) >> >> [1] https://review.openstack.org/#/c/596918/ >> >> On Sun, Aug 26, 2018 at 7:59 PM Trinh Nguyen >> wrote: >> >>> @Kendall: please help the Freezer team. Thanks. >>> >>> @gengchc2: I think you should send an email to TC and ask for help. The >>> Freezer core seems to inactive. >>> >>> >>> *Trinh Nguyen *| Founder & Chief Architect >>> >>> >>> >>> *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz >>> * >>> >>> >>> >>> On Mon, Aug 27, 2018 at 11:26 AM wrote: >>> >>>> Hi,Kendall: >>>> >>>> I agree to migrate freezer project from Launchpad to Storyboard, Thanks. >>>> >>>> By the way, When will grant privileges for gengchc2 on Launchpad and >>>> Project Gerrit repositories? 
>>>> >>>> >>>> >>>> Best regards, >>>> >>>> gengchc2 >>>> >>>> >>>> >>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > -------------------------- > Best Regards, > Saad! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Tue Aug 28 13:11:56 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 28 Aug 2018 15:11:56 +0200 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> Message-ID: <1535461916.23583.1@smtp.office365.com> On Tue, Aug 28, 2018 at 1:20 PM, Chris Dent wrote: > On Mon, 27 Aug 2018, melanie witt wrote: > >> I think we should use the openstack review system (gerrit) for >> moving the code. We're moving a critical piece of nova to its own >> repo and I think it's worth having the review and history contained >> in the openstack review system. > > This seems a reasonable enough strategy, in broad strokes. I want to > be sure that we're all actually in agreement on the details, as > we've had a few false starts and I think some of the details are > getting confused in the shuffle and the general busy-ness in progress. > > Is anyone aware of anyone who hasn't commented yet that should? If > you are, please poke them so we don't surprise them. 
> >> Using smaller changes that make it easy to see import vs content >> changes might make review faster than fewer, larger changes. > > I _think_ we ought to be able to use the existing commits from the > run-throughs-to-passing-tests already done, but if we use the > strategy described below it doesn't really matter: the TDD approach > (after fixing paths and test config) is pretty fast. > >> The most important bit of all of this is making sure we don't break >> anything in the process for operators and users consuming nova and >> placement, and ensure the upgrade path from rocky => stein is >> tested in grenade. > > This is one of the areas where pretty active support from all of > nova will be required: getting zuul, upgrade paths, and the like > clearly defined and executed. > >> The steps I think we should take are: >> >> 1. We copy the placement code into the openstack/placement repo and >> have it passing all of its own unit and functional tests. > > To break that down to more detail, how does this look? > (note the ALL CAPS where more than acknowledgement is requested) > > 1.1 Run the git filter-branch on a copy of nova > 1.1.1 Add missing files to the file list: > 1.1.1.1 .gitignore > 1.1.1.2 # ANYTHING ELSE? > 1.2 Push -f that thing, acknowledged as broken, to a seed repo on > github > (Ed's repo should be fine) > 1.3 Do the repo creation bits described in > https://docs.openstack.org/infra/manual/creators.html > to seed openstack/placement > 1.3.1 set zuul jobs. Either to noop-jobs, or non-voting basic > func and unit # INPUT DESIRED HERE I suggest adding a non-voting unit and functional job, iterating on the repo to make them green, and then turning them voting. I also think that we can add a non-voting tempest full job as well. Making it green depends on how hard it is to deploy placement from the new repo in tempest.
I think as soon as the placement repo has passing gabbits (e.g. the functional job) and we can deploy placement in tempest then tempest will be green soon. > 1.4 Once the repo exists with some content, incrementally bring it to > working > 1.4.1 Update tox.ini to be placement oriented > 1.4.2 Update setup.cfg to be placement oriented > 1.4.3 Correct .stestr.conf > 1.4.4 Move base of placement to "right" place > 1.4.5 Move unit and functionals to right place > 1.4.6 Do automated path fixings > 1.4.7 Set up translation domain and i18n.py correctly > 1.4.8 Trim placement/conf to just the conf settings required > (api, base, database, keystone, paths, placement) > 1.4.9 Remove database files that are not relevant (the db api is > not used by placement) > 1.4.10 Fix the Database Fixture to be just one database > 1.4.11 Disable migrations that can't work (because of > dependencies on nova code, 014 and 030 are examples) > # INPUT DESIRED HERE AND ON SCHEMA MIGRATIONS IN GENERAL > 1.4.12 Incrementally get tests working > 1.4.13 Fix pep8 > 1.5 Make zuul pep, unit and functional voting > 1.6 Create tools for db table sync/create > 1.7 Concurrently go to step 2, where the harder magic happens. > 1.8 Find and remove dead code (there will be some). > 1.9 Tune up and confirm docs > 1.10 Grep for remaining "nova" (as string and spirit) and fix > > > Item 1.4.12 may deserve some discussion. The several times I've done > this before, the strategy I've used is to be test driven: > run either functional or unit tests, find and fix one of the errors > revealed, commit, move on. > > This strategy has worked very well for me because of the "test > driven" part, but I'm hesitant to do it if reviewers are going to > get to a patch and say "why didn't you also change X?" The answer to > that question is "because this is incremental and test driven and > the tests didn't demand that change (yet)". Sometimes that will mean > that things of the same class of change are in different commits.
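Sketched in code, the step 1.1 history trimming amounts to something like the following. This is a hypothetical sketch, not Ed's filter_git_history.sh: the kept paths shown are an illustrative subset of the real curated file list, and the grep-based index-filter is an assumption about one way to express "drop everything else".

```python
import shlex

# Hypothetical subset of the kept-path list -- the real extraction used a
# longer, curated file list (plus additions such as .gitignore from 1.1.1).
KEEP_PATHS = [
    "nova/api/openstack/placement",
    "nova/tests/unit/api/openstack/placement",
    "nova/tests/functional/api/openstack/placement",
    ".gitignore",
]


def filter_branch_command(keep_paths):
    """Build a `git filter-branch` invocation that rewrites every commit,
    removing files outside keep_paths from the index and pruning commits
    that become empty, so only placement-touching history survives."""
    pattern = "|".join(keep_paths)
    # For each rewritten commit, drop from the index anything that does
    # not live under one of the kept paths.
    index_filter = (
        "git ls-files | grep -Ev '^(%s)' "
        "| xargs -r git rm -q --cached --ignore-unmatch" % pattern
    )
    return ["git", "filter-branch", "--prune-empty",
            "--index-filter", index_filter, "--", "--all"]


cmd = filter_branch_command(KEEP_PATHS)
print(" ".join(shlex.quote(part) for part in cmd))
# Run only against a throwaway copy of the nova repo, e.g.:
# subprocess.run(cmd, cwd="/tmp/nova-copy", check=True)
```

The --prune-empty flag is what makes the result look like "placement's own history" rather than nova's history with gaps.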
> > Are people okay with that and willing to commit to being okay with > that answer in reviews? To some extent we need to have some faith on > the end result: the tests work. If people are not okay with that, we > need the people who are not to determine and prove the alternate > strategy. I've had this one work and work well. I like this test driven approach. If I start to leave comments like "why didn't you also change X?" in these patches then please point me to this mail and I will correct my behavior. :) I think for me the trust towards the end result of these changes will come from the fact that the number of passing test cases are increases at every step. > > Please help to refine the above, thank you. > >> 2. We have a stack of changes to zuul jobs that show nova working >> but deploying placement in devstack from the new repo instead of >> nova's repo. This includes the grenade job, ensuring that upgrade >> works. > > If we can make a list for this (and the subsequent) major items that > is as detailed as I've made for step 1 above, I think that will help > us avoid some of the confusion and frustration that comes up. I'm > neither able nor willing to be responsible for creating those lists > for all these points, but very happy to help. > Let's collaborate on that list making. I added a list of jobs I foresee to the etherpad https://etherpad.openstack.org/p/placement-extract-stein-copy Cheers, gibi >> 3. When those pass, we merge them, effectively orphaning nova's copy >> of placement. Switch those jobs to voting. >> >> 4. Finally, we delete the orphaned code from nova (without needing >> to make any changes to non-placement-only test code -- code is >> truly orphaned). 
> > In case you missed it, one of the things I did earlier in the > discussion was make it so that the wsgi script for placement defined > in nova's setup.cfg [1] could: > > * continue to exist > * with the same name > * using the nova.conf file > * running the extracted placement code > > That was easy to do because of the work over the last year or so > that has been hardening the boundary between placement and nova, in > place. I've been assuming that maintaining the option to use > original conf file is a helpful trick for people. Is that the case? > > Thanks. > > [1] > https://review.openstack.org/#/c/596291/3/nova/api/openstack/placement/wsgi.py > -- > Chris Dent ٩◔̯◔۶ > https://anticdent.org/ > freenode: cdent tw: @anticdent > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at fried.cc Tue Aug 28 13:21:36 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 28 Aug 2018 08:21:36 -0500 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> Message-ID: Naichuan- Are you running with [1]? If you are, the placement logs (at debug level) should be giving you some useful info. If you're not... perhaps you could pull that in :) Note that it refactors the _get_provider_ids_matching method completely, so it's possible your problem will magically go away when you do. [1] https://review.openstack.org/#/c/590041/ On 08/28/2018 07:54 AM, Jay Pipes wrote: > On 08/28/2018 04:17 AM, Naichuan Sun wrote: >> Hi, experts, >> >> XenServer CI failed frequently with an error "No valid host was found. >> " for more than a week. 
I think it is cause by placement update. > > Hi Naichuan, > > Can you give us a link to the logs a patchset's Citrix XenServer CI that > has failed? Also, a timestamp for the failure you refer to would be > useful so we can correlate across service logs. > > Thanks, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From michel at redhat.com Tue Aug 28 13:30:02 2018 From: michel at redhat.com (Michel Peterson) Date: Tue, 28 Aug 2018 16:30:02 +0300 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: <1535398507-sup-4428@lrrr.local> References: <1535398507-sup-4428@lrrr.local> Message-ID: On Mon, Aug 27, 2018 at 10:37 PM, Doug Hellmann wrote: > > If your team is ready to have your zuul settings migrated, please > let us know by following up to this email. We will start with the > volunteers, and then work our way through the other teams. > The networking-odl team is willing to volunteer for this. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Tue Aug 28 13:49:45 2018 From: dms at danplanet.com (Dan Smith) Date: Tue, 28 Aug 2018 06:49:45 -0700 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: <4be5a401eba05408fb68ca08985514161382f318.camel@redhat.com> (Stephen Finucane's message of "Tue, 28 Aug 2018 09:45:36 +0100") References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <4be5a401eba05408fb68ca08985514161382f318.camel@redhat.com> Message-ID: >> 2. We have a stack of changes to zuul jobs that show nova working but >> deploying placement in devstack from the new repo instead of nova's >> repo. This includes the grenade job, ensuring that upgrade works. 
> > I'm guessing there would need to be changes to Devstack itself, outside > of the zuul jobs? I think we'll need changes to devstack itself, as well as grenade, as well as zuul jobs I'd assume. Otherwise, this sequence of steps is what I've been anticipating. --Dan From bob.ball at citrix.com Tue Aug 28 14:01:46 2018 From: bob.ball at citrix.com (Bob Ball) Date: Tue, 28 Aug 2018 14:01:46 +0000 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> Message-ID: <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> We're not running with [1], however that did also fail the CI in the same way - see [2] for the full logs. The first failing appeared to be around Aug 27 08:32:14: Aug 27 08:32:14.502788 dsvm-devstack-citrix-lon-nodepool-1379254 devstack at placement-api.service[13219]: DEBUG nova.api.openstack.placement.requestlog [req-94714f18-87f3-4ff5-9b17-f6e50131b3a9 req-fc47376d-cf04-4cd3-b69c-31ef4d5739a4 service placement] Starting request: 192.168.33.1 "GET /placement/allocation_candidates?limit=1000&resources=MEMORY_MB%3A64%2CVCPU%3A1" {{(pid=13222) __call__ /opt/stack/new/nova/nova/api/openstack/placement/requestlog.py:38}} Aug 27 08:32:14.583676 dsvm-devstack-citrix-lon-nodepool-1379254 devstack at placement-api.service[13219]: DEBUG nova.api.openstack.placement.objects.resource_provider [req-94714f18-87f3-4ff5-9b17-f6e50131b3a9 req-fc47376d-cf04-4cd3-b69c-31ef4d5739a4 service placement] found 0 providers with available 1 VCPU {{(pid=13222) _get_provider_ids_matching /opt/stack/new/nova/nova/api/openstack/placement/objects/resource_provider.py:2928}} Just looking at Naichuan's output, I wonder if this is because allocation_ratio is registered as 0 in the inventory. 
Bob [2] http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/41/590041/17/check/dsvm-tempest-neutron-network/afadfe7/ -----Original Message----- From: Eric Fried [mailto:openstack at fried.cc] Sent: 28 August 2018 14:22 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update Naichuan- Are you running with [1]? If you are, the placement logs (at debug level) should be giving you some useful info. If you're not... perhaps you could pull that in :) Note that it refactors the _get_provider_ids_matching method completely, so it's possible your problem will magically go away when you do. [1] https://review.openstack.org/#/c/590041/ On 08/28/2018 07:54 AM, Jay Pipes wrote: > On 08/28/2018 04:17 AM, Naichuan Sun wrote: >> Hi, experts, >> >> XenServer CI failed frequently with an error "No valid host was found. >> " for more than a week. I think it is cause by placement update. > > Hi Naichuan, > > Can you give us a link to the logs a patchset's Citrix XenServer CI > that has failed? Also, a timestamp for the failure you refer to would > be useful so we can correlate across service logs. 
> > Thanks, > -jay > > ______________________________________________________________________ > ____ OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Tue Aug 28 14:07:18 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 28 Aug 2018 15:07:18 +0100 (BST) Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> Message-ID: On Tue, 28 Aug 2018, Bob Ball wrote: > Just looking at Naichuan's output, I wonder if this is because allocation_ratio is registered as 0 in the inventory. Yes. Whatever happened to cause that is the root; it will throw the math off into zeroness in lots of different places. The default (if you don't send an allocation_ratio) is 1.0, so maybe there's some code somewhere that is trying to use the default (by not sending) but is accidentally sending 0 instead? -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From balazs.gibizer at ericsson.com Tue Aug 28 14:24:24 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 28 Aug 2018 16:24:24 +0200 Subject: [openstack-dev] [nova]Notification subteam meeting cancelled Message-ID: <1535466264.23583.2@smtp.office365.com> Hi, There won't be a notification subteam meeting this week.
Cheers, gibi From mriedemos at gmail.com Tue Aug 28 14:27:37 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 28 Aug 2018 09:27:37 -0500 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <71965ead-4f30-1709-1df6-176195809a55@gmail.com> References: <20180823104210.kgctxfjiq47uru34@localhost> <20180823170756.sz5qj2lxdy4i4od2@localhost> <880e2ff0-cf3a-7d6d-a805-816464858aee@gmail.com> <20180827093210.rgrgcrkggfims53j@localhost> <71965ead-4f30-1709-1df6-176195809a55@gmail.com> Message-ID: On 8/27/2018 1:53 PM, Matt Riedemann wrote: > On 8/27/2018 12:11 PM, Miguel Lavalle wrote: >> Isn't multiple port binding what we need in the case of ports? In my >> mind, the big motivator for multiple port binding is the ability to >> change a port's backend > > Hmm, yes maybe. Nova's usage of multiple port bindings today is > restricted to live migration which isn't what we're supporting with the > initial cross-cell (cold) migration support, but it could be a > dependency if that's what we need. > > What I was wondering is if there is a concept like a port spanning or > migrating across networks? I'm assuming there isn't, and I'm not even > sure if that would be required here. But it would mean there is an > implicit requirement that for cross-cell migration to work, neutron > networks need to span cells (similarly storage backends would need to > span cells). In thinking about this again (sleepless at 3am of course), port bindings doesn't help us here if we're orchestrating the cross-cell move using shelve offload, because in that case the port is unbound from the source host - while the instance is shelved offloaded, it has no host. When we unshelve in the new cell, we'd update the port binding. So there isn't really a use in this flow for multiple port bindings on multiple hosts (assuming we stick with using the shelve/unshelve idea here). 
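Concretely, the rebind at unshelve time reduces to a single port update rather than a second binding. A rough sketch of the request involved (the helper name and host values are hypothetical; only the request shape follows Neutron's port API, where `binding:host_id` selects the binding host):

```python
def unshelve_rebind_request(port_id, dest_host):
    """The single port update (conceptually) issued when an unshelved
    instance lands on a host in the destination cell: no second binding
    is created, the existing binding is simply repointed at the new host."""
    return (
        "PUT",
        f"/v2.0/ports/{port_id}",
        {"port": {"binding:host_id": dest_host}},
    )


# Hypothetical example: the instance comes out of shelved-offloaded
# (no host at all) onto a compute host in the destination cell.
method, path, body = unshelve_rebind_request(
    "4a1e2b7c-hypothetical-port-id", "compute-7.cell2.example.org")
```

This is why the multiple-port-bindings machinery used for live migration has no role in the shelve/unshelve flow: there is never a moment with two candidate hosts for the same port.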
-- Thanks, Matt From balazs.gibizer at ericsson.com Tue Aug 28 14:31:10 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 28 Aug 2018 16:31:10 +0200 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface In-Reply-To: <1533807698.26377.7@smtp.office365.com> References: <1533807698.26377.7@smtp.office365.com> Message-ID: <1535466670.23583.3@smtp.office365.com> Thanks for all the responses. I collected them on the nova ptg discussion etherpad [1] (L186 at the moment). The nova team will talk about deprecation of the legacy interface on Friday at the PTG. If you want to participate in the discussion but you are not planning to sit in the nova room the whole day, then let me know and I will try to ping you over IRC when we are about to start the item. Cheers, gibi [1] https://etherpad.openstack.org/p/nova-ptg-stein On Thu, Aug 9, 2018 at 11:41 AM, Balázs Gibizer wrote: > Dear Nova notification consumers! > > > The Nova team made progress with the new versioned notification > interface [1] and it has almost reached feature parity [2] with the > legacy, unversioned one. So the Nova team will discuss at the upcoming > PTG the deprecation of the legacy interface. There is a list of > projects (we know of) consuming the legacy interface and we would > like to know if any of these projects plan to switch over to the new > interface in the foreseeable future so we can make a well informed > decision about the deprecation.
* Searchlight [3] - it is in maintenance mode so I guess the answer > is no > * Designate [4] > * Telemetry [5] > * Mistral [6] > * Blazar [7] > * Watcher [8] - it seems Watcher uses both legacy and versioned nova > notifications > * Masakari - I'm not sure whether Masakari depends on nova notifications or > not > > Cheers, > gibi > > [1] > https://docs.openstack.org/nova/latest/reference/notifications.html > [2] http://burndown.peermore.com/nova-notification/ > > [3] > https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py > [4] > https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py > [5] > https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2 > [6] > https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2 > [7] > https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst > [8] > https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335 > > From dangtrinhnt at gmail.com Tue Aug 28 14:57:05 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 28 Aug 2018 23:57:05 +0900 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface In-Reply-To: <1535466670.23583.3@smtp.office365.com> References: <1533807698.26377.7@smtp.office365.com> <1535466670.23583.3@smtp.office365.com> Message-ID: Hi gibi, Thanks for the information. The Searchlight team would love to migrate to the new versioned Nova notification interface. I apologize for the late response; I only took over the project after your first email was sent. We will discuss this at the next team meeting and figure out a plan.
Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Tue, Aug 28, 2018 at 11:31 PM Balázs Gibizer wrote: > Thanks for all the responses. I collected them on the nova ptg > discussion etherpad [1] (L186 at the moment). The nova team will talk > about deprecation of the legacy interface on Friday on the PTG. If you > want participate in the discussion but you are not planning to sit in > the nova room whole day then let me know and I will try to ping you > over IRC when we about to start the item. > > Cheers, > gibi > > [1] https://etherpad.openstack.org/p/nova-ptg-stein > > On Thu, Aug 9, 2018 at 11:41 AM, Balázs Gibizer > wrote: > > Dear Nova notification consumers! > > > > > > The Nova team made progress with the new versioned notification > > interface [1] and it is almost reached feature parity [2] with the > > legacy, unversioned one. So Nova team will discuss on the upcoming > > PTG the deprecation of the legacy interface. There is a list of > > projects (we know of) consuming the legacy interface and we would > > like to know if any of these projects plan to switch over to the new > > interface in the foreseeable future so we can make a well informed > > decision about the deprecation. 
> > > > > > * Searchlight [3] - it is in maintenance mode so I guess the answer > > is no > > * Designate [4] > > * Telemetry [5] > > * Mistral [6] > > * Blazar [7] > > * Watcher [8] - it seems Watcher uses both legacy and versioned nova > > notifications > > * Masakari - I'm not sure Masakari depends on nova notifications or > > not > > > > Cheers, > > gibi > > > > [1] > > https://docs.openstack.org/nova/latest/reference/notifications.html > > [2] http://burndown.peermore.com/nova-notification/ > > > > [3] > > > https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py > > [4] > > > https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py > > [5] > > > https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2 > > [6] > > > https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2 > > [7] > > > https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst > > [8] > > > https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335 > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Aug 28 14:57:23 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 28 Aug 2018 09:57:23 -0500 Subject: [openstack-dev] [oslo] No meeting next two weeks Message-ID: <16cc7348-0051-e2d2-1cc8-56f3d8ffe3bf@nemebean.com> Next week is a US holiday so a lot of the team will be off, and the week after is the PTG. 
If you have anything to discuss in the meantime feel free to contact us in #openstack-oslo or here on the list. -Ben From mriedemos at gmail.com Tue Aug 28 15:20:10 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 28 Aug 2018 10:20:10 -0500 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> Message-ID: <082d62f3-53d6-d75d-1b46-f7162047b9de@gmail.com> On 8/28/2018 9:07 AM, Chris Dent wrote: > On Tue, 28 Aug 2018, Bob Ball wrote: > >> Just looking at Naichuan's output, I wonder if this is because >> allocation_ratio is registered as 0 in the inventory. > > Yes. > > Whatever happened to cause that is the root, that will throw the > math off into zeroness in lots of different places. The default (if > you don't send an allocation_ratio) is 1.0, so maybe there's some > code somewhere that is trying to use the default (by not sending) > but is accidentally sending 0 instead? If cpu_allocation_ratio isn't in nova.conf, which it's not in this CI run, then it should default to 16.0 via the ComputeNode object code: https://github.com/openstack/nova/blob/6bf864df771edb8c0d0af8a868dde21e3d12481e/nova/objects/compute_node.py#L201 Which is used to set the allocation ratio on the VCPUs inventory here: https://github.com/openstack/nova/blob/6bf864df771edb8c0d0af8a868dde21e3d12481e/nova/compute/resource_tracker.py#L106 Nothing has changed here recently as far as I know.
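[Editorial note: the zero-ratio failure mode discussed above follows directly from placement's capacity arithmetic. The minimal Python sketch below only mirrors the formula, capacity = (total - reserved) * allocation_ratio; the real check is implemented in SQL in placement's resource_provider.py.]

```python
# Sketch of the per-inventory capacity check placement applies when
# computing allocation candidates (illustrative, not the real SQL):
#
#   capacity = (total - reserved) * allocation_ratio
#   a provider matches only if used + requested <= capacity

def provider_has_room(total, reserved, allocation_ratio, used, requested):
    capacity = (total - reserved) * allocation_ratio
    return used + requested <= capacity

# Healthy VCPU inventory: 8 cores with the default 16.0 ratio.
print(provider_has_room(8, 0, 16.0, used=4, requested=1))   # True

# Ratio accidentally registered as 0.0: capacity collapses to zero, so
# even an idle host reports "found 0 providers with available 1 VCPU".
print(provider_has_room(8, 0, 0.0, used=0, requested=1))    # False
```

With a ratio of 0.0, every inventory's capacity is zero regardless of how idle the host is, which is why the symptom shows up as "No valid host was found" rather than an explicit error.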
-- Thanks, Matt From mriedemos at gmail.com Tue Aug 28 15:37:15 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 28 Aug 2018 10:37:15 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: <1535461916.23583.1@smtp.office365.com> References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <1535461916.23583.1@smtp.office365.com> Message-ID: <8cffffb3-4fc4-2669-dc07-446ebed851cc@gmail.com> On 8/28/2018 8:11 AM, Balázs Gibizer wrote: > I also think that we can add a non-voting tempest full job as well. > Making it green depends on how hard it is to deploy placement from the new > repo to tempest. I think as soon as the placement repo has passing gabbits > (e.g. a functional job) and we can deploy placement in tempest then tempest > will be green soon. There is likely no point in this until devstack itself is installing and using placement from the new repo rather than from nova. Because otherwise this job will be using devstack which still installs placement from nova, and the job will pass but not actually test anything on the placement change in question. Even if it did run on the placement change from the placement repo, the job would be a known time-sink failure until we get to the end of the series. It's at the end of the series, where we declare that placement in the new repo is ready to go and passing all of its own unit/functional tests, that I think we add the tempest-full job with a dependency on a devstack change which flips which repo we install from.
-- Thanks, Matt From mriedemos at gmail.com Tue Aug 28 15:47:12 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 28 Aug 2018 10:47:12 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> Message-ID: <350f3b3c-fa38-c57c-049b-60104b4da788@gmail.com> On 8/28/2018 6:20 AM, Chris Dent wrote: > On Mon, 27 Aug 2018, melanie witt wrote: > >> I think we should use the openstack review system (gerrit) for moving >> the code. We're moving a critical piece of nova to its own repo and I >> think it's worth having the review and history contained in the >> openstack review system. > > This seems a reasonable enough strategy, in broad strokes. I want to > be sure that we're all actually in agreement on the details, as > we've had a few false starts and I think some of the details are > getting confused in the shuffle and the general busy-ness in progress. > > Is anyone aware of anyone who hasn't commented yet that should? If > you are, please poke them so we don't surprise them. > >> Using smaller changes that make it easy to see import vs content >> changes might make review faster than fewer, larger changes. > > I _think_ we ought to be able to use the existing commits from the > runs-throughs-to-passing-tests already done, but if we use the > strategy described below it doesn't really matter: the TDD approach > (after fixing paths and test config) is pretty fast. > >> The most important bit of all of this is making sure we don't break >> anything in the process for operators and users consuming nova and >> placement, and ensure the upgrade path from rocky => stein is tested >> in grenade. > > This is one of the areas where pretty active support from all of > nova will be required: getting zuul, upgrade paths, and the like > clearly defined and executed. > >> The steps I think we should take are: >> >> 1. 
We copy the placement code into the openstack/placement repo and >> have it passing all of its own unit and functional tests. > > To break that down to more detail, how does this look? > (note the ALL CAPS where more than acknowledgement is requested) > > 1.1 Run the git filter-branch on a copy of nova >     1.1.1 Add missing files to the file list: >           1.1.1.1 .gitignore >           1.1.1.2 # ANYTHING ELSE? Unless I were to actually run the git filter-branch tooling to see what is excluded from the new repo, I can't really say what is missing at this time. I assume it would be clear during review - which is why I'm asking that we do this stuff in gerrit where we do reviews. > 1.2 Push -f that thing, acknowledge to be broken, to a seed repo on github >     (ed's repo should be fine) > 1.3 Do the repo creation bits described in >     https://docs.openstack.org/infra/manual/creators.html >     to seed openstack/placement >     1.3.1 set zuul jobs. Either to noop-jobs, or non voting basic >     func and unit # INPUT DESIRED HERE Agree. As I said to gibi elsewhere in this thread, I would hold off on adding a tempest-full job to the repo until we're at the end. 
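[Editorial note: step 1.1 can be rehearsed on a throwaway repo with the common "keep only these paths" filter-branch recipe. The paths and commits below are stand-ins, not the curated nova file list the real extraction uses.]

```shell
# Hedged illustration of the step-1.1 history extraction on a toy repo:
# keep only the placement subtree (plus .gitignore) in every commit and
# prune commits that end up empty.
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email seed@example.com
git config user.name seed

mkdir -p nova/api/openstack/placement
echo handler > nova/api/openstack/placement/handler.py
echo compute > nova/compute.py
echo '*.pyc' > .gitignore
git add -A && git commit -qm 'seed tree'
echo tweak >> nova/compute.py
git add -A && git commit -qm 'compute-only change'

# Rebuild each commit's index from only the listed paths; --prune-empty
# then drops commits (like the compute-only one) that become empty.
git filter-branch -f --prune-empty --index-filter '
    git read-tree --empty
    git reset -q $GIT_COMMIT -- nova/api/openstack/placement .gitignore
' -- --all

git reset -q --hard
git rev-list --count HEAD   # the compute-only commit is pruned
git ls-files                # only the placement file and .gitignore remain
```

The rewritten history is then what gets force-pushed to the seed repo in step 1.2.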
> 1.4 Once the repo exists with some content, incrementally bring it to >     working >     1.4.1 Update tox.ini to be placement oriented >     1.4.2 Update setup.cfg to be placement oriented >     1.4.3 Correct .stestr.conf >     1.4.4 Move base of placement to "right" place >     1.4.5 Move unit and functionals to right place >     1.4.6 Do automated path fixings >     1.4.7 Set up translation domain and i18n.py correctly >     1.4.8 Trim placement/conf to just the conf settings required >           (api, base, database, keystone, paths, placement) >     1.4.9 Remove database files that are not relevant (the db api is >           not used by placement) >     1.4.10 Fix the Database Fixture to be just one database >     1.4.11 Disable migrations that can't work (because of >            dependencies on nova code, 014 and 030 are examples) >            # INPUT DESIRED HERE AND ON SCHEMA MIGRATIONS IN GENERAL >     1.4.12 Incrementally get tests working >     1.4.13 Fix pep8 > 1.5 Make zuul pep, unit and functional voting > 1.6 Create tools for db table sync/create > 1.7 Concurrently go to step 2, where the harder magic happens. > 1.8 Find and remove dead code (there will be some). > 1.9 Tune up and confirm docs > 1.10 Grep for remaining "nova" (as string and spirit) and fix > > > Item 1.4.12 may deserve some discussion. When I've done this the > several times before, the strategy I've used is to be test driven: > run either functional or unit tests, find and fix one of the errors > revealed, commit, move on. > > This strategy has worked very well for me because of the "test > driven" part, but I'm hesitant to do it if reviewers are going to > get to a patch and say "why didn't you also change X?" The answer to > that question is "because this is incremental and test driven and > the tests didn't demand that change (yet)". Sometimes that will mean > that things of the same class of change are in different commits.
> > Are people okay with that and willing to commit to being okay with > that answer in reviews? To some extent we need to have some faith on > the end result: the tests work. If people are not okay with that, we > need the people who are not to determine and prove the alternate > strategy. I've had this one work and work well. Seems reasonable to me. But to be clear, if there are 70 failed tests, are you going to have 70 separate patches? Or is this just one of those things where you start with 70, fix something, get down to 50 failed tests, and iterate until you're down to all passing? If so, I'm OK with that. It's hard to say without knowing how many patches get from 70 failures to 0 and what the size/complexity of those changes is, but without knowing I'd default to the incremental approach for ease of review. > > Please help to refine the above, thank you. > >> 2. We have a stack of changes to zuul jobs that show nova working but >> deploying placement in devstack from the new repo instead of nova's >> repo. This includes the grenade job, ensuring that upgrade works. > > If we can make a list for this (and the subsequent) major items that > is as detailed as I've made for step 1 above, I think that will help > us avoid some of the confusion and frustration that comes up. I'm > neither able nor willing to be responsible for creating those lists > for all these points, but very happy to help. Grenade uses devstack, so once we have devstack on master installing (and configuring) placement from the new repo, and we disable installing and configuring it from the nova repo, that's the majority of the change, I'd think. Grenade will likely need a from-rocky script to move any config that is necessary, but as you already noted below, if the new repo can live with an existing nova.conf, then we might not need to do anything in grenade since placement from the new repo (in stein) could then run with nova.conf created for placement from the nova repo (in rocky). > >> 3.
When those pass, we merge them, effectively orphaning nova's copy >> of placement. Switch those jobs to voting. >> >> 4. Finally, we delete the orphaned code from nova (without needing to >> make any changes to non-placement-only test code -- code is truly >> orphaned). > > In case you missed it, one of the things I did earlier in the > discussion was make it so that the wsgi script for placement defined > in nova's setup.cfg [1] could: > > * continue to exist > * with the same name > * using the nova.conf file > * running the extracted placement code > > That was easy to do because of the work over the last year or so > that has been hardening the boundary between placement and nova, in > place. I've been assuming that maintaining the option to use > original conf file is a helpful trick for people. Is that the case? Yes, as noted above, that might mean we don't need any changes in grenade, but I can't foretell what will be needed until we start to actually work on the devstack/grenade changes and see what fails and then iterate on fixing those issues - exactly what you're doing with fixing the in-tree unit/functional tests. > > Thanks. 
> > [1] > https://review.openstack.org/#/c/596291/3/nova/api/openstack/placement/wsgi.py > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanks, Matt From mnaser at vexxhost.com Tue Aug 28 15:56:12 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 28 Aug 2018 11:56:12 -0400 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: <350f3b3c-fa38-c57c-049b-60104b4da788@gmail.com> References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <350f3b3c-fa38-c57c-049b-60104b4da788@gmail.com> Message-ID: Forgive me for barging into this, but just with my deployer and PTL of a deployment project hat on.. As part of the split, wouldn't we *not* need to make a devstack change if this is done correctly because placement will become a nova dependency, which is pulled out of Git and when using Zuul, will test the specific commit in question. From devstack's POV, deploying the way things are shouldn't change (except for when we decide to deploy placement separately).. but I believe in the process, both should technically work? (and if devstack breaks, then maybe we'll be breaking downstream users?) Thanks, Mohammed On Tue, Aug 28, 2018 at 11:47 AM Matt Riedemann wrote: > > On 8/28/2018 6:20 AM, Chris Dent wrote: > > On Mon, 27 Aug 2018, melanie witt wrote: > > > >> I think we should use the openstack review system (gerrit) for moving > >> the code. We're moving a critical piece of nova to its own repo and I > >> think it's worth having the review and history contained in the > >> openstack review system. > > > > This seems a reasonable enough strategy, in broad strokes. 
I want to > > be sure that we're all actually in agreement on the details, as > > we've had a few false starts and I think some of the details are > > getting confused in the shuffle and the general busy-ness in progress. > > > > Is anyone aware of anyone who hasn't commented yet that should? If > > you are, please poke them so we don't surprise them. > > > >> Using smaller changes that make it easy to see import vs content > >> changes might make review faster than fewer, larger changes. > > > > I _think_ we ought to be able to use the existing commits from the > > runs-throughs-to-passing-tests already done, but if we use the > > strategy described below it doesn't really matter: the TDD approach > > (after fixing paths and test config) is pretty fast. > > > >> The most important bit of all of this is making sure we don't break > >> anything in the process for operators and users consuming nova and > >> placement, and ensure the upgrade path from rocky => stein is tested > >> in grenade. > > > > This is one of the areas where pretty active support from all of > > nova will be required: getting zuul, upgrade paths, and the like > > clearly defined and executed. > > > >> The steps I think we should take are: > >> > >> 1. We copy the placement code into the openstack/placement repo and > >> have it passing all of its own unit and functional tests. > > > > To break that down to more detail, how does this look? > > (note the ALL CAPS where more than acknowledgement is requested) > > > > 1.1 Run the git filter-branch on a copy of nova > > 1.1.1 Add missing files to the file list: > > 1.1.1.1 .gitignore > > 1.1.1.2 # ANYTHING ELSE? > > Unless I were to actually run the git filter-branch tooling to see what > is excluded from the new repo, I can't really say what is missing at > this time. I assume it would be clear during review - which is why I'm > asking that we do this stuff in gerrit where we do reviews. 
> > > 1.2 Push -f that thing, acknowledge to be broken, to a seed repo on github > > (ed's repo should be fine) > > 1.3 Do the repo creation bits described in > > https://docs.openstack.org/infra/manual/creators.html > > to seed openstack/placement > > 1.3.1 set zuul jobs. Either to noop-jobs, or non voting basic > > func and unit # INPUT DESIRED HERE > > Agree. As I said to gibi elsewhere in this thread, I would hold off on > adding a tempest-full job to the repo until we're at the end. > > > 1.4 Once the repo exists with some content, incrementally bring it to > > working > > 1.4.1 Update tox.ini to be placement oriented > > 1.4.2 Update setup.cfg to be placement oriented > > 1.4.3 Correct .stesr.conf > > 1.4.4 Move base of placement to "right" place > > 1.4.5 Move unit and functionals to right place > > 1.4.6 Do automated path fixings > > 1.4.7 Set up translation domain and i18n.py corectly > > 1.4.8 Trim placement/conf to just the conf settings required > > (api, base, database, keystone, paths, placement) > > 1.4.9 Remove database files that are not relevant (the db api is > > not used by placement) > > 1.4.10 Fix the Database Fixture to be just one database > > 1.4.11 Disable migrations that can't work (because of > > dependencies on nova code, 014 and 030 are examples) > > # INPUT DESIRED HERE AND ON SCHEMA MIGRATIONS IN GENERAL > > 1.4.12 Incrementally get tests working > > 1.4.13 Fix pep8 > > 1.5 Make zuul pep, unit and functional voting > > 1.6 Create tools for db table sync/create > > 1.7 Concurrently go to step 2, where the harder magic happens. > > 1.8 Find and remove dead code (there will be some). > > 1.9 Tune up and confirm docs > > 1.10 Grep for remaining "nova" (as string and spirit) and fix > > > > > > Item 1.4.12 may deserve some discussion. When I've done this the > > several times before, the strategy I've used is to be test driven: > > run either functional or unit tests, find and fix one of the errors > > revealed, commit, move on. 
> > > > This strategy has worked very well for me because of the "test > > driven" part, but I'm hesitant to do it if reviewers are going to > > get to a patch and say "why didn't you also change X?" The answer to > > that question is "because this is incremental and test driven and > > the tests didn't demand that change (yet)". Sometimes that will mean > > that things of the same class of change are in different commits. > > > > Are people okay with that and willing to commit to being okay with > > that answer in reviews? To some extent we need to have some faith on > > the end result: the tests work. If people are not okay with that, we > > need the people who are not to determine and prove the alternate > > strategy. I've had this one work and work well. > > Seems reasonable to me. But to be clear, if there are 70 failed tests, > are you going to have 70 separate patches? Or this is just one of those > things where you start with 70, fix something, get down to 50 failed > tests, and iterate until you're down to all passing. If so, I'm OK with > that. It's hard to say without knowing how many patches get from 70 > failures to 0 and what the size/complexity of those changes is, but > without knowing I'd default to the incremental approach for ease of review. > > > > > Please help to refine the above, thank you. > > > >> 2. We have a stack of changes to zuul jobs that show nova working but > >> deploying placement in devstack from the new repo instead of nova's > >> repo. This includes the grenade job, ensuring that upgrade works. > > > > If we can make a list for this (and the subsequent) major items that > > is as detailed as I've made for step 1 above, I think that will help > > us avoid some of the confusion and frustration that comes up. I'm > > neither able nor willing to be responsible for creating those lists > > for all these points, but very happy to help. 
> > Grenade uses devstack so once we have devstack on master installing (and > configuring) placement from the new repo and disable installing and > configuring it from the nova repo, that's the majority of the change I'd > think. > > Grenade will likely need a from-rocky script to move any config that is > necessary, but as you already noted below, if the new repo can live with > an existing nova.conf, then we might not need to do anything in grenade > since placement from the new repo (in stein) could then run with > nova.conf created for placement from the nova repo (in rocky). > > > > >> 3. When those pass, we merge them, effectively orphaning nova's copy > >> of placement. Switch those jobs to voting. > >> > >> 4. Finally, we delete the orphaned code from nova (without needing to > >> make any changes to non-placement-only test code -- code is truly > >> orphaned). > > > > In case you missed it, one of the things I did earlier in the > > discussion was make it so that the wsgi script for placement defined > > in nova's setup.cfg [1] could: > > > > * continue to exist > > * with the same name > > * using the nova.conf file > > * running the extracted placement code > > > > That was easy to do because of the work over the last year or so > > that has been hardening the boundary between placement and nova, in > > place. I've been assuming that maintaining the option to use > > original conf file is a helpful trick for people. Is that the case? > > Yes, as noted above, that might mean we don't need any changes in > grenade, but I can't foretell what will be needed until we start to > actually work on the devstack/grenade changes and see what fails and > then iterate on fixing those issues - exactly what you're doing with > fixing the in-tree unit/functional tests. > > > > > Thanks. 
> > > > [1] > > https://review.openstack.org/#/c/596291/3/nova/api/openstack/placement/wsgi.py > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dms at danplanet.com Tue Aug 28 15:57:14 2018 From: dms at danplanet.com (Dan Smith) Date: Tue, 28 Aug 2018 08:57:14 -0700 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: <350f3b3c-fa38-c57c-049b-60104b4da788@gmail.com> (Matt Riedemann's message of "Tue, 28 Aug 2018 10:47:12 -0500") References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <350f3b3c-fa38-c57c-049b-60104b4da788@gmail.com> Message-ID: > Grenade uses devstack so once we have devstack on master installing > (and configuring) placement from the new repo and disable installing > and configuring it from the nova repo, that's the majority of the > change I'd think. > > Grenade will likely need a from-rocky script to move any config that > is necessary, but as you already noted below, if the new repo can live > with an existing nova.conf, then we might not need to do anything in > grenade since placement from the new repo (in stein) could then run > with nova.conf created for placement from the nova repo (in rocky). The from-rocky will also need to extract data from the nova-api database for the placement tables and put it into the new placement database (as real operators will have to do). 
It'll need to do this after the split code has been installed and the schema has been sync'd. Without this, the pre-upgrade resources won't have allocations known by the split placement service. I do not think we should cheat by just pointing the split placement at nova's database. Also, ISTR you added some allocation/inventory checking to devstack via hook, maybe after the tempest job ran? We might want to add some stuff to grenade to verify the pre/post resource allocations before we start this move so we can make sure they're still good after we roll. I'll see if I can hack something up to that effect. --Dan From cdent+os at anticdent.org Tue Aug 28 16:05:15 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 28 Aug 2018 17:05:15 +0100 (BST) Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: <350f3b3c-fa38-c57c-049b-60104b4da788@gmail.com> References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <350f3b3c-fa38-c57c-049b-60104b4da788@gmail.com> Message-ID: On Tue, 28 Aug 2018, Matt Riedemann wrote: >> Are people okay with that and willing to commit to being okay with >> that answer in reviews? To some extent we need to have some faith on >> the end result: the tests work. If people are not okay with that, we >> need the people who are not to determine and prove the alternate >> strategy. I've had this one work and work well. > > Seems reasonable to me. But to be clear, if there are 70 failed tests, are > you going to have 70 separate patches? Or this is just one of those things > where you start with 70, fix something, get down to 50 failed tests, and > iterate until you're down to all passing. If so, I'm OK with that. It's hard > to say without knowing how many patches get from 70 failures to 0 and what > the size/complexity of those changes is, but without knowing I'd default to > the incremental approach for ease of review. It's lumpy. 
But at least at the beginning it will be something like: 0 passing, still 0 passing; still 0 passing; still 0 passing; 150 passing, 700 failing; 295 passing, X failing, etc. Because in the early stages, test discovery and listing doesn't work at all, for quite a few different reasons. Based on the discussion here, resolving those "different reasons" is something people want to see in different commits. One way to optimize this (if people preferred) would be to not use stestr as called by tox, with its built-in test discovery, but instead run testtools or subunit in a non-parallel, failfast mode where not all tests need to be discovered first. That would provide a more visible sense of "it's getting better" to someone who is running the tests locally using that alternate method, but would not do much for the jobs run by zuul, so probably not all that useful. Thanks for the other info on the devstack and grenade stuff. If I read you right, from your perspective it's a case of "we'll see" and "we'll figure it out", which sounds good to me.
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jaypipes at gmail.com Tue Aug 28 16:08:12 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 28 Aug 2018 12:08:12 -0400 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> Message-ID: <5661e4f6-c6e3-ed1a-b27d-ee1bc07b575c@gmail.com> Yeah, the nova.CONF cpu_allocation_ratio is being overridden to 0.0: Aug 27 07:43:02.179927 dsvm-devstack-citrix-lon-nodepool-1379254 nova-compute[21125]: DEBUG oslo_service.service [None req-4bb236c4-54c3-42b7-aa4e-e5c8b1ece0c7 None None] cpu_allocation_ratio = 0.0 {{(pid=21125) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:3019}} Best, -jay On 08/28/2018 10:01 AM, Bob Ball wrote: > We're not running with [1], however that did also fail the CI in the same way - see [2] for the full logs. 
> > The first failing appeared to be around Aug 27 08:32:14: > Aug 27 08:32:14.502788 dsvm-devstack-citrix-lon-nodepool-1379254 devstack at placement-api.service[13219]: DEBUG nova.api.openstack.placement.requestlog [req-94714f18-87f3-4ff5-9b17-f6e50131b3a9 req-fc47376d-cf04-4cd3-b69c-31ef4d5739a4 service placement] Starting request: 192.168.33.1 "GET /placement/allocation_candidates?limit=1000&resources=MEMORY_MB%3A64%2CVCPU%3A1" {{(pid=13222) __call__ /opt/stack/new/nova/nova/api/openstack/placement/requestlog.py:38}} > Aug 27 08:32:14.583676 dsvm-devstack-citrix-lon-nodepool-1379254 devstack at placement-api.service[13219]: DEBUG nova.api.openstack.placement.objects.resource_provider [req-94714f18-87f3-4ff5-9b17-f6e50131b3a9 req-fc47376d-cf04-4cd3-b69c-31ef4d5739a4 service placement] found 0 providers with available 1 VCPU {{(pid=13222) _get_provider_ids_matching /opt/stack/new/nova/nova/api/openstack/placement/objects/resource_provider.py:2928}} > > Just looking at Naichuan's output, I wonder if this is because allocation_ratio is registered as 0 in the inventory. > > Bob > > [2] http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/41/590041/17/check/dsvm-tempest-neutron-network/afadfe7/ > > -----Original Message----- > From: Eric Fried [mailto:openstack at fried.cc] > Sent: 28 August 2018 14:22 > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update > > Naichuan- > > Are you running with [1]? If you are, the placement logs (at debug > level) should be giving you some useful info. If you're not... perhaps you could pull that in :) Note that it refactors the _get_provider_ids_matching method completely, so it's possible your problem will magically go away when you do. 
> > [1] https://review.openstack.org/#/c/590041/ > > On 08/28/2018 07:54 AM, Jay Pipes wrote: >> On 08/28/2018 04:17 AM, Naichuan Sun wrote: >>> Hi, experts, >>> >>> XenServer CI failed frequently with an error "No valid host was found. >>> " for more than a week. I think it is cause by placement update. >> >> Hi Naichuan, >> >> Can you give us a link to the logs a patchset's Citrix XenServer CI >> that has failed? Also, a timestamp for the failure you refer to would >> be useful so we can correlate across service logs. >> >> Thanks, >> -jay >> >> ______________________________________________________________________ >> ____ OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From bob.ball at citrix.com Tue Aug 28 16:21:58 2018 From: bob.ball at citrix.com (Bob Ball) Date: Tue, 28 Aug 2018 16:21:58 +0000 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: <5661e4f6-c6e3-ed1a-b27d-ee1bc07b575c@gmail.com> References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> <5661e4f6-c6e3-ed1a-b27d-ee1bc07b575c@gmail.com> Message-ID: > Yeah, the nova.CONF cpu_allocation_ratio is being overridden to 0.0: The default there is 
0.0[1] - and the passing tempest-full from Zuul on https://review.openstack.org/#/c/590041/ has the same line when reading the config[2]: We'll have a dig to see if we can figure out why it's not defaulting to 16 in the ComputeNode. Thanks! Bob [1] https://git.openstack.org/cgit/openstack/nova/tree/nova/conf/compute.py#n386 [2] http://logs.openstack.org/41/590041/17/check/tempest-full/b3f9ddd/controller/logs/screen-n-cpu.txt.gz#_Aug_27_14_18_24_078058 From mriedemos at gmail.com Tue Aug 28 16:36:52 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 28 Aug 2018 11:36:52 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <350f3b3c-fa38-c57c-049b-60104b4da788@gmail.com> Message-ID: <7c384c4b-53a4-de49-50c3-4b355763930a@gmail.com> On 8/28/2018 10:57 AM, Dan Smith wrote: > The from-rocky will also need to extract data from the nova-api database > for the placement tables and put it into the new placement database (as > real operators will have to do). It'll need to do this after the split > code has been installed and the schema has been sync'd. Without this, > the pre-upgrade resources won't have allocations known by the split > placement service. I do not think we should cheat by just pointing the > split placement at nova's database. Yes excellent points. > > Also, ISTR you added some allocation/inventory checking to devstack via > hook, maybe after the tempest job ran? We might want to add some stuff > to grenade to verify the pre/post resource allocations before we start > this move so we can make sure they're still good after we roll. I'll see > if I can hack something up to that effect. 
It's in nova: https://github.com/openstack/nova/blob/8b4fcdfdc6c59e024e7639e0d2da6ccbea5c73d3/gate/post_test_hook.sh#L55 And only run in the nova-next job: https://github.com/openstack/nova/blob/8b4fcdfdc6c59e024e7639e0d2da6ccbea5c73d3/playbooks/legacy/nova-next/run.yaml#L62 Grenade already has its own "resources db" right? So we can shove things in there before we upgrade and then verify they are still there after the upgrade? The post-tempest check I added to nova is looking for orphaned allocations, meaning we successfully completed some operation, like resize for example, but failed to clean up after ourselves (we missed quite a bit of that in Pike). -- Thanks, Matt From dms at danplanet.com Tue Aug 28 16:44:03 2018 From: dms at danplanet.com (Dan Smith) Date: Tue, 28 Aug 2018 09:44:03 -0700 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: <7c384c4b-53a4-de49-50c3-4b355763930a@gmail.com> (Matt Riedemann's message of "Tue, 28 Aug 2018 11:36:52 -0500") References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <350f3b3c-fa38-c57c-049b-60104b4da788@gmail.com> <7c384c4b-53a4-de49-50c3-4b355763930a@gmail.com> Message-ID: > Grenade already has its own "resources db" right? So we can shove > things in there before we upgrade and then verify they are still there > after the upgrade? Yep, I'm working on something right now. We create an instance that survives the upgrade and validate it on the other side. I'll just do some basic inventory and allocation validation that we'll trip over if we somehow don't migrate that data from nova to placement.
--Dan From fungi at yuggoth.org Tue Aug 28 16:59:56 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 28 Aug 2018 16:59:56 +0000 Subject: [openstack-dev] [all][tc][openstack-helm] On the problem of OSF copyright headers Message-ID: <20180828165956.astfqwjshf5e2q5m@yuggoth.org> [Obligatory disclaimer: I am not a lawyer, this is not legal advice, and I am not representing the OpenStack Foundation in any legal capacity here.] TL;DR: You should not be putting "Copyright OpenStack Foundation" on content in Git repositories, or anywhere else for that matter (unless you know that you are actually an employee of the OSF or otherwise performing work-for-hire activities at the behest of the OSF). The OpenStack Individual Contributor License Agreement (ICLA) does not require copyright assignment. The foundation does not want, nor does it even generally accept, copyright assignment from developers. Your copyrightable contributions are your own, or by proxy are the copyright of your employer if you have created them as a part of any work-for-hire arrangement (unless you've negotiated with your employer to retain copyright of your work). This topic has been raised multiple times in the past. In the wake of a somewhat protracted thread on the legal-discuss at lists.openstack.org mailing list (it actually started out on the openstack-dev mailing list but was then redirected to a more appropriate venue) back in April, 2013, we attempted to record a summary in the wiki article we'd been maintaining regarding various frequently-asked legal questions: https://wiki.openstack.org/wiki/LegalIssuesFAQ#OpenStack_Foundation_Copyright_Headers In the intervening years, we've worked to make sure other important documentation moves out of the wiki and into more durable maintenance (mostly Git repositories under code review, rendered and published to a Web site). 
I propose that as this particular topic is germane to contributing to the development of OpenStack software, the OpenStack Technical Committee should publish a statement on the governance site similar in nature to or perhaps as an expansion of the https://governance.openstack.org/tc/reference/licensing.html page where we detail copyright licensing expectations for official OpenStack project team deliverables. As I look back through that wiki article, I see a few other sections which may also be appropriate to cover on the governance site. The reason I'm re-raising this age-old discussion is because change https://review.openstack.org/596619 came to my attention a few minutes ago, in which some 400+ files within the openstack/openstack-helm repository were updated to assign copyright to "OpenStack Foundation" based on this discussion from an openstack-helm IRC meeting back in March (which seems to have involved no legal representative of the OSF): http://eavesdrop.openstack.org/meetings/openstack_helm/2018/openstack_helm.2018-03-20-15.00.log.html#l-101 There are also a couple of similar changes under the same review topic for the openstack/openstack-helm-infra and openstack/openstack-helm-addons repositories, one of which I managed to -1 before it could be approved and merged. I don't recall any follow-up discussion on the legal-discuss at lists.openstack.org or even openstack-dev at lists.openstack.org mailing lists, which I would have expected for any change of this legal significance. The point of this message is of course not to berate anyone, but to raise the example which seems to indicate that as a community we've apparently not done a great job of communicating the legal aspects of contributing to OpenStack. If there are no objections, I'll push up a proposed addition to the openstack/governance repository addressing this semi-frequent misconception and follow up with a link to the review. 
I'm also going to post to the legal-discuss ML so as to make the subscribers there aware of this thread. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From me at not.mn Tue Aug 28 17:11:18 2018 From: me at not.mn (John Dickinson) Date: Tue, 28 Aug 2018 10:11:18 -0700 Subject: [openstack-dev] [all][tc][openstack-helm] On the problem of OSF copyright headers In-Reply-To: <20180828165956.astfqwjshf5e2q5m@yuggoth.org> References: <20180828165956.astfqwjshf5e2q5m@yuggoth.org> Message-ID: <7FE69900-B21F-4725-9EC2-463512137C23@not.mn> On 28 Aug 2018, at 9:59, Jeremy Stanley wrote: > [Obligatory disclaimer: I am not a lawyer, this is not legal advice, > and I am not representing the OpenStack Foundation in any legal > capacity here.] > > TL;DR: You should not be putting "Copyright OpenStack Foundation" on > content in Git repositories, or anywhere else for that matter > (unless you know that you are actually an employee of the OSF or > otherwise performing work-for-hire activities at the behest of the > OSF). The OpenStack Individual Contributor License Agreement (ICLA) > does not require copyright assignment. The foundation does not want, > nor does it even generally accept, copyright assignment from > developers. Your copyrightable contributions are your own, or by > proxy are the copyright of your employer if you have created them as > a part of any work-for-hire arrangement (unless you've negotiated > with your employer to retain copyright of your work). > > This topic has been raised multiple times in the past. 
In the wake > of a somewhat protracted thread on the > legal-discuss at lists.openstack.org mailing list (it actually started > out on the openstack-dev mailing list but was then redirected to a > more appropriate venue) back in April, 2013, we attempted to record > a summary in the wiki article we'd been maintaining regarding > various frequently-asked legal questions: > https://wiki.openstack.org/wiki/LegalIssuesFAQ#OpenStack_Foundation_Copyright_Headers > > In the intervening years, we've worked to make sure other important > documentation moves out of the wiki and into more durable > maintenance (mostly Git repositories under code review, rendered and > published to a Web site). I propose that as this particular topic is > germane to contributing to the development of OpenStack software, > the OpenStack Technical Committee should publish a statement on the > governance site similar in nature to or perhaps as an expansion of > the https://governance.openstack.org/tc/reference/licensing.html > page where we detail copyright licensing expectations for official > OpenStack project team deliverables. As I look back through that > wiki article, I see a few other sections which may also be > appropriate to cover on the governance site. 
> > The reason I'm re-raising this age-old discussion is because change > https://review.openstack.org/596619 came to my attention a few > minutes ago, in which some 400+ files within the > openstack/openstack-helm repository were updated to assign copyright > to "OpenStack Foundation" based on this discussion from an > openstack-helm IRC meeting back in March (which seems to have > involved no legal representative of the OSF): > http://eavesdrop.openstack.org/meetings/openstack_helm/2018/openstack_helm.2018-03-20-15.00.log.html#l-101 > > There are also a couple of similar changes under the same review > topic for the openstack/openstack-helm-infra and > openstack/openstack-helm-addons repositories, one of which I managed > to -1 before it could be approved and merged. I don't recall any > follow-up discussion on the legal-discuss at lists.openstack.org or > even openstack-dev at lists.openstack.org mailing lists, which I would > have expected for any change of this legal significance. > > The point of this message is of course not to berate anyone, but to > raise the example which seems to indicate that as a community we've > apparently not done a great job of communicating the legal aspects > of contributing to OpenStack. If there are no objections, I'll push > up a proposed addition to the openstack/governance repository > addressing this semi-frequent misconception and follow up with a > link to the review. I'm also going to post to the legal-discuss ML > so as to make the subscribers there aware of this thread. > -- > Jeremy Stanley It would be *really* helpful to have a simple rule or pattern for each file's header. Something like "Copyright (c) <year created>-present by contributors to this project". As you mentioned, this issue comes up about every two years, and having contributors police (via code review) the appropriate headers for every commit is not a sustainable pattern.
The only thing I'm sure about is that the existing copyright headers are not correct, but I have no idea what the correct headers are. --John > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Aug 28 17:29:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 28 Aug 2018 13:29:40 -0400 Subject: [openstack-dev] [all][tc][openstack-helm] On the problem of OSF copyright headers In-Reply-To: <20180828165956.astfqwjshf5e2q5m@yuggoth.org> References: <20180828165956.astfqwjshf5e2q5m@yuggoth.org> Message-ID: <1535476064-sup-9230@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-08-28 16:59:56 +0000: > [Obligatory disclaimer: I am not a lawyer, this is not legal advice, > and I am not representing the OpenStack Foundation in any legal > capacity here.] > > TL;DR: You should not be putting "Copyright OpenStack Foundation" on > content in Git repositories, or anywhere else for that matter > (unless you know that you are actually an employee of the OSF or > otherwise performing work-for-hire activities at the behest of the > OSF). The OpenStack Individual Contributor License Agreement (ICLA) > does not require copyright assignment. The foundation does not want, > nor does it even generally accept, copyright assignment from > developers. Your copyrightable contributions are your own, or by > proxy are the copyright of your employer if you have created them as > a part of any work-for-hire arrangement (unless you've negotiated > with your employer to retain copyright of your work). > > This topic has been raised multiple times in the past.
In the wake > of a somewhat protracted thread on the > legal-discuss at lists.openstack.org mailing list (it actually started > out on the openstack-dev mailing list but was then redirected to a > more appropriate venue) back in April, 2013, we attempted to record > a summary in the wiki article we'd been maintaining regarding > various frequently-asked legal questions: > https://wiki.openstack.org/wiki/LegalIssuesFAQ#OpenStack_Foundation_Copyright_Headers > > In the intervening years, we've worked to make sure other important > documentation moves out of the wiki and into more durable > maintenance (mostly Git repositories under code review, rendered and > published to a Web site). I propose that as this particular topic is > germane to contributing to the development of OpenStack software, > the OpenStack Technical Committee should publish a statement on the > governance site similar in nature to or perhaps as an expansion of > the https://governance.openstack.org/tc/reference/licensing.html > page where we detail copyright licensing expectations for official > OpenStack project team deliverables. As I look back through that > wiki article, I see a few other sections which may also be > appropriate to cover on the governance site. > > The reason I'm re-raising this age-old discussion is because change > https://review.openstack.org/596619 came to my attention a few > minutes ago, in which some 400+ files within the > openstack/openstack-helm repository were updated to assign copyright > to "OpenStack Foundation" based on this discussion from an > openstack-helm IRC meeting back in March (which seems to have > involved no legal representative of the OSF): > http://eavesdrop.openstack.org/meetings/openstack_helm/2018/openstack_helm.2018-03-20-15.00.log.html#l-101 It's also not OK to simply change the copyright assignment for content written by someone else without their approval. 
That's why we tend not to go back and update existing copyright assignments in the source files anywhere; it's usually too hard to ensure we have everyone's +1. > > There are also a couple of similar changes under the same review > topic for the openstack/openstack-helm-infra and > openstack/openstack-helm-addons repositories, one of which I managed > to -1 before it could be approved and merged. I don't recall any > follow-up discussion on the legal-discuss at lists.openstack.org or > even openstack-dev at lists.openstack.org mailing lists, which I would > have expected for any change of this legal significance. > > The point of this message is of course not to berate anyone, but to > raise the example which seems to indicate that as a community we've > apparently not done a great job of communicating the legal aspects > of contributing to OpenStack. If there are no objections, I'll push > up a proposed addition to the openstack/governance repository > addressing this semi-frequent misconception and follow up with a > link to the review. I'm also going to post to the legal-discuss ML > so as to make the subscribers there aware of this thread. Yes, please do propose that documentation update in the governance repo. I wonder if we should address this at all in the contributors' guide, too? Perhaps just to link to the published governance docs. Doug From jim at jimrollenhagen.com Tue Aug 28 17:50:46 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 28 Aug 2018 13:50:46 -0400 Subject: [openstack-dev] [ironic] proposing metalsmith for inclusion into ironic governance In-Reply-To: References: Message-ID: On Mon, Aug 27, 2018 at 12:09 PM, Dmitry Tantsur wrote: > Hi all, > > I would like to propose the metalsmith library [1][2] for inclusion into the > bare metal project governance. > > What it is and is not > --------------------- > > Metalsmith is a library and CLI tool for using Ironic+Neutron for > provisioning bare metal nodes.
It can be seen as a lightweight replacement > of Nova when Nova is too much. The primary use case is a single-tenant > standalone installer. > > Metalsmith is not a new service; it does not maintain any state, except > for state maintained by Ironic and Neutron. Metalsmith is not and will not > be a replacement for Nova in any proper cloud scenario. > > Metalsmith does have some overlap with Bifrost, with one important feature > difference: its primary feature is a mini-scheduler that allows picking a > suitable bare metal node for deployment. > > I have a partial convergence plan as well! First, as part of this effort > I'm working on missing features in openstacksdk, which is used in the > OpenStack ansible modules, which are used in Bifrost. Second, I hope we can > use it as a helper for making Bifrost do scheduling decisions. > > Background > ---------- > > Metalsmith was born with the goal of replacing Nova in the TripleO undercloud. > Indeed, the undercloud uses only a small subset of Nova features, while > having features that conflict with Nova's design (for example, bypassing > the scheduler [3]). > > We wanted to avoid putting a lot of provisioning logic into existing > TripleO components. So I wrote a library that does not carry any > TripleO-specific assumptions, but does make it possible to address its needs. > > Why under Ironic > ---------------- > > I believe the goal of Metalsmith is fully aligned with what the Ironic > team is doing around standalone deployment. I think Metalsmith can provide > a nice entry point into standalone deployments for people who (for any > reason) will not use Bifrost. With this change I hope to get more exposure > for it. > > The library itself is small, documented [2], follows OpenStack practices > and does not have particular operating requirements. There is nothing in it > that is not familiar to the Ironic team members. > I agree with all of this, after reading the code/docs. +1 from me.
// jim > > Please let me know if you have any questions or concerns. > > Dmitry > > > [1] https://github.com/openstack/metalsmith > [2] https://metalsmith.readthedocs.io/en/latest/ > [3] http://tripleo.org/install/advanced_deployment/node_placement.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue Aug 28 17:55:53 2018 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 28 Aug 2018 18:55:53 +0100 Subject: [openstack-dev] [ironic] proposing metalsmith for inclusion into ironic governance In-Reply-To: References: Message-ID: +1. I like it. Could also be a good fit for Kayobe's undercloud equivalent at some point. On Tue, 28 Aug 2018 at 18:51, Jim Rollenhagen wrote: > On Mon, Aug 27, 2018 at 12:09 PM, Dmitry Tantsur > wrote: > >> Hi all, >> >> I would like propose the metalsmith library [1][2] for inclusion into the >> bare metal project governance. >> >> What it is and is not >> --------------------- >> >> Metalsmith is a library and CLI tool for using Ironic+Neutron for >> provisioning bare metal nodes. It can be seen as a lightweight replacement >> of Nova when Nova is too much. The primary use case is single-tenant >> standalone installer. >> >> Metalsmith is not a new service, it does not maintain any state, except >> for state maintained by Ironic and Neutron. Metalsmith is not and will not >> be a replacement for Nova in any proper cloud scenario. >> >> Metalsmith does have some overlap with Bifrost, with one important >> feature difference: its primary feature is a mini-scheduler that allows to >> pick a suitable bare metal node for deployment. >> >> I have a partial convergence plan as well! 
First, as part of this effort >> I'm working on missing features in openstacksdk, which is used in the >> OpenStack ansible modules, which are used in Bifrost. Second, I hope we can >> use it as a helper for making Bifrost do scheduling decisions. >> >> Background >> ---------- >> >> Metalsmith was born with the goal of replacing Nova in TripleO >> undercloud. Indeed, the undercloud uses only a small subset of Nova >> features, while having features that conflict with Nova's design (for >> example, bypassing the scheduler [3]). >> >> We wanted to avoid putting a lot of provisioning logic into existing >> TripleO components. So I wrote a library that does not carry any >> TripleO-specific assumptions, but does allow to address its needs. >> >> Why under Ironic >> ---------------- >> >> I believe the goal of Metalsmith is fully aligned with what the Ironic >> team is doing around standalone deployment. I think Metalsmith can provide >> a nice entry point into standalone deployments for people who (for any >> reasons) will not use Bifrost. With this change I hope to get more exposure >> for it. >> >> The library itself is small, documented [2], follows OpenStack practices >> and does not have particular operating requirements. There is nothing in it >> that is not familiar to the Ironic team members. >> > > I agree with all of this, after reading the code/docs. +1 from me. > > // jim > > >> >> Please let me know if you have any questions or concerns. 
>> >> Dmitry >> >> >> [1] https://github.com/openstack/metalsmith >> [2] https://metalsmith.readthedocs.io/en/latest/ >> [3] http://tripleo.org/install/advanced_deployment/node_placement.html >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Tue Aug 28 18:03:57 2018 From: aj at suse.com (Andreas Jaeger) Date: Tue, 28 Aug 2018 20:03:57 +0200 Subject: [openstack-dev] [all][tc][openstack-helm] On the problem of OSF copyright headers In-Reply-To: <1535476064-sup-9230@lrrr.local> References: <20180828165956.astfqwjshf5e2q5m@yuggoth.org> <1535476064-sup-9230@lrrr.local> Message-ID: In this context, there's also the question of copyright headers for documentation files which we do not require - see https://wiki.openstack.org/wiki/Documentation/Copyright This came up recently with: https://review.openstack.org/#/c/593662/ I'm happy to see a canonical place for this information, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From cdent+os at anticdent.org Tue Aug 28 18:06:26 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 28 Aug 2018 19:06:26 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-35 Message-ID: HTML: https://anticdent.org/tc-report-18-35.html I didn't do a TC report last week because we spent much of the time discussing the issues surrounding extracting the placement service from nova. I've been working on making that happen—because that's been the intent from the start—for a couple of years now, so tend to be fairly central to those discussions. It felt inappropriate to use these reports as a bully pulpit and in any case I was exhausted, so took a break. However, the topic was still a factor in the recent week's discussion so I guess we're stuck with it: leaving it out would be odd given that it has occupied such a lot of TC attention _and_ these reports are expressly my subjective opinion. Placement is in the last section, in case you're of a mind to skip it. # The Tech Vision There's been a bit of discussion on the [Draft Technical Vision](https://review.openstack.org/#/c/592205/). First, [generally what it is trying to do and how do dependencies fit in](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-22.log.html#t2018-08-22T09:20:12). This eventually flowed into questioning how much [voice and discretion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-22.log.html#t2018-08-22T18:37:24) individual contributors have with regard to OpenStack overall, as opposed to merely doing what their employers say. There were _widely_ divergent perspectives on this. The truth is probably that everyone has a different experience on a big spectrum. 
# TC Elections and Campaigning As [announced](http://lists.openstack.org/pipermail/openstack-dev/2018-August/133893.html), TC Election Season approaches. We had some discussion [Friday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-24.log.html#t2018-08-24T13:23:35) about making sure that the right skills were present in candidates and that any events we held with regard to campaigning, perhaps at the PTG, were not actively exclusionary. # That Placement Thing The links below are for historical reference, for people who want to catch up. The current state of affairs and immediate plans are being worked out in [this thread](http://lists.openstack.org/pipermail/openstack-dev/2018-August/thread.html#133781), based on a medium term plan of doing the technical work to create a separate and working repo and then get that repo working happily with nova, devstack, grenade and tempest. Technical consensus is being reached, sometimes slowly, sometimes quickly, but discussion is working and several people are participating. The questions about governance are not yet firmly resolved, but the hope is that before the end of the Stein cycle placement ought to be its own official project. In case you're curious about why the TC is involved in this topic at all, there are two reasons: a) Eric asked for advice, b) it is relevant to the TC's role as [ultimate appeals board](https://governance.openstack.org/tc/reference/charter.html). The torrid story goes something like this: While working on a PTG planning etherpad for [extracting placement from nova](https://etherpad.openstack.org/p/placement-extract-stein-copy), there were some questions about the eventual disposition of placement: a project within or beside nova. That resulted in a [huge email thread](http://lists.openstack.org/pipermail/openstack-dev/2018-August/thread.html#133445). 
In the midst of that thread, the nova scheduler meeting raised the question of [how do we decide](http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-08-20-14.00.log.html#l-64)? That got moved to the TC IRC channel and mutated from "how do we decide" to many different topics and perspectives. Thus ensued several hours of [argument](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-20.log.html#t2018-08-20T15:27:57). Followed by a ["wow" reprise](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-21.log.html#t2018-08-21T08:10:01) on Tuesday morning. By Thursday a potential compromise was mooted in the [nova meeting](http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-08-23-14.00.log.html#l-205). However, in the intervening period, several people in the TC, notably Doug and Thierry had expressed a desire to address some of the underlying issues (those that caused so much argument Monday and elsewhere) in a more concrete fashion. I wanted to be sure that they had a chance to provide their input before the compromise deal was sealed. The conversation was moved back to the TC IRC channel asking for input. This led to yet more [tension](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-23.log.html#t2018-08-23T14:48:55). It's not yet clear if that is resolved. All this must seem pretty ridiculous to observers. As is so often the case in community interactions, the tensions that are playing out are not directly tied to any specific technical issues (which, thankfully, are resolving in the short term for placement) but are from the accumulation and aggregation over time of difficulties and frustrations associated with unresolved problems in the exercise and distribution of control and trust, unfinished goals, and unfulfilled promises. 
When changes like the placement extraction come up, they can act as proxies for deep and lingering problems that we have not developed good systems for resolving. What we do instead of investigating the deep issues is address the immediate symptomatic problems in a technical way and try to move on. People who are not satisfied with this have little recourse. They can either move elsewhere or attempt to cope. We've lost plenty of good people as a result. Some of those that choose to stick around get tetchy. If you have thoughts and feelings about these (or any other) deep and systemic issues in OpenStack, anyone in the TC should be happy to speak with you about them. For best results you should be willing to speak about your concerns publicly. If for some reason you are not comfortable doing so, that is itself an issue that needs to be addressed, but starting out privately is welcomed. The big goal here is for OpenStack to be good, as a technical production _and_ as a community. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From fungi at yuggoth.org Tue Aug 28 18:19:12 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 28 Aug 2018 18:19:12 +0000 Subject: [openstack-dev] [all][tc][openstack-helm] On the problem of OSF copyright headers In-Reply-To: <7FE69900-B21F-4725-9EC2-463512137C23@not.mn> References: <20180828165956.astfqwjshf5e2q5m@yuggoth.org> <7FE69900-B21F-4725-9EC2-463512137C23@not.mn> Message-ID: <20180828181912.rj3l3gyfcv7jxkfc@yuggoth.org> On 2018-08-28 10:11:18 -0700 (-0700), John Dickinson wrote: [...] > It would be *really* helpful to have a simple rule or pattern for > each file's header. Something like "Copyright (c) <year created>-present by contributors to this project". I applaud and share your desire for a clear rule on such things. Sadly, I have serious doubts it's possible to get one.
> As you mentioned, this issue comes up about every two years, and having > contributors police (via code review) the appropriate headers for every > commit is not a sustainable pattern. The only thing I'm sure about is that > the existing copyright headers are not correct, but I have no idea what the > correct header are. The point was not really for reviewers (who can't necessarily know whether or not a copyright claim is in any way legitimate), but rather for authors (please don't assign copyright of your works to the OSF). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ed at leafe.com Tue Aug 28 18:20:11 2018 From: ed at leafe.com (Ed Leafe) Date: Tue, 28 Aug 2018 13:20:11 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: <350f3b3c-fa38-c57c-049b-60104b4da788@gmail.com> References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <350f3b3c-fa38-c57c-049b-60104b4da788@gmail.com> Message-ID: <2F07E3D6-AF58-4B7E-B730-05EE9BFF7110@leafe.com> On Aug 28, 2018, at 10:47 AM, Matt Riedemann wrote: > >> 1.1 Run the git filter-branch on a copy of nova >> 1.1.1 Add missing files to the file list: >> 1.1.1.1 .gitignore >> 1.1.1.2 # ANYTHING ELSE? > > Unless I were to actually run the git filter-branch tooling to see what is excluded from the new repo, I can't really say what is missing at this time. I assume it would be clear during review - which is why I'm asking that we do this stuff in gerrit where we do reviews. Since I have run this tool a few times, I can answer that. The tool removes all files and their history from git, saving only those you explicitly ask to keep. Every commit involving those files is kept, along with any other files that were part of that commit. The end result is a much smaller directory tree, and a much smaller .git directory (513M -> 113M). 
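Ed's description of the filtering can be modelled in a few lines (a toy illustration with made-up file names, not git's actual implementation):

```python
# Toy model of what the filter-branch step described above does to
# history: every commit's tree keeps only the listed files, and a
# rewritten commit whose tree matches its rewritten parent is pruned
# (the --prune-empty analogue). File names here are placeholders.

def filter_history(commits, keep):
    """commits: list of {filename: content} trees, oldest first."""
    rewritten, last_tree = [], None
    for tree in commits:
        filtered = {f: c for f, c in tree.items() if f in keep}
        if filtered != last_tree:
            rewritten.append(filtered)
            last_tree = filtered
    return rewritten

history = [
    {"placement.py": "v1", "compute.py": "v1"},  # touches both files
    {"placement.py": "v1", "compute.py": "v2"},  # touches only compute.py
    {"placement.py": "v2", "compute.py": "v2"},  # touches only placement.py
]
print(filter_history(history, keep={"placement.py"}))
# -> [{'placement.py': 'v1'}, {'placement.py': 'v2'}]
```

The pruning step is also why the removed files never show up as a delete commit: they are simply absent from every tree in the rewritten history.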
Because the history is *removed*, running `git diff @^` on the result will not show the many files deleted by the history filter. So if you were looking for a diff in gerrit that shows all those deletions, it won’t be there. You can do a regular linux diff of the filtered directory and a nova directory to get a list of what has been removed, but that can be somewhat messy. I just want to set expectations when reviewing the extracted code in gerrit. -- Ed Leafe From MM9745 at att.com Tue Aug 28 18:24:27 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Tue, 28 Aug 2018 18:24:27 +0000 Subject: [openstack-dev] [all][tc][openstack-helm] On the problem of OSF copyright headers In-Reply-To: <20180828165956.astfqwjshf5e2q5m@yuggoth.org> References: <20180828165956.astfqwjshf5e2q5m@yuggoth.org> Message-ID: <7C64A75C21BB8D43BD75BB18635E4D896CA7A5C8@MOSTLS1MSGUSRFF.ITServices.sbc.com> Sorry for the noise folks - the change was well-intentioned but uninformed! We will revert the changes. Thanks, Matt -----Original Message----- From: Jeremy Stanley Sent: Tuesday, August 28, 2018 12:00 PM To: openstack-dev at lists.openstack.org Cc: MCEUEN, MATT Subject: [all][tc][openstack-helm] On the problem of OSF copyright headers [Obligatory disclaimer: I am not a lawyer, this is not legal advice, and I am not representing the OpenStack Foundation in any legal capacity here.] TL;DR: You should not be putting "Copyright OpenStack Foundation" on content in Git repositories, or anywhere else for that matter (unless you know that you are actually an employee of the OSF or otherwise performing work-for-hire activities at the behest of the OSF). The OpenStack Individual Contributor License Agreement (ICLA) does not require copyright assignment. The foundation does not want, nor does it even generally accept, copyright assignment from developers. 
Your copyrightable contributions are your own, or by proxy are the copyright of your employer if you have created them as a part of any work-for-hire arrangement (unless you've negotiated with your employer to retain copyright of your work). This topic has been raised multiple times in the past. In the wake of a somewhat protracted thread on the legal-discuss at lists.openstack.org mailing list (it actually started out on the openstack-dev mailing list but was then redirected to a more appropriate venue) back in April, 2013, we attempted to record a summary in the wiki article we'd been maintaining regarding various frequently-asked legal questions: https://wiki.openstack.org/wiki/LegalIssuesFAQ#OpenStack_Foundation_Copyright_Headers In the intervening years, we've worked to make sure other important documentation moves out of the wiki and into more durable maintenance (mostly Git repositories under code review, rendered and published to a Web site). I propose that as this particular topic is germane to contributing to the development of OpenStack software, the OpenStack Technical Committee should publish a statement on the governance site similar in nature to or perhaps as an expansion of the https://governance.openstack.org/tc/reference/licensing.html page where we detail copyright licensing expectations for official OpenStack project team deliverables. As I look back through that wiki article, I see a few other sections which may also be appropriate to cover on the governance site. 
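For illustration only (hypothetical company name, and again this is not legal advice), a header consistent with the ICLA keeps the copyright line pointing at the actual author or their employer, with the usual Apache 2.0 boilerplate underneath:

```text
# Copyright 2018 Example Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
```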
The reason I'm re-raising this age-old discussion is because change https://review.openstack.org/596619 came to my attention a few minutes ago, in which some 400+ files within the openstack/openstack-helm repository were updated to assign copyright to "OpenStack Foundation" based on this discussion from an openstack-helm IRC meeting back in March (which seems to have involved no legal representative of the OSF): http://eavesdrop.openstack.org/meetings/openstack_helm/2018/openstack_helm.2018-03-20-15.00.log.html#l-101 There are also a couple of similar changes under the same review topic for the openstack/openstack-helm-infra and openstack/openstack-helm-addons repositories, one of which I managed to -1 before it could be approved and merged. I don't recall any follow-up discussion on the legal-discuss at lists.openstack.org or even openstack-dev at lists.openstack.org mailing lists, which I would have expected for any change of this legal significance. The point of this message is of course not to berate anyone, but to raise the example which seems to indicate that as a community we've apparently not done a great job of communicating the legal aspects of contributing to OpenStack. If there are no objections, I'll push up a proposed addition to the openstack/governance repository addressing this semi-frequent misconception and follow up with a link to the review. I'm also going to post to the legal-discuss ML so as to make the subscribers there aware of this thread. -- Jeremy Stanley From mriedemos at gmail.com Tue Aug 28 20:26:02 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 28 Aug 2018 15:26:02 -0500 Subject: [openstack-dev] [stable][nova] Nominating melwitt for nova stable core Message-ID: <4e8e03b4-175a-96dc-7aa4-d89ddbad2aa5@gmail.com> I hereby nominate Melanie Witt for nova stable core. Mel has shown that she knows the stable branch policy and is also an active reviewer of nova stable changes. 
+1/-1 comes from the stable-maint-core team [1] and then after a week with no negative votes I think it's a done deal. Of course +1/-1 from existing nova-stable-maint [2] is also good feedback. [1] https://review.openstack.org/#/admin/groups/530,members [2] https://review.openstack.org/#/admin/groups/540,members -- Thanks, Matt From mtreinish at kortar.org Tue Aug 28 20:35:22 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Tue, 28 Aug 2018 16:35:22 -0400 Subject: [openstack-dev] [stable][nova] Nominating melwitt for nova stable core In-Reply-To: <4e8e03b4-175a-96dc-7aa4-d89ddbad2aa5@gmail.com> References: <4e8e03b4-175a-96dc-7aa4-d89ddbad2aa5@gmail.com> Message-ID: <20180828203522.GA20320@sinanju.localdomain> On Tue, Aug 28, 2018 at 03:26:02PM -0500, Matt Riedemann wrote: > I hereby nominate Melanie Witt for nova stable core. Mel has shown that she > knows the stable branch policy and is also an active reviewer of nova stable > changes. > > +1/-1 comes from the stable-maint-core team [1] and then after a week with > no negative votes I think it's a done deal. Of course +1/-1 from existing > nova-stable-maint [2] is also good feedback. +1 from me. -Matt Treinish > > [1] https://review.openstack.org/#/admin/groups/530,members > [2] https://review.openstack.org/#/admin/groups/540,members > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zbitter at redhat.com Tue Aug 28 20:47:06 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 28 Aug 2018 16:47:06 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: <1535398507-sup-4428@lrrr.local> References: <1535398507-sup-4428@lrrr.local> Message-ID: On 27/08/18 15:37, Doug Hellmann wrote: > == Next Steps == > > If your team is ready to have your zuul settings migrated, please > let us know by following up to this email. 
We will start with the > volunteers, and then work our way through the other teams. Heat is ready. I already did master (and by extension stable/rocky) a little while back in openstack/heat, but you should check that it's correct :) - ZB From miguel at mlavalle.com Tue Aug 28 20:51:49 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Tue, 28 Aug 2018 15:51:49 -0500 Subject: [openstack-dev] [Neutron] Denver PTG Team Dinner Message-ID: Dear Neutron Team, Our PTG Denver team dinner will take place on Thursday at 7pm, at https://www.famousdaves.com/Denver-Stapleton, 0.5 miles from the Renaissance Hotel, which translates to an easy 10-minute walk: https://goo.gl/JZVg8D I am holding a reservation under my name for 20 people. Please confirm your attendance by adding a "Yes" to the line in the etherpad where you registered your name: https://etherpad.openstack.org/p/neutron-stein-ptg Looking forward to seeing you all there Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Tue Aug 28 21:11:28 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 28 Aug 2018 21:11:28 +0000 Subject: [openstack-dev] [ironic] proposing metalsmith for inclusion into ironic governance In-Reply-To: References: , Message-ID: <1A3C52DFCD06494D8528644858247BF01C1860FA@EX10MBOX03.pnnl.gov> Might be a good option to plug in to the kubernetes cluster api https://github.com/kubernetes-sigs/cluster-api too. Thanks, Kevin ________________________________ From: Mark Goddard [mark at stackhpc.com] Sent: Tuesday, August 28, 2018 10:55 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [ironic] proposing metalsmith for inclusion into ironic governance +1. I like it. Could also be a good fit for Kayobe's undercloud equivalent at some point. 
On Tue, 28 Aug 2018 at 18:51, Jim Rollenhagen > wrote: On Mon, Aug 27, 2018 at 12:09 PM, Dmitry Tantsur > wrote: Hi all, I would like to propose the metalsmith library [1][2] for inclusion into the bare metal project governance. What it is and is not --------------------- Metalsmith is a library and CLI tool for using Ironic+Neutron for provisioning bare metal nodes. It can be seen as a lightweight replacement of Nova when Nova is too much. The primary use case is a single-tenant standalone installer. Metalsmith is not a new service, and it does not maintain any state, except for state maintained by Ironic and Neutron. Metalsmith is not and will not be a replacement for Nova in any proper cloud scenario. Metalsmith does have some overlap with Bifrost, with one important feature difference: its primary feature is a mini-scheduler that allows picking a suitable bare metal node for deployment. I have a partial convergence plan as well! First, as part of this effort I'm working on missing features in openstacksdk, which is used in the OpenStack ansible modules, which are used in Bifrost. Second, I hope we can use it as a helper for making Bifrost do scheduling decisions. Background ---------- Metalsmith was born with the goal of replacing Nova in the TripleO undercloud. Indeed, the undercloud uses only a small subset of Nova features, while having features that conflict with Nova's design (for example, bypassing the scheduler [3]). We wanted to avoid putting a lot of provisioning logic into existing TripleO components. So I wrote a library that does not carry any TripleO-specific assumptions, but does allow addressing its needs. Why under Ironic ---------------- I believe the goal of Metalsmith is fully aligned with what the Ironic team is doing around standalone deployment. I think Metalsmith can provide a nice entry point into standalone deployments for people who (for any reason) will not use Bifrost. With this change I hope to get more exposure for it. 
The library itself is small, documented [2], follows OpenStack practices and does not have particular operating requirements. There is nothing in it that is not familiar to the Ironic team members. I agree with all of this, after reading the code/docs. +1 from me. // jim Please let me know if you have any questions or concerns. Dmitry [1] https://github.com/openstack/metalsmith [2] https://metalsmith.readthedocs.io/en/latest/ [3] http://tripleo.org/install/advanced_deployment/node_placement.html __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Aug 28 21:26:57 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 28 Aug 2018 16:26:57 -0500 Subject: [openstack-dev] [stable][nova] Nominating melwitt for nova stable core In-Reply-To: <4e8e03b4-175a-96dc-7aa4-d89ddbad2aa5@gmail.com> References: <4e8e03b4-175a-96dc-7aa4-d89ddbad2aa5@gmail.com> Message-ID: <20180828212656.GA23018@sm-workstation> On Tue, Aug 28, 2018 at 03:26:02PM -0500, Matt Riedemann wrote: > I hereby nominate Melanie Witt for nova stable core. Mel has shown that she > knows the stable branch policy and is also an active reviewer of nova stable > changes. > > +1/-1 comes from the stable-maint-core team [1] and then after a week with > no negative votes I think it's a done deal. Of course +1/-1 from existing > nova-stable-maint [2] is also good feedback. 
> > [1] https://review.openstack.org/#/admin/groups/530,members > [2] https://review.openstack.org/#/admin/groups/540,members > +1 from me. From davanum at gmail.com Tue Aug 28 21:33:59 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Tue, 28 Aug 2018 17:33:59 -0400 Subject: [openstack-dev] [stable][nova] Nominating melwitt for nova stable core In-Reply-To: <20180828212656.GA23018@sm-workstation> References: <4e8e03b4-175a-96dc-7aa4-d89ddbad2aa5@gmail.com> <20180828212656.GA23018@sm-workstation> Message-ID: +1 from me On Tue, Aug 28, 2018 at 5:27 PM Sean McGinnis wrote: > On Tue, Aug 28, 2018 at 03:26:02PM -0500, Matt Riedemann wrote: > > I hereby nominate Melanie Witt for nova stable core. Mel has shown that > she > > knows the stable branch policy and is also an active reviewer of nova > stable > > changes. > > > > +1/-1 comes from the stable-maint-core team [1] and then after a week > with > > no negative votes I think it's a done deal. Of course +1/-1 from existing > > nova-stable-maint [2] is also good feedback. > > > > [1] https://review.openstack.org/#/admin/groups/530,members > > [2] https://review.openstack.org/#/admin/groups/540,members > > > > +1 from me. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Tue Aug 28 21:42:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 28 Aug 2018 17:42:29 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: References: <1535398507-sup-4428@lrrr.local> Message-ID: <1535492470-sup-670@lrrr.local> Excerpts from Zane Bitter's message of 2018-08-28 16:47:06 -0400: > On 27/08/18 15:37, Doug Hellmann wrote: > > == Next Steps == > > > > If your team is ready to have your zuul settings migrated, please > > let us know by following up to this email. We will start with the > > volunteers, and then work our way through the other teams. > > Heat is ready. I already did master (and by extension stable/rocky) a > little while back in openstack/heat, but you should check that it's > correct :) > > - ZB > OK, I ran the scripts and generated a bunch of patches for all of your repositories. You will want to review the ones on master carefully if you have already done part of the work. Let me know if you have questions about resolving any conflicts. 
Doug

+----------------------------------------------+-------------------------------+---------------+
| Subject                                      | Repo                          | Branch        |
+----------------------------------------------+-------------------------------+---------------+
| import zuul job settings from project-config | openstack-dev/heat-cfnclient  | master        |
| add python 3.6 unit test job                 | openstack-dev/heat-cfnclient  | master        |
| add python 3.6 unit test job                 | openstack-dev/heat-cfnclient  | master        |
| convert py35 jobs to py3                     | openstack/heat                | master        |
| import zuul job settings from project-config | openstack/heat                | master        |
| switch documentation job to new PTI          | openstack/heat                | master        |
| add python 3.6 unit test job                 | openstack/heat                | master        |
| import zuul job settings from project-config | openstack/heat                | stable/ocata  |
| import zuul job settings from project-config | openstack/heat                | stable/pike   |
| import zuul job settings from project-config | openstack/heat                | stable/queens |
| import zuul job settings from project-config | openstack/heat                | stable/rocky  |
| import zuul job settings from project-config | openstack/heat-agents         | master        |
| switch documentation job to new PTI          | openstack/heat-agents         | master        |
| add python 3.6 unit test job                 | openstack/heat-agents         | master        |
| import zuul job settings from project-config | openstack/heat-agents         | stable/ocata  |
| import zuul job settings from project-config | openstack/heat-agents         | stable/pike   |
| import zuul job settings from project-config | openstack/heat-agents         | stable/queens |
| import zuul job settings from project-config | openstack/heat-cfntools       | master        |
| switch documentation job to new PTI          | openstack/heat-cfntools       | master        |
| add python 3.6 unit test job                 | openstack/heat-cfntools       | master        |
| import zuul job settings from project-config | openstack/heat-dashboard      | master        |
| switch documentation job to new PTI          | openstack/heat-dashboard      | master        |
| import zuul job settings from project-config | openstack/heat-dashboard      | stable/queens |
| import zuul job settings from project-config | openstack/heat-specs          | master        |
| import zuul job settings from project-config | openstack/heat-tempest-plugin | master        |
| fix tox python3 overrides                    | openstack/heat-templates      | master        |
| import zuul job settings from project-config | openstack/heat-translator     | master        |
| switch documentation job to new PTI          | openstack/heat-translator     | master        |
| add python 3.6 unit test job                 | openstack/heat-translator     | master        |
| import zuul job settings from project-config | openstack/heat-translator     | stable/pike   |
| import zuul job settings from project-config | openstack/heat-translator     | stable/queens |
| import zuul job settings from project-config | openstack/heat-translator     | stable/rocky  |
| import zuul job settings from project-config | openstack/python-heatclient   | master        |
| switch documentation job to new PTI          | openstack/python-heatclient   | master        |
| add python 3.6 unit test job                 | openstack/python-heatclient   | master        |
| import zuul job settings from project-config | openstack/python-heatclient   | stable/ocata  |
| import zuul job settings from project-config | openstack/python-heatclient   | stable/pike   |
| import zuul job settings from project-config | openstack/python-heatclient   | stable/queens |
| import zuul job settings from project-config | openstack/python-heatclient   | stable/rocky  |
| import zuul job settings from project-config | openstack/tosca-parser        | master        |
| switch documentation job to new PTI          | openstack/tosca-parser        | master        |
| add python 3.6 unit test job                 | openstack/tosca-parser        | master        |
| import zuul job settings from project-config | openstack/tosca-parser        | stable/pike   |
| import zuul job settings from project-config | openstack/tosca-parser        | stable/queens |
| import zuul job settings from project-config | openstack/tosca-parser        | stable/rocky  |
+----------------------------------------------+-------------------------------+---------------+

From doug at doughellmann.com Tue Aug 28 21:44:45 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 28 Aug 2018 17:44:45 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: <1535492470-sup-670@lrrr.local> References: <1535398507-sup-4428@lrrr.local> <1535492470-sup-670@lrrr.local> Message-ID: <1535492628-sup-529@lrrr.local> Excerpts from Doug Hellmann's message of 2018-08-28 17:42:29 -0400: > Excerpts from Zane Bitter's message of 2018-08-28 16:47:06 -0400: > > On 27/08/18 15:37, Doug Hellmann wrote: > > > == Next Steps == > > > > > > If your team is ready to have your zuul settings migrated, please > > > let us know by following up to this email. We will start with the > > > volunteers, and then work our way through the other teams. > > > > Heat is ready. I already did master (and by extension stable/rocky) a > > little while back in openstack/heat, but you should check that it's > > correct :) > > > > - ZB > > > > OK, I ran the scripts and generated a bunch of patches for all of > your repositories. You will want to review the ones on master > carefully if you have already done part of the work. Let me know > if you have questions about resolving any conflicts. Small correction: My query tool that built the list below doesn't differentiate based on author, so it looks like some of those patches are actually created by other folks and using the 'python3-first' topic. 
Doug

> [...]

From doug at doughellmann.com Tue Aug 28 21:46:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 28 Aug 2018 17:46:09 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: References: <1535398507-sup-4428@lrrr.local> Message-ID: <1535492703-sup-9104@lrrr.local> Excerpts from Michel Peterson's message of 2018-08-28 16:30:02 +0300: > On Mon, Aug 27, 2018 at 10:37 PM, Doug Hellmann > wrote: > > > > > If your team is ready to have your zuul settings migrated, please > > let us know by following up to this email. We will start with the > > volunteers, and then work our way through the other teams. > > > > The networking-odl team is willing to volunteer for this. networking-odl is part of the neutron team, and the tools are set up to work based on full (not partial) teams. So, if the neutron team is ready we can go ahead with those. Let us know. Doug From doug at doughellmann.com Tue Aug 28 21:58:31 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 28 Aug 2018 17:58:31 -0400 Subject: [openstack-dev] [goal][python3] goal champion tools Message-ID: <1535493178-sup-3554@lrrr.local> The story with all of the tasks for tracking the zuul migration has so many comments it will no longer display in my browser. So, I've created 2 tools to look at the data from the command line. https://review.openstack.org/597245 has a tool to list the task assignments https://review.openstack.org/597244 has a tool to change the task assignment for 1 task The UX for the assignment tool is terrible, so you have to provide your numerical user ID. The easiest way to find that may be to use the list tool and copy it from one of the tasks already assigned to you. 
Doug From ekcs.openstack at gmail.com Tue Aug 28 22:06:54 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 28 Aug 2018 15:06:54 -0700 Subject: [openstack-dev] [tempest][qa][congress] trouble setting tempest feature flag In-Reply-To: References: Message-ID: Any thoughts on what could be going wrong that the tempest tests still see the default conf values rather than those set here? Thanks lots! Here is the devstack log line showing the flags being set: http://logs.openstack.org/64/594564/4/check/congress-devstack-api-mysql/ce34264/logs/devstacklog.txt.gz#_2018-08-28_21_23_15_934 On Wed, Aug 22, 2018 at 9:12 AM Eric K wrote: > > Hi all, > > I have added feature flags for the congress tempest plugin [1] and set > them in the devstack plugin [2], but the flags seem to be ignored. The > tests are skipped [3] according to the default False flag rather than > run according to the True flag set in devstack plugin. Any hints on > what may be wrong? Thanks so much! > > [1] https://review.openstack.org/#/c/594747/3 > [2] https://review.openstack.org/#/c/594793/1/devstack/plugin.sh > [3] http://logs.openstack.org/64/594564/3/check/congress-devstack-api-mysql/b2cd46f/logs/testr_results.html.gz > (the bottom two skipped tests were expected to run) From naichuan.sun at citrix.com Tue Aug 28 22:59:51 2018 From: naichuan.sun at citrix.com (Naichuan Sun) Date: Tue, 28 Aug 2018 22:59:51 +0000 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> <5661e4f6-c6e3-ed1a-b27d-ee1bc07b575c@gmail.com> Message-ID: Thank you very much for the help, Bob, Jay and Eric. 
Naichuan Sun -----Original Message----- From: Bob Ball [mailto:bob.ball at citrix.com] Sent: Wednesday, August 29, 2018 12:22 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update > Yeah, the nova.CONF cpu_allocation_ratio is being overridden to 0.0: The default there is 0.0[1] - and the passing tempest-full from Zuul on https://review.openstack.org/#/c/590041/ has the same line when reading the config[2]: We'll have a dig to see if we can figure out why it's not defaulting to 16 in the ComputeNode. Thanks! Bob [1] https://git.openstack.org/cgit/openstack/nova/tree/nova/conf/compute.py#n386 [2] http://logs.openstack.org/41/590041/17/check/tempest-full/b3f9ddd/controller/logs/screen-n-cpu.txt.gz#_Aug_27_14_18_24_078058 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Tue Aug 28 23:11:13 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 28 Aug 2018 19:11:13 -0400 Subject: [openstack-dev] [all][election] TC Election Season In-Reply-To: <20180828051500.GM26778@thor.bakeyournoodle.com> References: <20180828051500.GM26778@thor.bakeyournoodle.com> Message-ID: <1535497617-sup-8077@lrrr.local> Excerpts from Tony Breeds's message of 2018-08-28 15:15:00 +1000: > Election details: https://governance.openstack.org/election/ > > Please read the stipulations and timelines for candidates and > electorate contained in this governance documentation. > > There will be further announcements posted to the mailing list as > action is required from the electorate or candidates. This email > is for information purposes only. 
> > If you have any questions which you feel affect others please reply > to this email thread. > > If you have any questions that you which to discuss in private please > email any of the election officials[1] so that we may address your > concerns. > > Thank you, > > [1] https://governance.openstack.org/election/#election-officials > > > Yours Tony. This is a good time for everyone in the community to start thinking about who you want to have serving on the TC, and to encourage those folks to run. Don't rely on them to nominate themselves without being prompted. Doug From ekcs.openstack at gmail.com Tue Aug 28 23:20:37 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 28 Aug 2018 16:20:37 -0700 Subject: [openstack-dev] [tempest][qa][congress] trouble setting tempest feature flag In-Reply-To: References: Message-ID: Ha. Turned out to be a simple mistake in hyphens vs underscores. On Tue, Aug 28, 2018 at 3:06 PM Eric K wrote: > > Any thoughts on what could be going wrong that the tempest tests still > see the default conf values rather than those set here? Thanks lots! > > Here is the devstack log line showing the flags being set: > http://logs.openstack.org/64/594564/4/check/congress-devstack-api-mysql/ce34264/logs/devstacklog.txt.gz#_2018-08-28_21_23_15_934 > > On Wed, Aug 22, 2018 at 9:12 AM Eric K wrote: > > > > Hi all, > > > > I have added feature flags for the congress tempest plugin [1] and set > > them in the devstack plugin [2], but the flags seem to be ignored. The > > tests are skipped [3] according to the default False flag rather than > > run according to the True flag set in devstack plugin. Any hints on > > what may be wrong? Thanks so much! 
> > > > [1] https://review.openstack.org/#/c/594747/3 > > [2] https://review.openstack.org/#/c/594793/1/devstack/plugin.sh > > [3] http://logs.openstack.org/64/594564/3/check/congress-devstack-api-mysql/b2cd46f/logs/testr_results.html.gz > > (the bottom two skipped tests were expected to run) From sukhdevkapur at gmail.com Wed Aug 29 01:36:03 2018 From: sukhdevkapur at gmail.com (Sukhdev Kapur) Date: Tue, 28 Aug 2018 18:36:03 -0700 Subject: [openstack-dev] [Neutron] Tungsten Fabric (formerly OpenContrail) at Denver PTG Message-ID: The Tungsten Fabric community invites you to join us at the OpenStack PTG in Denver to discuss and contribute to two great projects: Tungsten Fabric and Networking-OpenContrail. We’ll be meeting on Tuesday, September 11, in Room Telluride B of the Renaissance Denver Stapleton Hotel from 9am - 6pm. Here’s the agenda: 9am - 1:00 pm: *Networking-OpenContrail* Networking-OpenContrail is the OpenStack Neutron ML2 driver that integrates Tungsten Fabric with OpenStack. It is designed to eventually replace the old monolithic driver. This session will provide an update on the project, and then we’ll discuss the next steps. 1:00-2:00: Lunch 2:00-6:00: *Tungsten Fabric* In this session, we’ll start with a brief overview of Tungsten Fabric for the benefit of those who may be new to the project. Then we’ll dive in deeper, discussing the Tungsten Fabric architecture, developer workflow, getting patches into Gerrit, and building and testing TF for OpenStack and Kubernetes. We hope you’ll join us: *Register* for the PTG here, and please let us know if you’ll attend these sessions: RSVP Sukhdev Kapur and TF Networking Team IRC - Sukhdev sukhdevkapur at gmail.com -------------- next part -------------- An HTML attachment was scrubbed...
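The hyphen/underscore mix-up Eric mentions in the congress tempest feature-flag thread above is easy to reproduce with plain INI handling. A minimal sketch — the section name `congress-tempest` is made up for illustration and is not the real plugin's option group:

```python
import configparser

# Hypothetical devstack-style config: a section written with a hyphen
# is a different key from the one read back with an underscore.
written = configparser.ConfigParser()
written.read_string("[congress-tempest]\nenforce_policy = True\n")

# A consumer that looks the section up with underscores finds nothing
# and silently falls back to its default, so the tests get skipped.
print(written.has_section("congress_tempest"))   # False -> default used
print(written.has_section("congress-tempest"))   # True
```

The lookup fails silently rather than raising, which is why this class of mistake tends to surface only as unexpectedly skipped tests.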
URL: From whayutin at redhat.com Wed Aug 29 02:02:35 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 28 Aug 2018 20:02:35 -0600 Subject: [openstack-dev] [TripleO][kolla-ansible][DevStack][Tempest][openstack-ansible] Collaborate towards creating a unified ansible tempest role in openstack-ansible project In-Reply-To: References: <6F13BBD8-679B-4F3E-8585-2F90F6A5F077@rackspace.co.uk> Message-ID: On Mon, Aug 27, 2018 at 12:43 PM Mohammed Naser wrote: > Hi Chandan, > > This is great, I added some more OSA-side comments, I'd love for us to > find sometime to sit down to discuss this at the PTG. > > Thanks, > Mohammed > > On Mon, Aug 27, 2018 at 12:39 PM, Jesse Pretorius > wrote: > >>On 8/27/18, 7:33 AM, "Chandan kumar" wrote: > > > >> I have summarized the problem statement and requirements on this > etherpad [3]. > >> Feel free to add your requirements and questions for the same on the > >> etherpad so that we can shape the unified ansible role in a better > >> way. > > > >> Links: > >> 1. > http://lists.openstack.org/pipermail/openstack-dev/2018-August/133119.html > >> 2. https://github.com/openstack/openstack-ansible-os_tempest > >> 3. https://etherpad.openstack.org/p/ansible-tempest-role > > > > Thanks for compiling this Chandan. I've added the really base > requirements from an OSA standpoint that come to mind and a question that's > been hanging in the recesses of my mind for a while. > > > > > Thanks Chandan for kicking this off! > > ________________________________ > > Rackspace Limited is a company registered in England & Wales (company > registered number 038 > 97010) > whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex > UB3 4AZ. Rackspace Limited privacy policy can be viewed at > www.rackspace.co.uk/legal/privacy-policy - This e-mail message may > contain confidential or privileged information intended for the recipient. > Any dissemination, distribution or copying of the enclosed material is > prohibited. 
If you receive this transmission in error, please notify us > immediately by e-mail at abuse at rackspace.com and delete the original > message. Your cooperation is appreciated. > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 <(514)%20316-8872> > D. 800-910-1726 ext. 200 <(800)%20910-1726> > E. mnaser at vexxhost.com > W. http://vexxhost.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Wed Aug 29 02:42:52 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 29 Aug 2018 12:42:52 +1000 Subject: [openstack-dev] [stable][nova] Nominating melwitt for nova stable core In-Reply-To: <4e8e03b4-175a-96dc-7aa4-d89ddbad2aa5@gmail.com> References: <4e8e03b4-175a-96dc-7aa4-d89ddbad2aa5@gmail.com> Message-ID: <20180829024252.GP26778@thor.bakeyournoodle.com> On Tue, Aug 28, 2018 at 03:26:02PM -0500, Matt Riedemann wrote: > I hereby nominate Melanie Witt for nova stable core. Mel has shown that she > knows the stable branch policy and is also an active reviewer of nova stable > changes. 
> > +1/-1 comes from the stable-maint-core team [1] and then after a week with > no negative votes I think it's a done deal. Of course +1/-1 from existing > nova-stable-maint [2] is also good feedback. > > [1] https://review.openstack.org/#/admin/groups/530,members > [2] https://review.openstack.org/#/admin/groups/540,members +1 from me! Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From nguyentrihai93 at gmail.com Wed Aug 29 03:02:12 2018 From: nguyentrihai93 at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gVHLDrSBI4bqjaQ==?=) Date: Wed, 29 Aug 2018 12:02:12 +0900 Subject: [openstack-dev] [goal][python3] goal champion tools In-Reply-To: <1535493178-sup-3554@lrrr.local> References: <1535493178-sup-3554@lrrr.local> Message-ID: For dealing with this problem, I turn off all the Timeline events in Preferences. On Wed, Aug 29, 2018 at 6:58 AM Doug Hellmann wrote: > The story with all of the tasks for tracking the zuul migration has so > many comments it will no longer display in my browser. So, I've created > 2 tools to look at the data from the command line. > > https://review.openstack.org/597245 has a tool to list the task > assignments > > https://review.openstack.org/597244 has a tool to change the task > assignment for 1 task > > The UX for the assignment tool is terrible, so you have to provide > your numerical user ID. The easiest way to find that may be to use > the list tool and copy it from one of the tasks already assigned > to you. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Nguyen Tri Hai / Ph.D. 
Student ANDA Lab., Soongsil Univ., Seoul, South Korea -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Wed Aug 29 04:43:32 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 29 Aug 2018 06:43:32 +0200 Subject: [openstack-dev] [infra] Retiring some unused repositories Message-ID: <4b253470-706c-3618-e0ae-cedd9784f106@suse.com> The infra team plans to retire the following repositories: openstack-infra/activity-board openstack-infra/beaker-localhost openstack-infra/beaker-nodepool openstack-infra/err2d2 openstack-infra/js-afs-blob-store openstack-infra/js-openstack-registry-hooks openstack-infra/js-generator-openstack openstack-infra/pypi-mirror openstack-infra/trystack-site openstack-infra/puppet-vinz openstack-infra/vinz openstack-infra/vinz-webclient Explanation: openstack-infra/activity-board: Was part of bitergia work with openstack which is no longer done openstack-infra/beaker-localhost openstack-infra/beaker-nodepool These may have been attempts at having puppet beaker testing play nice with our CI but we hacked around it by pointing the beaker config at localhost. There are no commits here, so it should be safe to retire. openstack-infra/err2d2 Part of a plan to make our IRC bots errbot driven but more recent spec has us sticking with supybot fork and plugins. openstack-infra/js-afs-blob-store openstack-infra/js-openstack-registry-hooks Allowed us to mirror npm packages on afs until npm grew too large. I think we can retire this as we don't mirror npm any longer. openstack-infra/js-generator-openstack Appears to be a cookiecutter-like system for openstack js projects. Maybe we should just use cookiecutter? openstack-infra/pypi-mirror Was tooling to make a mirror of a subset of pypi. We moved to bandersnatch and now just proxy cache pypi.
openstack-infra/trystack-site Trystack has been retired due to spam and the passport program is suggested instead. openstack-infra/puppet-vinz openstack-infra/vinz openstack-infra/vinz-webclient Project to provide an alternate Gerrit UI that never got commits. Process is started with https://review.openstack.org/597370, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From yamamoto at midokura.com Wed Aug 29 06:06:44 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Wed, 29 Aug 2018 15:06:44 +0900 Subject: [openstack-dev] [neutron] tox-siblings alternative for local testing Message-ID: hi, after a recent change [1], neutron-fwaas' unit tests need an unreleased version of neutron. (the one including the corresponding change [2]) while it's handled by tox-siblings on the gate, it requires extra steps if you want to run the tests locally. is there any preferred solution for this? i guess the simplest solution is to make an intermediate release of neutron and publish it on pypi. i wonder if it's acceptable or not. [1] https://review.openstack.org/#/c/596971/ [2] https://review.openstack.org/#/c/586525/ From xiang.edison at gmail.com Wed Aug 29 06:36:45 2018 From: xiang.edison at gmail.com (Edison Xiang) Date: Wed, 29 Aug 2018 14:36:45 +0800 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API Message-ID: Hi team, As we know, Open API 3.0 was released in July 2017, about one year ago. Open API 3.0 supports some new features, such as anyOf, oneOf and allOf, that Open API 2.0 (Swagger 2.0) does not. Now OpenStack projects do not support Open API. Also I found some old emails in the mailing list about supporting Open API 2.0 in OpenStack. Some limitations are mentioned in the mailing list for the OpenStack API: 1. The POST */action APIs.
These APIs exist in lots of projects like nova and cinder. These APIs have the same URI but the responses will be different when the request is different. 2. Micro versions. These are controlled via headers, which are sometimes used to describe behavioral changes in an API, not just request/response schema changes. About the first limitation, we can find the solution in Open API 3.0. The example [2] shows that we can define different requests/responses for the same URI with the anyOf feature in Open API 3.0. About the micro versions problem, I think it is not a limitation related to a specific API standard. We can list all micro version API schema files in one directory like nova/V2, or we can list the schema changes between micro versions as the tempest project did [3]. Based on Open API 3.0, it can bring lots of benefits for the OpenStack Community and does not impact the current features the Community has. For example, we can automatically generate API documents and different language Clients (SDKs), maybe for different micro versions, and generate cloud tool adapters for OpenStack, like ansible modules, terraform providers and so on. Also we can make an API UI to provide an online and visible API search and API calling for every OpenStack API. 3rd party developers can also do some self-defined development. [1] https://github.com/OAI/OpenAPI-Specification [2] https://github.com/edisonxiang/OpenAPI-Specification/blob/master/examples/v3.0/petstore.yaml#L94-L109 [3] https://github.com/openstack/tempest/tree/master/tempest/lib/api_schema/response/compute Best Regards, Edison Xiang -------------- next part -------------- An HTML attachment was scrubbed...
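A minimal sketch of the oneOf approach for a POST */action URI — the schema names and action bodies below are illustrative only, not an official OpenStack definition; the point is that Open API 3.0 lets one URI carry several mutually exclusive request bodies:

```yaml
# Sketch only: shows OAS 3.0 oneOf for a multi-action endpoint.
paths:
  /servers/{server_id}/action:
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              oneOf:            # exactly one action per request
                - $ref: '#/components/schemas/RebootAction'
                - $ref: '#/components/schemas/ResizeAction'
      responses:
        '202':
          description: Action accepted
components:
  schemas:
    RebootAction:
      type: object
      required: [reboot]
      properties:
        reboot:
          type: object
          properties:
            type:
              type: string
              enum: [HARD, SOFT]
    ResizeAction:
      type: object
      required: [resize]
      properties:
        resize:
          type: object
          properties:
            flavorRef:
              type: string
```

Under Open API 2.0 the single schema per operation made this pattern impossible to express, which is why the */action APIs were listed as a blocker in the earlier discussions.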
URL: From aj at suse.com Wed Aug 29 07:08:28 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 29 Aug 2018 09:08:28 +0200 Subject: [openstack-dev] [infra] Retiring some unused repositories In-Reply-To: <4b253470-706c-3618-e0ae-cedd9784f106@suse.com> References: <4b253470-706c-3618-e0ae-cedd9784f106@suse.com> Message-ID: <9df354d6-ba76-cd73-40ef-c69e9ab98543@suse.com> On 2018-08-29 06:43, Andreas Jaeger wrote: > The infra team plans to retire the following repositories: >  [...] One more repository to retire: openstack-infra/pynotedb Part of storyboard effort but unused and no further interest in it. Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From aschadin at sbcloud.ru Wed Aug 29 07:42:27 2018 From: aschadin at sbcloud.ru (=?utf-8?B?0KfQsNC00LjQvSDQkNC70LXQutGB0LDQvdC00YAg0KHQtdGA0LPQtdC10LI=?= =?utf-8?B?0LjRhw==?=) Date: Wed, 29 Aug 2018 07:42:27 +0000 Subject: [openstack-dev] [watcher] weekly meeting Message-ID: <950C8598-2C5C-44C8-9A10-2A93DB6CFF7D@sbcloud.ru> Greetings, We’ll have meeting today at 8:00 UTC on #openstack-meeting-3 channel. Best Regards, ____ Alex -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From naichuan.sun at citrix.com Wed Aug 29 09:26:16 2018 From: naichuan.sun at citrix.com (Naichuan Sun) Date: Wed, 29 Aug 2018 09:26:16 +0000 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> <5661e4f6-c6e3-ed1a-b27d-ee1bc07b575c@gmail.com> Message-ID: <77a83f596cc04aa685b7b7eb2aa9d985@SINPEX02CL01.citrite.net> Hi, Eric and Jay, The VCPU/Disk/RAM allocation ratios are set to 0.0 by default, and the resource tracker would reset them to valid values in https://github.com/openstack/nova/blob/master/nova/objects/compute_node.py#L199. But it looks like the value is set back to 0.0 by some function (I'm not sure which one...), so the xenserver CI is broken. Any suggestion about that? It looks like libvirt works well; do they set the allocation ratio in the config file? Thank you very much. BR. Naichuan Sun -----Original Message----- From: Naichuan Sun Sent: Wednesday, August 29, 2018 7:00 AM To: OpenStack Development Mailing List (not for usage questions) Subject: RE: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update Thank you very much for the help, Bob, Jay and Eric. Naichuan Sun -----Original Message----- From: Bob Ball [mailto:bob.ball at citrix.com] Sent: Wednesday, August 29, 2018 12:22 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update > Yeah, the nova.CONF cpu_allocation_ratio is being overridden to 0.0: The default there is 0.0[1] - and the passing tempest-full from Zuul on https://review.openstack.org/#/c/590041/ has the same line when reading the config[2]: We'll have a dig to see if we can figure out why it's not defaulting to 16 in the ComputeNode. Thanks!
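For reference, pinning the ratios explicitly in the CI node's nova.conf avoids inheriting the 0.0 sentinel default entirely — a sketch, where 16.0/1.5/1.0 are the historical nova defaults and are shown purely for illustration:

```ini
# Hypothetical nova.conf fragment for the CI node: with the ratios set
# explicitly, the ComputeNode never has to fall back to the 0.0 default.
[DEFAULT]
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0
```

This is a workaround rather than a root-cause fix; the underlying question of which code path writes 0.0 back into the ComputeNode remains open in the thread.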
Bob [1] https://git.openstack.org/cgit/openstack/nova/tree/nova/conf/compute.py#n386 [2] http://logs.openstack.org/41/590041/17/check/tempest-full/b3f9ddd/controller/logs/screen-n-cpu.txt.gz#_Aug_27_14_18_24_078058 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aj at suse.com Wed Aug 29 10:24:41 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 29 Aug 2018 12:24:41 +0200 Subject: [openstack-dev] [infra] Retiring some unused repositories In-Reply-To: <9df354d6-ba76-cd73-40ef-c69e9ab98543@suse.com> References: <4b253470-706c-3618-e0ae-cedd9784f106@suse.com> <9df354d6-ba76-cd73-40ef-c69e9ab98543@suse.com> Message-ID: On 2018-08-29 09:08, Andreas Jaeger wrote: > On 2018-08-29 06:43, Andreas Jaeger wrote: >> The infra team plans to retire the following repositories: >>  [...] > > One more repository to retire: > > openstack-infra/pynotedb > Part of storyboard effort but unused and no further interest in it. Further discussion needed for the one above. I'm adding these two now as well to retire: openstack-infra/releasestatus openstack-infra/puppet-releasestatus releasestatus is not used since December 2015, see https://review.openstack.org/#/c/254817 Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From davanum at gmail.com Wed Aug 29 10:31:23 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Wed, 29 Aug 2018 06:31:23 -0400 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: References: Message-ID: Edison, This is definitely a step in the right direction if we can pull it off. 
Given the previous experiences and the current situation of how and where we store the information currently and how we generate the website for the API(s), can you please outline - what would be the impact to projects? - what steps they would have to take? Also, the whole point of having these definitions is that the generated code works. Do we have a sample/mock API where we can show that the Action and Microversions can be declared to reflect reality and it can actually work with the generated code? Thanks, Dims On Wed, Aug 29, 2018 at 2:37 AM Edison Xiang wrote: > Hi team, > > As we know, Open API 3.0 was released on July, 2017, it is about one year > ago. > Open API 3.0 support some new features like anyof, oneof and allof than > Open API 2.0(Swagger 2.0). > Now OpenStack projects do not support Open API. > Also I found some old emails in the Mail List about supporting Open API > 2.0 in OpenStack. > > Some limitations are mentioned in the Mail List for OpenStack API: > 1. The POST */action APIs. > These APIs are exist in lots of projects like nova, cinder. > These APIs have the same URI but the responses will be different when > the request is different. > 2. Micro versions. > These are controller via headers, which are sometimes used to describe > behavioral changes in an API, not just request/response schema changes. > > About the first limitation, we can find the solution in the Open API 3.0. > The example [2] shows that we can define different request/response in the > same URI by anyof feature in Open API 3.0. > > About the micro versions problem, I think it is not a limitation related a > special API Standard. > We can list all micro versions API schema files in one directory like > nova/V2, > or we can list the schema changes between micro versions as tempest > project did [3]. > > Based on Open API 3.0, it can bring lots of benefits for OpenStack > Community and does not impact the current features the Community has. 
> For example, we can automatically generate API documents, different > language Clients(SDK) maybe for different micro versions, > and generate cloud tool adapters for OpenStack, like ansible module, > terraform providers and so on. > Also we can make an API UI to provide an online and visible API search, > API Calling for every OpenStack API. > 3rd party developers can also do some self-defined development. > > [1] https://github.com/OAI/OpenAPI-Specification > [2] > https://github.com/edisonxiang/OpenAPI-Specification/blob/master/examples/v3.0/petstore.yaml#L94-L109 > [3] > https://github.com/openstack/tempest/tree/master/tempest/lib/api_schema/response/compute > > Best Regards, > Edison Xiang > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From rgerganov at vmware.com Wed Aug 29 10:59:26 2018 From: rgerganov at vmware.com (Radoslav Gerganov) Date: Wed, 29 Aug 2018 13:59:26 +0300 Subject: [openstack-dev] [nova][vmware] need help triaging a vmware driver bug In-Reply-To: <07bbd498-69e8-56ff-5e01-83ef0eea4cfd@gmail.com> References: <45e95976-1e14-c466-8b4f-45aff35df4fb@gmail.com> <07bbd498-69e8-56ff-5e01-83ef0eea4cfd@gmail.com> Message-ID: On 23.08.2018 23:27, melanie witt wrote: > > So, I think we could add something to the launchpad bug template to link to a doc that explains tips about reporting VMware related bugs. 
I suggest linking to a doc because the bug template is already really long and looks like it would be best to have something short, like, "For tips on reporting VMware virt driver bugs, see this doc: " and provide a link to, for example, a openstack wiki about the VMware virt driver (is there one?). The question is, where can we put the doc? Wiki? Or maybe here at the bottom [1]? Let me know what you think. > Sorry for the late reply, I was on PTO last week. I have posted a patch which adds a "Troubleshooting" section to the VMware documentation in Nova: https://review.openstack.org/#/c/597446 If this is OK then we can add a link to this particular paragraph in the bug template. Thanks, Rado From jaypipes at gmail.com Wed Aug 29 12:11:47 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 29 Aug 2018 08:11:47 -0400 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: References: Message-ID: <6d6d19ae-0b81-ce3d-3a7f-c8e0cc0ad0b3@gmail.com> On 08/29/2018 02:36 AM, Edison Xiang wrote: > Based on Open API 3.0, it can bring lots of benefits for OpenStack > Community and does not impact the current features the Community has. > 3rd party developers can also do some self-defined development. Hi Edison, Would you mind expanding on what you are referring to with the above line about 3rd party developers doing self-defined development? Thanks! 
-jay From jaypipes at gmail.com Wed Aug 29 12:39:08 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 29 Aug 2018 08:39:08 -0400 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: <77a83f596cc04aa685b7b7eb2aa9d985@SINPEX02CL01.citrite.net> References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> <5661e4f6-c6e3-ed1a-b27d-ee1bc07b575c@gmail.com> <77a83f596cc04aa685b7b7eb2aa9d985@SINPEX02CL01.citrite.net> Message-ID: I think the immediate solution would be to just set cpu_allocation_ratio to 16.0 in the nova.CONF that your CI system is using. Best, -jay On 08/29/2018 05:26 AM, Naichuan Sun wrote: > Hi, Eric and Jay, > > The VCPU/Disk/RAM allocation ratio are set to 0.0 by default, and resource tracker would reset it to valid values in https://github.com/openstack/nova/blob/master/nova/objects/compute_node.py#L199. > But it looks the value is set back to 0.0 by some function(I'm not sure who does it...), so xenserver CI is broken. Any suggestion about that? > Looks libvirt works well, do they set allocation ratio in the configure file? > > Thank you very much. > > BR. > Naichuan Sun > > -----Original Message----- > From: Naichuan Sun > Sent: Wednesday, August 29, 2018 7:00 AM > To: OpenStack Development Mailing List (not for usage questions) > Subject: RE: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update > > Thank you very much for the help, Bob, Jay and Eric. 
> > Naichuan Sun > > -----Original Message----- > From: Bob Ball [mailto:bob.ball at citrix.com] > Sent: Wednesday, August 29, 2018 12:22 AM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update > >> Yeah, the nova.CONF cpu_allocation_ratio is being overridden to 0.0: > > The default there is 0.0[1] - and the passing tempest-full from Zuul on https://review.openstack.org/#/c/590041/ has the same line when reading the config[2]: > > We'll have a dig to see if we can figure out why it's not defaulting to 16 in the ComputeNode. > > Thanks! > > Bob > > [1] https://git.openstack.org/cgit/openstack/nova/tree/nova/conf/compute.py#n386 > [2] http://logs.openstack.org/41/590041/17/check/tempest-full/b3f9ddd/controller/logs/screen-n-cpu.txt.gz#_Aug_27_14_18_24_078058 > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Wed Aug 29 12:51:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 29 Aug 2018 08:51:40 -0400 Subject: [openstack-dev] [goal][python3] goal champion tools In-Reply-To: References: <1535493178-sup-3554@lrrr.local> Message-ID: <1535547070-sup-2915@lrrr.local> Excerpts from Nguyễn Trí Hải's message of 2018-08-29 12:02:12 +0900: > For dealing with this problem, I turn off all the Timeline events in > Preferences. Nice tip! I didn't realize that was something I could control. 
Thanks, Doug > > On Wed, Aug 29, 2018 at 6:58 AM Doug Hellmann wrote: > > > The story with all of the tasks for tracking the zuul migration has so > > many comments it will no longer display in my browser. So, I've created > > 2 tools to look at the data from the command line. > > > > https://review.openstack.org/597245 has a tool to list the task > > assignments > > > > https://review.openstack.org/597244 has a tool to change the task > > assignment for 1 task > > > > The UX for the assignment tool is terrible, so you have to provide > > your numerical user ID. The easiest way to find that may be to use > > the list tool and copy it from one of the tasks already assigned > > to you. > > > > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From dpeacock at redhat.com Wed Aug 29 12:53:26 2018 From: dpeacock at redhat.com (David Peacock) Date: Wed, 29 Aug 2018 08:53:26 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: <1535398507-sup-4428@lrrr.local> References: <1535398507-sup-4428@lrrr.local> Message-ID: On Mon, Aug 27, 2018 at 3:38 PM Doug Hellmann wrote: > If your team is ready to have your zuul settings migrated, please > let us know by following up to this email. We will start with the > volunteers, and then work our way through the other teams. > TripleO team is ready to participate. I'll coordinate on our end. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bodenvmw at gmail.com Wed Aug 29 13:03:10 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Wed, 29 Aug 2018 07:03:10 -0600 Subject: [openstack-dev] [neutron] tox-siblings alternative for local testing In-Reply-To: References: Message-ID: <8df456f3-a29d-a748-c5a9-a5b3855e3b1e@gmail.com> On 8/29/18 12:06 AM, Takashi Yamamoto wrote: > is there any preferred solution for this? > i guess the simplest solution is to make an intermediate release of neutron > and publish it on pypi. i wonder if it's acceptable or not. What we've been doing to date is adding tox target(s) to the respective repo for local testing. These local targets install the dependencies from source where necessary (in place tox siblings). For more details see the "How to setup dependencies for local tox targets" of [1]. This is also a topic I wanted to bring up at the neutron PTG [2]. There maybe other solutions, but I'm unaware of them. [1] https://etherpad.openstack.org/p/neutron-sibling-setup [2] https://etherpad.openstack.org/p/neutron-stein-ptg From ed at leafe.com Wed Aug 29 13:32:22 2018 From: ed at leafe.com (Ed Leafe) Date: Wed, 29 Aug 2018 08:32:22 -0500 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: References: Message-ID: On Aug 29, 2018, at 1:36 AM, Edison Xiang wrote: > > As we know, Open API 3.0 was released on July, 2017, it is about one year ago. > Open API 3.0 support some new features like anyof, oneof and allof than Open API 2.0(Swagger 2.0). > Now OpenStack projects do not support Open API. > Also I found some old emails in the Mail List about supporting Open API 2.0 in OpenStack. There is currently an effort by some developers to investigate the possibility of using GraphQL with OpenStack APIs. What would Open API 3.0 provide that GraphQL would not? I’m asking because I don’t know enough about Open API to compare them. 
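The local tox targets Boden describes — targets that install the unreleased dependency from source, approximating the gate's tox-siblings behaviour — could look roughly like this sketch (the target name and exact pip invocation are illustrative, not the actual neutron-fwaas tox.ini):

```ini
# Hypothetical tox.ini target for a neutron stadium project: install
# neutron from master before running tests, so local runs see the same
# unreleased code the gate's tox-siblings would provide.
[testenv:dev]
commands =
    pip install -q -e "git+https://git.openstack.org/openstack/neutron#egg=neutron"
    stestr run {posargs}
```

The trade-off versus an intermediate pypi release is that the source install happens on every local run, but nothing ever needs to be published.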
-- Ed Leafe From naichuan.sun at citrix.com Wed Aug 29 13:36:58 2018 From: naichuan.sun at citrix.com (Naichuan Sun) Date: Wed, 29 Aug 2018 13:36:58 +0000 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> <5661e4f6-c6e3-ed1a-b27d-ee1bc07b575c@gmail.com> <77a83f596cc04aa685b7b7eb2aa9d985@SINPEX02CL01.citrite.net> Message-ID: <9ed4625799034e84a02b812c3d0a0712@SINPEX02CL01.citrite.net> Hi, Jay, I have added the configuration and the CI should be OK now. Just interested in the reason :) Thanks. Naichuan Sun -----Original Message----- From: Jay Pipes [mailto:jaypipes at gmail.com] Sent: Wednesday, August 29, 2018 8:39 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update I think the immediate solution would be to just set cpu_allocation_ratio to 16.0 in the nova.CONF that your CI system is using. Best, -jay On 08/29/2018 05:26 AM, Naichuan Sun wrote: > Hi, Eric and Jay, > > The VCPU/Disk/RAM allocation ratio are set to 0.0 by default, and resource tracker would reset it to valid values in https://github.com/openstack/nova/blob/master/nova/objects/compute_node.py#L199. > But it looks the value is set back to 0.0 by some function(I'm not sure who does it...), so xenserver CI is broken. Any suggestion about that? > Looks libvirt works well, do they set allocation ratio in the configure file? > > Thank you very much. > > BR. > Naichuan Sun > > -----Original Message----- > From: Naichuan Sun > Sent: Wednesday, August 29, 2018 7:00 AM > To: OpenStack Development Mailing List (not for usage questions) > Subject: RE: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update > > Thank you very much for the help, Bob, Jay and Eric.
> > Naichuan Sun > > -----Original Message----- > From: Bob Ball [mailto:bob.ball at citrix.com] > Sent: Wednesday, August 29, 2018 12:22 AM > To: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [nova] [placement] XenServer CI failed > frequently because of placement update > >> Yeah, the nova.CONF cpu_allocation_ratio is being overridden to 0.0: > > The default there is 0.0[1] - and the passing tempest-full from Zuul on https://review.openstack.org/#/c/590041/ has the same line when reading the config[2]: > > We'll have a dig to see if we can figure out why it's not defaulting to 16 in the ComputeNode. > > Thanks! > > Bob > > [1] > https://git.openstack.org/cgit/openstack/nova/tree/nova/conf/compute.p > y#n386 [2] > http://logs.openstack.org/41/590041/17/check/tempest-full/b3f9ddd/cont > roller/logs/screen-n-cpu.txt.gz#_Aug_27_14_18_24_078058 > ______________________________________________________________________ > ____ OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > ______________________________________________________________________ > ____ OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From eng.szaher at gmail.com Wed Aug 29 13:41:15 2018 From: eng.szaher at gmail.com (Saad Zaher) Date: Wed, 29 Aug 2018 14:41:15 +0100 Subject: [openstack-dev] [Freezer] Reactivate the team In-Reply-To: References: 
<201808271025487809975@zte.com.cn> Message-ID: On Tue, Aug 28, 2018 at 2:12 PM Trinh Nguyen wrote: > Hi Saad, > > That is the time to migrate Freezer to Storyboard, not the meeting time. :) > *Sorry for my missing understanding .... * > > Bests, > > > *Trinh Nguyen *| Founder & Chief Architect > > > > *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * > > > > On Tue, Aug 28, 2018 at 8:48 PM Saad Zaher wrote: > >> Hello Kendall, >> >> Can we get the old meeting slot which is Thursday @ 14:00 UTC if this is >> Ok with everyone ? >> >> Thanks, >> Saad! >> >> On Mon, Aug 27, 2018 at 11:46 PM Kendall Nelson >> wrote: >> >>> Hello, >>> >>> Here is the change that adds Freezer to StoryBoard[1]. If we can get the >>> PTL's +1, we can move forward with the migration. Does Friday work for you >>> all? >>> >>> -Kendall (diablo_rojo) >>> >>> [1] https://review.openstack.org/#/c/596918/ >>> >>> On Sun, Aug 26, 2018 at 7:59 PM Trinh Nguyen >>> wrote: >>> >>>> @Kendall: please help the Freezer team. Thanks. >>>> >>>> @gengchc2: I think you should send an email to TC and ask for help. The >>>> Freezer core seems to inactive. >>>> >>>> >>>> *Trinh Nguyen *| Founder & Chief Architect >>>> >>>> >>>> >>>> *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz >>>> * >>>> >>>> >>>> >>>> On Mon, Aug 27, 2018 at 11:26 AM wrote: >>>> >>>>> Hi,Kendall: >>>>> >>>>> I agree to migrate freezer project from Launchpad to Storyboard, >>>>> Thanks. >>>>> >>>>> By the way, When will grant privileges for gengchc2 on Launchpad and >>>>> Project Gerrit repositories? 
>>>>> >>>>> >>>>> >>>>> Best regards, >>>>> >>>>> gengchc2 >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> -- >> -------------------------- >> Best Regards, >> Saad! >> > -- -------------------------- Best Regards, Saad! -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Aug 29 13:50:58 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 29 Aug 2018 09:50:58 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: References: <1535398507-sup-4428@lrrr.local> Message-ID: <1535550637-sup-5597@lrrr.local> Excerpts from David Peacock's message of 2018-08-29 08:53:26 -0400: > On Mon, Aug 27, 2018 at 3:38 PM Doug Hellmann wrote: > > > If your team is ready to have your zuul settings migrated, please > > let us know by following up to this email. We will start with the > > volunteers, and then work our way through the other teams. > > > > TripleO team is ready to participate. I'll coordinate on our end. I will generate the patches today and watch for a time when the CI load is low to submit them. 
Doug From mriedemos at gmail.com Wed Aug 29 14:11:42 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 29 Aug 2018 09:11:42 -0500 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: <9ed4625799034e84a02b812c3d0a0712@SINPEX02CL01.citrite.net> References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> <5661e4f6-c6e3-ed1a-b27d-ee1bc07b575c@gmail.com> <77a83f596cc04aa685b7b7eb2aa9d985@SINPEX02CL01.citrite.net> <9ed4625799034e84a02b812c3d0a0712@SINPEX02CL01.citrite.net> Message-ID: <5efa9189-e53f-1307-dba7-45a98802fe78@gmail.com> On 8/29/2018 8:36 AM, Naichuan Sun wrote: > Hi, Jay, > > I have add the configuration and the CI should be OK now. > Just interested in the reason:) > Thanks. > > Naichuan Sun zigo is reporting the same thing in the nova channel this morning, this was his inventory for the compute node provider: http://paste.openstack.org/show/729051/ Clearly we have a regression in Rocky where we're not setting the allocation_ratio properly on the compute node inventory, and is a pretty severe issue if the resource tracker doesn't update it. -- Thanks, Matt From naichuan.sun at citrix.com Wed Aug 29 14:31:59 2018 From: naichuan.sun at citrix.com (Naichuan Sun) Date: Wed, 29 Aug 2018 14:31:59 +0000 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: <5efa9189-e53f-1307-dba7-45a98802fe78@gmail.com> References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> <5661e4f6-c6e3-ed1a-b27d-ee1bc07b575c@gmail.com> <77a83f596cc04aa685b7b7eb2aa9d985@SINPEX02CL01.citrite.net> <9ed4625799034e84a02b812c3d0a0712@SINPEX02CL01.citrite.net> <5efa9189-e53f-1307-dba7-45a98802fe78@gmail.com> Message-ID: <0d4d9068590f48c9a2c6a28fdfa62849@SINPEX02CL01.citrite.net> Thanks, Matt. Should we create a ticket about it? BR. 
Naichuan Sun -----Original Message----- From: Matt Riedemann [mailto:mriedemos at gmail.com] Sent: Wednesday, August 29, 2018 10:12 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update On 8/29/2018 8:36 AM, Naichuan Sun wrote: > Hi, Jay, > > I have add the configuration and the CI should be OK now. > Just interested in the reason:) > Thanks. > > Naichuan Sun zigo is reporting the same thing in the nova channel this morning, this was his inventory for the compute node provider: http://paste.openstack.org/show/729051/ Clearly we have a regression in Rocky where we're not setting the allocation_ratio properly on the compute node inventory, and is a pretty severe issue if the resource tracker doesn't update it. -- Thanks, Matt __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From no-reply at openstack.org Wed Aug 29 14:37:29 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 29 Aug 2018 14:37:29 -0000 Subject: [openstack-dev] masakari 6.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for masakari for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/masakari/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/masakari/log/?h=stable/rocky Release notes for masakari can be found at: https://docs.openstack.org/releasenotes/masakari/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/masakari and tag it *rocky-rc-potential* to bring it to the masakari release crew's attention. From mriedemos at gmail.com Wed Aug 29 14:38:40 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 29 Aug 2018 09:38:40 -0500 Subject: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update In-Reply-To: <0d4d9068590f48c9a2c6a28fdfa62849@SINPEX02CL01.citrite.net> References: <97a6065b-02cc-5e0f-0de6-139f810b8d3c@gmail.com> <8c34e52161b844cfb32b7268685af0fc@AMSPEX02CL01.citrite.net> <5661e4f6-c6e3-ed1a-b27d-ee1bc07b575c@gmail.com> <77a83f596cc04aa685b7b7eb2aa9d985@SINPEX02CL01.citrite.net> <9ed4625799034e84a02b812c3d0a0712@SINPEX02CL01.citrite.net> <5efa9189-e53f-1307-dba7-45a98802fe78@gmail.com> <0d4d9068590f48c9a2c6a28fdfa62849@SINPEX02CL01.citrite.net> Message-ID: <0bad60ed-80c2-2335-617d-9d8941ae173d@gmail.com> On 8/29/2018 9:31 AM, Naichuan Sun wrote: > Thanks, Matt. Should we create a ticket about it? Already done: https://bugs.launchpad.net/nova/+bug/1789654 I'm working on pushing some debug log patches now. -- Thanks, Matt From anne at openstack.org Wed Aug 29 14:44:05 2018 From: anne at openstack.org (Anne Bertucio) Date: Wed, 29 Aug 2018 07:44:05 -0700 Subject: [openstack-dev] Aug 30, 1500 UTC: Community Meeting: Come learn what's new in Rocky! Message-ID: <732A908B-928A-47AA-9D50-D01E0CCFF8C8@openstack.org> A reminder that there’ll be a community meeting tomorrow August 30, at 1500UTC/8am Pacific, where you can learn about some of the new things in OpenStack Rocky, and get updates on the pilot projects Airship, Kata Containers, StarlingX, and Zuul. 
We’ll hear from PTLs (Julia Kreger, Ironic; Alex Schultz, TripleO) on what’s new in their projects, as well as pilot project technical contributors Eric Ernst (Kata), Bruce Jones (StarlingX), and OSF staff + contributors Chris Hoge (Airship) and Jeremy Stanley (Zuul). You can join using the webinar info below, but this session will be recorded if you can’t make it live! Learn what's new in the Rocky release, and get updates on Airship, Kata Containers, StarlingX, and Zuul. ————————— When: Aug 30, 2018 8:00 AM Pacific Time (US and Canada) Topic: OpenStack Community Meeting Please click the link below to join the webinar: https://zoom.us/j/551803657 Or iPhone one-tap : US: +16699006833,,551803657# or +16468769923,,551803657# Or Telephone: Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 Webinar ID: 551 803 657 International numbers available: https://zoom.us/u/bh2jVweqf Anne Bertucio OpenStack Foundation anne at openstack.org | irc: annabelleB -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Wed Aug 29 16:19:34 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 29 Aug 2018 12:19:34 -0400 Subject: [openstack-dev] [kayobe] Kayobe update In-Reply-To: References: Message-ID: Hey Mark, Here's the link to the Ops etherpad https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018 I added a listing for you, and we'll have a schedule out shortly, but I noted that it should be Tuesday. See you in Denver! Cheers, Erik On Wed, Aug 22, 2018 at 5:01 PM Mark Goddard wrote: > > > > On Wed, 22 Aug 2018, 19:08 Erik McCormick, wrote: >> >> >> >> On Wed, Aug 22, 2018, 1:52 PM Mark Goddard wrote: >>> >>> Hello Kayobians, >>> >>> I thought it is about time to do another update. >> >> >> >> >>> >>> # PTG >>> >>> There won't be an official Kayobe session at the PTG in Denver, although I and a few others from the team will be present. 
If anyone would like to meet to discuss Kayobe then don't be shy. Please get in touch either via email or IRC (mgoddard). >> >> >> Would you have any interest in doing an overview / Q&A session with Operators Monday before lunch or sometime Tuesday? It doesn't need to be anything fancy or formal as these are all fishbowl sessions. It might be a good way to get some traction and feedback. > > > Absolutely, that's a great idea. I was hoping to attend the Scientific SIG session on Monday, but any time on Tuesday would work. > >> >>> >>> >>> Cheers, >>> Mark >> >> >> -Erik >> >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lbragstad at gmail.com Wed Aug 29 16:21:05 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 29 Aug 2018 11:21:05 -0500 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: <1535398507-sup-4428@lrrr.local> References: <1535398507-sup-4428@lrrr.local> Message-ID: On Mon, Aug 27, 2018 at 2:37 PM Doug Hellmann wrote: > This is week 3 of the "Run under Python 3 by default" goal > (https://governance.openstack.org/tc/goals/stein/python3-first.html). > > == What we learned last week == > > We have a few enthusiastic folks who want to contribute to the goal > who have not been involved in the previous discussion with goal > champions. If you are one of them, please get in touch with me > BEFORE beginning any work. 
> http://lists.openstack.org/pipermail/openstack-dev/2018-August/133610.html > > In the course of adding python 3.6 unit tests to Manilla, a recursion > bug setting up the SSL context was reported. > https://bugs.launchpad.net/manila/+bug/1788253 (We could use some > help debugging it.) > > Several projects have their .gitignore files set up to ignore all > '.' files. I'm not sure why this is the case. It has caused some > issues with the migration, but I think we've worked around the > problem in the scripts now. > > We extended the scripts for generating the migration patches to > handle the neutron-specific versions of the unit test jobs for > python 3.5 and 3.6. > > The Storyboard UI has some performance issue when a single story > has several hundred comments. This is an unusual situation, which > we don't expect to come up for "normal" stories, but the SB team > discussed some ways to address it. > > Akihiro Mitoki expressed some concern about the new release notes > job being set up in horizon, and how to test it. The "new" job is > the same as the "old" job except that it sets up sphinx using > python3. The versions of sphinx and reno that we rely on for the > release notes jobs all work under python3, and projects don't have > any convenient way to install extra dependencies, so we are confident > that the new version of the job works. If you find that not to be > true for your project, we can help fix the problem. > > We have a few repos with unstable functional tests, and we seem to > have some instability in the integrated gate as well. 
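> [Editor's note] The migration patches Doug describes move each project's job configuration out of project-config and into an in-repo zuul settings file. A rough sketch of the shape such a file takes after import — the template names below are assumptions typical of that era (only publish-to-pypi-python3 is named in this thread), and the generated patch for a given repo is authoritative:

```yaml
# Hypothetical .zuul.yaml after the python3-first migration; real
# template names vary per project and come from the generated patch.
- project:
    templates:
      - openstack-python-jobs
      - openstack-python35-jobs
      - openstack-python36-jobs
      - publish-to-pypi-python3
      - release-notes-jobs-python3
```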
> > == Ongoing and Completed Work == > > These teams have started or completed their Zuul migration work: > > +---------------------+------+-------+------+ > | Team | Open | Total | Done | > +---------------------+------+-------+------+ > | Documentation | 0 | 12 | yes | > | OpenStack-Helm | 5 | 5 | | > | OpenStackAnsible | 70 | 270 | | > | OpenStackClient | 10 | 19 | | > | OpenStackSDK | 12 | 15 | | > | PowerVMStackers | 0 | 15 | yes | > | Technical Committee | 0 | 5 | yes | > | blazar | 16 | 16 | | > | congress | 1 | 16 | | > | cyborg | 2 | 9 | | > | designate | 10 | 17 | | > | ec2-api | 4 | 7 | | > | freezer | 26 | 30 | | > | glance | 16 | 16 | | > | horizon | 0 | 8 | yes | > | ironic | 22 | 60 | | > | karbor | 30 | 30 | | > | keystone | 35 | 35 | | > | kolla | 1 | 8 | | > | kuryr | 26 | 29 | | > | magnum | 24 | 29 | | > | manila | 19 | 19 | | > | masakari | 18 | 18 | | > | mistral | 0 | 25 | yes | > | monasca | 20 | 69 | | > | murano | 25 | 25 | | > | octavia | 5 | 23 | | > | oslo | 3 | 157 | | > | other | 3 | 7 | | > | qinling | 1 | 6 | | > | requirements | 0 | 5 | yes | > | sahara | 0 | 27 | yes | > | searchlight | 5 | 13 | | > | solum | 0 | 17 | yes | > | storlets | 5 | 5 | | > | swift | 9 | 11 | | > | tacker | 16 | 16 | | > | tricircle | 5 | 9 | | > | tripleo | 67 | 78 | | > | vitrage | 0 | 17 | yes | > | watcher | 12 | 17 | | > | winstackers | 6 | 11 | | > | zaqar | 12 | 17 | | > | zun | 0 | 13 | yes | > +---------------------+------+-------+------+ > > == Next Steps == > > If your team is ready to have your zuul settings migrated, please > let us know by following up to this email. We will start with the > volunteers, and then work our way through the other teams. > > The keystone team is ready. Just FYI - there are pre-existing patches proposed to our repositories, but they weren't initiated by one of the goal champions [0]. I can help work through issues on our end. 
[0] https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+topic:python3-first > After the Rocky cycle-trailing projects are released, I will propose > the change to project-config to change all of the packaging jobs > to use the new publish-to-pypi-python3 template. We should be able > to have that change in place before the first milestone for Stein > so that we have an opportunity to test it. > > == How can you help? == > > 1. Choose a patch that has failing tests and help fix it. > > https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) > 2. Review the patches for the zuul changes. Keep in mind that some of > those patches will be on the stable branches for projects. > 3. Work on adding functional test jobs that run under Python 3. > > == How can you ask for help? == > > If you have any questions, please post them here to the openstack-dev > list with the topic tag [python3] in the subject line. Posting > questions to the mailing list will give the widest audience the > chance to see the answers. > > We are using the #openstack-dev IRC channel for discussion as well, > but I'm not sure how good our timezone coverage is so it's probably > better to use the mailing list. 
> > == Reference Material == > > Goal description: > https://governance.openstack.org/tc/goals/stein/python3-first.html > Open patches needing reviews: > https://review.openstack.org/#/q/topic:python3-first+is:open > Storyboard: https://storyboard.openstack.org/#!/board/104 > Zuul migration notes: https://etherpad.openstack.org/p/python3-first > Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 > Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Wed Aug 29 16:51:58 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 29 Aug 2018 11:51:58 -0500 Subject: [openstack-dev] OpenStack Summit Forum in Berlin: Topic Selection Process Message-ID: <5B86CF2E.5010708@openstack.org> Hi all, Welcome to the topic selection process for our Forum in Berlin. This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. For OpenStack Berlin marks the beginning of Stein’s release cycle, where ideas and requirements will be gathered. We should come armed with feedback from August's Rocky release if at all possible. We aim to ensure the broadest coverage of topics that will allow for multiple parts of the community getting together to discuss key areas within our community/projects. 
For OSF Projects (StarlingX, Zuul, Airship, Kata Containers) Welcome! Berlin is your first official opportunity to participate in a Forum. The idea is to gather ideas and requirements for your project’s upcoming release. Look to https://wiki.openstack.org/wiki/Forum for an idea of how to structure fishbowls and discussions for your project. The idea is to ensure the broadest coverage of topics, while allowing for the project community to discuss critical areas of concern. To make sure we are presenting the best topics for discussion, we have asked representatives of each of your projects to help us out in the Forum selection process. There are two stages to the brainstorming: 1. Starting today, set up an etherpad with your team and start discussing ideas you'd like to talk about at the Forum and work out which ones to submit. 2. Then, in a couple of weeks, we will open up a more formal web-based tool for you to submit abstracts for the most popular sessions that came out of your brainstorming. Make an etherpad and add it to the list at: https://wiki.openstack.org/wiki/Forum/Berlin2018 This is your opportunity to think outside the box and talk with other projects, groups, and individuals that you might not see during Summit sessions. Look for interested parties to collaborate with and share your ideas. Examples of typical sessions that make for a great Forum: Strategic, whole-of-community discussions, to think about the big picture, including beyond just one release cycle and new technologies e.g. OpenStack One Platform for containers/VMs/Bare Metal (Strategic session) the entire community congregates to share opinions on how to make OpenStack achieve its integration engine goal Cross-project sessions, in a similar vein to what has happened at past forums, but with increased emphasis on issues that are of relevant to all areas of the community e.g. 
Rolling Upgrades at Scale (Cross-Project session) – the Large Deployments Team collaborates with Nova, Cinder and Keystone to tackle issues that come up with rolling upgrades when there’s a large number of machines.
 Project-specific sessions, where community members most interested in a specific project can discuss their experience with the project over the last release and provide feedback, collaborate on priorities, and present or generate 'blue sky' ideas for the next release e.g. Neutron Pain Points (Project-Specific session) – Co-organized by neutron developers and users. Neutron developers bring some specific questions about implementation and usage. Neutron users bring feedback from the latest release. All community members interested in Neutron discuss ideas about the future. Think about what kind of session ideas might end up as: Project-specific, cross-project or strategic/whole-of-community discussions. There'll be more slots for the latter two, so do try and think outside the box! This part of the process is where we gather broad community consensus - in theory the second part is just about fitting in as many of the good ideas into the schedule as we can. Further details about the forum can be found at: https://wiki.openstack.org/wiki/Forum Thanks all! Jimmy McArthur, on behalf of the OpenStack Foundation, User Committee & Technical Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Aug 29 17:02:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 29 Aug 2018 13:02:00 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: <1535550637-sup-5597@lrrr.local> References: <1535398507-sup-4428@lrrr.local> <1535550637-sup-5597@lrrr.local> Message-ID: <1535561981-sup-3301@lrrr.local> Excerpts from Doug Hellmann's message of 2018-08-29 09:50:58 -0400: > Excerpts from David Peacock's message of 2018-08-29 08:53:26 -0400: > > On Mon, Aug 27, 2018 at 3:38 PM Doug Hellmann wrote: > > > > > If your team is ready to have your zuul settings migrated, please > > > let us know by following up to this email. 
We will start with the > > > volunteers, and then work our way through the other teams. > > > > > > > TripleO team is ready to participate. I'll coordinate on our end. > > I will generate the patches today and watch for a time when the CI load > is low to submit them. > > Doug > It appears that someone who is not listed as a goal champion has already submitted a bunch of patches for importing the zuul settings into the TripleO repositories without updating our tracking story. The keystone team elected to abandon a similar set of patches because some of them were incorrect. I don't know whether that applies to these. Do you want to review the ones that are open, or would you like for me to generate a new batch? Doug From Greg.Waines at windriver.com Wed Aug 29 17:08:15 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 29 Aug 2018 17:08:15 +0000 Subject: [openstack-dev] [keystone] [barbican] Keystone's use of Barbican ? Message-ID: My understanding is that Keystone can be configured to use Barbican to securely store user passwords. Is this true ? If yes, is this the standard / recommended / upstream way to securely store Keystone user passwords ? If yes, I can’t find any descriptions of this is configured ? Can someone provide some pointers ? Greg. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaosorior at redhat.com Wed Aug 29 18:00:44 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Wed, 29 Aug 2018 21:00:44 +0300 Subject: [openstack-dev] [keystone] [barbican] Keystone's use of Barbican ? In-Reply-To: References: Message-ID: <67f51eb3-d278-0e43-0d2a-bd3d3f7639ae@redhat.com> This is not the case. Barbican requires users and systems that use it to use keystone for authentication. So keystone can't use Barbican for this. Chicken and egg problem. 
On 08/29/2018 08:08 PM, Waines, Greg wrote: > > My understanding is that Keystone can be configured to use Barbican to > securely store user passwords. > > Is this true ? > >   > > If yes, is this the standard / recommended / upstream way to securely > store Keystone user passwords ? > >   > > If yes, I can’t find any descriptions of this is configured ? > > Can someone provide some pointers ? > >   > > Greg. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Wed Aug 29 18:08:45 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 29 Aug 2018 14:08:45 -0400 Subject: [openstack-dev] OpenStack Summit Forum in Berlin: Topic Selection Process In-Reply-To: <5B86CF2E.5010708@openstack.org> References: <5B86CF2E.5010708@openstack.org> Message-ID: On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur wrote: > Examples of typical sessions that make for a great Forum: > > Strategic, whole-of-community discussions, to think about the big > picture, including beyond just one release cycle and new technologies > > e.g. OpenStack One Platform for containers/VMs/Bare Metal (Strategic > session) the entire community congregates to share opinions on how to make > OpenStack achieve its integration engine goal > Just to clarify some speculation going on in IRC: this is an example, right? Not a new thing being announced? // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Wed Aug 29 18:16:44 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 29 Aug 2018 18:16:44 +0000 Subject: [openstack-dev] [keystone] [barbican] Keystone's use of Barbican ? 
In-Reply-To: <67f51eb3-d278-0e43-0d2a-bd3d3f7639ae@redhat.com> References: <67f51eb3-d278-0e43-0d2a-bd3d3f7639ae@redhat.com> Message-ID: Makes sense. So what is the recommended upstream approach for securely storing user passwords in keystone ? Is that what is being described here ? https://docs.openstack.org/keystone/pike/admin/identity-credential-encryption.html Greg. From: Juan Antonio Osorio Robles Reply-To: "openstack-dev at lists.openstack.org" Date: Wednesday, August 29, 2018 at 2:00 PM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [keystone] [barbican] Keystone's use of Barbican ? This is not the case. Barbican requires users and systems that use it to use keystone for authentication. So keystone can't use Barbican for this. Chicken and egg problem. On 08/29/2018 08:08 PM, Waines, Greg wrote: My understanding is that Keystone can be configured to use Barbican to securely store user passwords. Is this true ? If yes, is this the standard / recommended / upstream way to securely store Keystone user passwords ? If yes, I can’t find any descriptions of this is configured ? Can someone provide some pointers ? Greg. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Aug 29 18:24:34 2018 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 29 Aug 2018 19:24:34 +0100 Subject: [openstack-dev] [kayobe] Kayobe update In-Reply-To: References: Message-ID: Thanks Erik, see you then. 
Mark On Wed, 29 Aug 2018 at 17:20, Erik McCormick wrote: > Hey Mark, > > Here's the link to the Ops etherpad > > https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018 > > I added a listing for you, and we'll have a schedule out shortly, but > I noted that it should be Tuesday. See you in Denver! > > Cheers, > Erik > > On Wed, Aug 22, 2018 at 5:01 PM Mark Goddard wrote: > > > > > > > > On Wed, 22 Aug 2018, 19:08 Erik McCormick, > wrote: > >> > >> > >> > >> On Wed, Aug 22, 2018, 1:52 PM Mark Goddard wrote: > >>> > >>> Hello Kayobians, > >>> > >>> I thought it is about time to do another update. > >> > >> > >> > >> > >>> > >>> # PTG > >>> > >>> There won't be an official Kayobe session at the PTG in Denver, > although I and a few others from the team will be present. If anyone would > like to meet to discuss Kayobe then don't be shy. Please get in touch > either via email or IRC (mgoddard). > >> > >> > >> Would you have any interest in doing an overview / Q&A session with > Operators Monday before lunch or sometime Tuesday? It doesn't need to be > anything fancy or formal as these are all fishbowl sessions. It might be a > good way to get some traction and feedback. > > > > > > Absolutely, that's a great idea. I was hoping to attend the Scientific > SIG session on Monday, but any time on Tuesday would work. 
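[Editor's note] Lance's point — that keystone persists only a salted password hash, never the password itself — can be illustrated with a small stand-alone sketch. Keystone's actual implementation uses passlib (bcrypt or scrypt, selectable via its identity password-hashing options); the stdlib `hashlib.scrypt` version below only demonstrates the hash-then-verify pattern and is not keystone's code:

```python
import hashlib
import hmac
import os


def hash_password(password, salt=None):
    """Derive a key from the password; only (salt, key) is ever stored."""
    salt = salt or os.urandom(16)
    # Standard scrypt cost parameters; tune n/r/p for real deployments.
    key = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, key


def verify_password(password, salt, stored_key):
    """Re-derive the key and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored_key)


salt, key = hash_password("correct horse")
assert verify_password("correct horse", salt, key)
assert not verify_password("wrong battery", salt, key)
```

Because a fresh random salt is drawn per user, two users with the same password still get different stored hashes, which is the property that makes the stored table useless to an attacker without an offline brute-force per entry.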
> > > >> > >>> > >>> > >>> Cheers, > >>> Mark > >> > >> > >> -Erik > >> > >>> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Aug 29 18:29:13 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 29 Aug 2018 13:29:13 -0500 Subject: [openstack-dev] [keystone] [barbican] Keystone's use of Barbican ? In-Reply-To: References: <67f51eb3-d278-0e43-0d2a-bd3d3f7639ae@redhat.com> Message-ID: On Wed, Aug 29, 2018 at 1:16 PM Waines, Greg wrote: > Makes sense. > > > > So what is the recommended upstream approach for securely storing user > passwords in keystone ? > Keystone will hash passwords before persisting them in their own table. Encrypted passwords are never stored. > > > Is that what is being described here ? > https://docs.openstack.org/keystone/pike/admin/identity-credential-encryption.html > This is a separate mechanism for storing secrets, not necessarily passwords (although I agree the term credentials automatically makes people assume passwords). This is used if consuming keystone's native MFA implementation. 
For example, storing a shared secret between the user and keystone that is provided as an additional authentication method along with a username and password combination. > > > > > Greg. > > > > > > *From: *Juan Antonio Osorio Robles > *Reply-To: *"openstack-dev at lists.openstack.org" < > openstack-dev at lists.openstack.org> > *Date: *Wednesday, August 29, 2018 at 2:00 PM > *To: *"openstack-dev at lists.openstack.org" < > openstack-dev at lists.openstack.org> > *Subject: *Re: [openstack-dev] [keystone] [barbican] Keystone's use of > Barbican ? > > > > This is not the case. Barbican requires users and systems that use it to > use keystone for authentication. So keystone can't use Barbican for this. > Chicken and egg problem. > > > > On 08/29/2018 08:08 PM, Waines, Greg wrote: > > My understanding is that Keystone can be configured to use Barbican to > securely store user passwords. > > Is this true ? > > > > If yes, is this the standard / recommended / upstream way to securely > store Keystone user passwords ? > > > > If yes, I can’t find any description of how this is configured ? > > Can someone provide some pointers ? > > > > Greg. > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed...
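[Archive note] The hash-before-persist behaviour Lance describes above can be sketched as follows. Keystone itself does this via the passlib library (bcrypt by default in this era), so the PBKDF2-based snippet below is only an illustrative sketch of the store-salt-and-digest / verify flow, not keystone's actual code; all function and parameter names here are hypothetical.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are persisted, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 100_000) -> bool:
    """Re-derive the digest from the candidate password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    # Constant-time comparison avoids leaking how many leading bytes matched.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
assert verify_password("s3cret", salt, digest)
assert not verify_password("wrong", salt, digest)
```

The point of the thread stands out in the sketch: the password itself is never stored, encrypted or otherwise, so there is nothing for Barbican to hold here; the credential-encryption doc Greg linked covers a different store (MFA shared secrets and similar), which is encrypted rather than hashed because keystone must read those values back.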
URL: From jimmy at openstack.org Wed Aug 29 18:29:52 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 29 Aug 2018 13:29:52 -0500 Subject: [openstack-dev] [Openstack-operators] OpenStack Summit Forum in Berlin: Topic Selection Process In-Reply-To: References: <5B86CF2E.5010708@openstack.org> Message-ID: <5B86E620.7070707@openstack.org> 100% correct. Just a random example text that we've been reusing since early 2017. Next time, we will consider lorem ipsum ;) Jim Rollenhagen wrote: > On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur > wrote: > > > Examples of typical sessions that make for a great Forum: > > Strategic, whole-of-community discussions, to think about the big > picture, including beyond just one release cycle and new technologies > > e.g. OpenStack One Platform for containers/VMs/Bare Metal > (Strategic session) the entire community congregates to share > opinions on how to make OpenStack achieve its integration engine goal > > > Just to clarify some speculation going on in IRC: this is an example, > right? Not a new thing being announced? > > // jim > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dpeacock at redhat.com Wed Aug 29 19:12:03 2018 From: dpeacock at redhat.com (David Peacock) Date: Wed, 29 Aug 2018 15:12:03 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: <1535561981-sup-3301@lrrr.local> References: <1535398507-sup-4428@lrrr.local> <1535550637-sup-5597@lrrr.local> <1535561981-sup-3301@lrrr.local> Message-ID: On Wed, Aug 29, 2018 at 1:02 PM Doug Hellmann wrote: > Excerpts from Doug Hellmann's message of 2018-08-29 09:50:58 -0400: > > Excerpts from David Peacock's message of 2018-08-29 08:53:26 -0400: > > > On Mon, Aug 27, 2018 at 3:38 PM Doug Hellmann > wrote: > > > > > > > If your team is ready to have your zuul settings migrated, please > > > > let us know by following up to this email. We will start with the > > > > volunteers, and then work our way through the other teams. > > > > > > > > > > TripleO team is ready to participate. I'll coordinate on our end. > > > > I will generate the patches today and watch for a time when the CI load > > is low to submit them. > > > > Doug > > > > It appears that someone who is not listed as a goal champion has > already submitted a bunch of patches for importing the zuul settings > into the TripleO repositories without updating our tracking story. > The keystone team elected to abandon a similar set of patches because > some of them were incorrect. I don't know whether that applies to > these. > > Do you want to review the ones that are open, or would you like for me > to generate a new batch? > > Doug > Please would you mind pasting me the reviews in question, then I'll take a look and let you know which direction. Thanks! 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Wed Aug 29 19:16:10 2018 From: ashlee at openstack.org (Ashlee Ferguson) Date: Wed, 29 Aug 2018 14:16:10 -0500 Subject: [openstack-dev] Travel Support Deadline Tomorrow Message-ID: <48D19CD6-6A56-46EA-A33F-E955E2471735@openstack.org> Hi everyone, Reminder that the deadline to apply for Travel Support to the Berlin Summit closes tomorrow, Thursday, August 30 at 11:59pm PT. APPLY HERE The Travel Support Program's aim is to facilitate participation of active community members to the Summit by covering the costs for their travel and accommodation. If you are a key contributor to a project managed by the OpenStack Foundation, and your company does not cover the costs of your travel and accommodation to Berlin, you can apply for the Travel Support Program. Please email summit at openstack.org with any questions. Thanks, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Wed Aug 29 19:22:56 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 29 Aug 2018 15:22:56 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: References: <1535398507-sup-4428@lrrr.local> <1535550637-sup-5597@lrrr.local> <1535561981-sup-3301@lrrr.local> Message-ID: <1535570540-sup-7062@lrrr.local> Excerpts from David Peacock's message of 2018-08-29 15:12:03 -0400: > On Wed, Aug 29, 2018 at 1:02 PM Doug Hellmann wrote: > > > Excerpts from Doug Hellmann's message of 2018-08-29 09:50:58 -0400: > > > Excerpts from David Peacock's message of 2018-08-29 08:53:26 -0400: > > > > On Mon, Aug 27, 2018 at 3:38 PM Doug Hellmann > > wrote: > > > > > > > > > If your team is ready to have your zuul settings migrated, please > > > > > let us know by following up to this email. We will start with the > > > > > volunteers, and then work our way through the other teams. > > > > > > > > > > > > > TripleO team is ready to participate. I'll coordinate on our end. > > > > > > I will generate the patches today and watch for a time when the CI load > > > is low to submit them. > > > > > > Doug > > > > > > > It appears that someone who is not listed as a goal champion has > > already submitted a bunch of patches for importing the zuul settings > > into the TripleO repositories without updating our tracking story. > > The keystone team elected to abandon a similar set of patches because > > some of them were incorrect. I don't know whether that applies to > > these. > > > > Do you want to review the ones that are open, or would you like for me > > to generate a new batch? > > > > Doug > > > > Please would you mind pasting me the reviews in question, then I'll take a > look and let you know which direction. > > Thanks! 
Here's the list of open changes I see right now:

| Subject | Repo | Tests | Workflow | URL | Branch |
|---------|------|-------|----------|-----|--------|
| fix tox python3 overrides | openstack-infra/tripleo-ci | PASS | REVIEWED | https://review.openstack.org/588587 | master |
| import zuul job settings from project-config | openstack/ansible-role-k8s-glance | FAILED | NEW | https://review.openstack.org/596021 | master |
| import zuul job settings from project-config | openstack/ansible-role-k8s-keystone | FAILED | NEW | https://review.openstack.org/596022 | master |
| import zuul job settings from project-config | openstack/ansible-role-k8s-mariadb | FAILED | NEW | https://review.openstack.org/596023 | master |
| import zuul job settings from project-config | openstack/dib-utils | PASS | NEW | https://review.openstack.org/596024 | master |
| fix tox python3 overrides | openstack/instack | PASS | REVIEWED | https://review.openstack.org/572904 | master |
| import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596025 | master |
| add python 3.6 unit test job | openstack/instack | PASS | NEW | https://review.openstack.org/596026 | master |
| add python 3.6 unit test job | openstack/instack | PASS | NEW | https://review.openstack.org/596027 | master |
| import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596085 | stable/ocata |
| import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596105 | stable/pike |
| import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596121 | stable/queens |
| import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596138 | stable/rocky |
| import zuul job settings from project-config | openstack/instack-undercloud | FAILED | NEW | https://review.openstack.org/596086 | stable/ocata |
| import zuul job settings from project-config | openstack/instack-undercloud | PASS | NEW | https://review.openstack.org/596106 | stable/pike |
| import zuul job settings from project-config | openstack/instack-undercloud | FAILED | NEW | https://review.openstack.org/596122 | stable/queens |
| import zuul job settings from project-config | openstack/os-apply-config | FAILED | NEW | https://review.openstack.org/596087 | stable/ocata |
| import zuul job settings from project-config | openstack/os-apply-config | PASS | NEW | https://review.openstack.org/596107 | stable/pike |
| import zuul job settings from project-config | openstack/os-apply-config | PASS | NEW | https://review.openstack.org/596123 | stable/queens |
| import zuul job settings from project-config | openstack/os-collect-config | FAILED | NEW | https://review.openstack.org/596094 | stable/ocata |
| import zuul job settings from project-config | openstack/os-collect-config | PASS | NEW | https://review.openstack.org/596108 | stable/pike |
| import zuul job settings from project-config | openstack/os-collect-config | PASS | NEW | https://review.openstack.org/596124 | stable/queens |
| import zuul job settings from project-config | openstack/os-net-config | FAILED | NEW | https://review.openstack.org/596095 | stable/ocata |
| import zuul job settings from project-config | openstack/os-net-config | PASS | REVIEWED | https://review.openstack.org/596109 | stable/pike |
| import zuul job settings from project-config | openstack/os-net-config | PASS | REVIEWED | https://review.openstack.org/596125 | stable/queens |
| import zuul job settings from project-config | openstack/os-refresh-config | PASS | NEW | https://review.openstack.org/596096 | stable/ocata |
| import zuul job settings from project-config | openstack/os-refresh-config | PASS | NEW | https://review.openstack.org/596110 | stable/pike |
| import zuul job settings from project-config | openstack/os-refresh-config | PASS | NEW | https://review.openstack.org/596126 | stable/queens |
| import zuul job settings from project-config | openstack/paunch | FAILED | NEW | https://review.openstack.org/596041 | master |
| switch documentation job to new PTI | openstack/paunch | FAILED | NEW | https://review.openstack.org/596042 | master |
| add python 3.6 unit test job | openstack/paunch | FAILED | NEW | https://review.openstack.org/596043 | master |
| import zuul job settings from project-config | openstack/paunch | PASS | NEW | https://review.openstack.org/596111 | stable/pike |
| import zuul job settings from project-config | openstack/paunch | FAILED | NEW | https://review.openstack.org/596127 | stable/queens |
| import zuul job settings from project-config | openstack/puppet-pacemaker | FAILED | NEW | https://review.openstack.org/596044 | master |
| switch documentation job to new PTI | openstack/puppet-pacemaker | FAILED | NEW | https://review.openstack.org/596045 | master |
| import zuul job settings from project-config | openstack/puppet-tripleo | FAILED | NEW | https://review.openstack.org/596097 | stable/ocata |
| import zuul job settings from project-config | openstack/puppet-tripleo | PASS | NEW | https://review.openstack.org/596112 | stable/pike |
| import zuul job settings from project-config | openstack/puppet-tripleo | FAILED | NEW | https://review.openstack.org/596128 | stable/queens |
| import zuul job settings from project-config | openstack/python-tripleoclient | FAILED | REVIEWED | https://review.openstack.org/596048 | master |
| switch documentation job to new PTI | openstack/python-tripleoclient | PASS | REVIEWED | https://review.openstack.org/596049 | master |
| add python 3.6 unit test job | openstack/python-tripleoclient | PASS | REVIEWED | https://review.openstack.org/596050 | master |
| import zuul job settings from project-config | openstack/python-tripleoclient | FAILED | REVIEWED | https://review.openstack.org/596098 | stable/ocata |
| import zuul job settings from project-config | openstack/python-tripleoclient | PASS | REVIEWED | https://review.openstack.org/596113 | stable/pike |
| import zuul job settings from project-config | openstack/python-tripleoclient | FAILED | NEW | https://review.openstack.org/596129 | stable/queens |
| import zuul job settings from project-config | openstack/python-tripleoclient | PASS | REVIEWED | https://review.openstack.org/596139 | stable/rocky |
| import zuul job settings from project-config | openstack/tempest-tripleo-ui | UNKNOWN | APPROVED | https://review.openstack.org/596051 | master |
| switch documentation job to new PTI | openstack/tempest-tripleo-ui | UNKNOWN | APPROVED | https://review.openstack.org/596052 | master |
| import zuul job settings from project-config | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596053 | master |
| switch documentation job to new PTI | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596054 | master |
| add python 3.6 unit test job | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596055 | master |
| import zuul job settings from project-config | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596099 | stable/ocata |
| import zuul job settings from project-config | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596114 | stable/pike |
| import zuul job settings from project-config | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596130 | stable/queens |
| import zuul job settings from project-config | openstack/tripleo-docs | PASS | REVIEWED | https://review.openstack.org/596058 | master |
| switch documentation job to new PTI | openstack/tripleo-docs | PASS | NEW | https://review.openstack.org/596059 | master |
| switch documentation job to new PTI | openstack/tripleo-heat-templates | FAILED | NEW | https://review.openstack.org/596061 | master |
| import zuul job settings from project-config | openstack/tripleo-heat-templates | FAILED | NEW | https://review.openstack.org/596100 | stable/ocata |
| import zuul job settings from project-config | openstack/tripleo-heat-templates | PASS | NEW | https://review.openstack.org/596115 | stable/pike |
| import zuul job settings from project-config | openstack/tripleo-heat-templates | FAILED | NEW | https://review.openstack.org/596131 | stable/queens |
| add python 3.6 unit test job | openstack/tripleo-image-elements | FAILED | NEW | https://review.openstack.org/596064 | master |
| import zuul job settings from project-config | openstack/tripleo-image-elements | FAILED | NEW | https://review.openstack.org/596101 | stable/ocata |
| import zuul job settings from project-config | openstack/tripleo-image-elements | PASS | NEW | https://review.openstack.org/596116 | stable/pike |
| import zuul job settings from project-config | openstack/tripleo-image-elements | PASS | NEW | https://review.openstack.org/596132 | stable/queens |
| import zuul job settings from project-config | openstack/tripleo-ipsec | PASS | NEW | https://review.openstack.org/596133 | stable/queens |
| import zuul job settings from project-config | openstack/tripleo-puppet-elements | FAILED | NEW | https://review.openstack.org/596102 | stable/ocata |
| import zuul job settings from project-config | openstack/tripleo-puppet-elements | FAILED | NEW | https://review.openstack.org/596117 | stable/pike |
| import zuul job settings from project-config | openstack/tripleo-puppet-elements | FAILED | NEW | https://review.openstack.org/596134 | stable/queens |
| import zuul job settings from project-config | openstack/tripleo-quickstart | FAILED | NEW | https://review.openstack.org/596071 | master |
| switch documentation job to new PTI | openstack/tripleo-quickstart | FAILED | NEW | https://review.openstack.org/596072 | master |
| import zuul job settings from project-config | openstack/tripleo-quickstart-extras | FAILED | NEW | https://review.openstack.org/596073 | master |
| switch documentation job to new PTI | openstack/tripleo-quickstart-extras | FAILED | NEW | https://review.openstack.org/596074 | master |
| import zuul job settings from project-config | openstack/tripleo-specs | PASS | NEW | https://review.openstack.org/596077 | master |
| import zuul job settings from project-config | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596078 | master |
| switch documentation job to new PTI | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596079 | master |
| import zuul job settings from project-config | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596103 | stable/ocata |
| import zuul job settings from project-config | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596118 | stable/pike |
| import zuul job settings from project-config | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596135 | stable/queens |
| import zuul job settings from project-config | openstack/tripleo-upgrade | PASS | NEW | https://review.openstack.org/596080 | master |
| switch documentation job to new PTI | openstack/tripleo-upgrade | PASS | NEW | https://review.openstack.org/596081 | master |
| import zuul job settings from project-config | openstack/tripleo-upgrade | PASS | NEW | https://review.openstack.org/596119 | stable/pike |
| import zuul job settings from project-config | openstack/tripleo-upgrade | PASS | NEW | https://review.openstack.org/596136 | stable/queens |
| import zuul job settings from project-config | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596082 | master |
| switch documentation job to new PTI | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596083 | master |
| add python 3.6 unit test job | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596084 | master |
| import zuul job settings from project-config | openstack/tripleo-validations | FAILED | REVIEWED | https://review.openstack.org/596104 | stable/ocata |
| import zuul job settings from project-config | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596120 | stable/pike |
| import zuul job settings from project-config | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596137 | stable/queens |

Totals -- Tests: PASS: 47, FAILED: 38, UNKNOWN: 2. Workflow: NEW: 63, REVIEWED: 22, APPROVED: 2.

From samueldmq at gmail.com Wed Aug 29 19:24:56 2018 From: samueldmq at gmail.com (Samuel de Medeiros Queiroz) Date: Wed, 29 Aug 2018 16:24:56 -0300 Subject: [openstack-dev] Stepping down as keystone core Message-ID: Hi Stackers! It has been both an honor and a privilege to serve this community as a keystone core. I am in a position that does not allow me enough time to devote to reviewing code and participating in the development process in keystone. As a consequence, I am stepping down as a core reviewer. A big thank you for your trust and for helping me to grow both as a person and as a professional during this time in service. I will stay around: I am doing research on interoperability for my master's degree, which means I am around the SDK project. In addition to that, I recently became the Outreachy coordinator for OpenStack. Let me know if you are interested in one of those things. Get in touch on #openstack-outreachy, #openstack-sdks or #openstack-keystone.
Thanks, Samuel de Medeiros Queiroz (samueldmq) -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Aug 29 19:33:00 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 29 Aug 2018 14:33:00 -0500 Subject: [openstack-dev] Stepping down as keystone core In-Reply-To: References: Message-ID: Samuel, Thanks for all the dedication and hard work upstream. I'm relieved that you won't be too far away and that you're still involved with the Outreachy programs. You played an instrumental role in getting keystone involved with that community. As always, we'd be happy to have you back in the event your work involves keystone again. Best, Lance On Wed, Aug 29, 2018 at 2:25 PM Samuel de Medeiros Queiroz < samueldmq at gmail.com> wrote: > Hi Stackers! > > It has been both an honor and privilege to serve this community as a > keystone core. > > I am in a position that does not allow me enough time to devote reviewing > code and participating of the development process in keystone. As a > consequence, I am stepping down as a core reviewer. > > A big thank you for your trust and for helping me to grow both as a person > and as professional during this time in service. > > I will stay around: I am doing research on interoperability for my masters > degree, which means I am around the SDK project. In addition to that, I > recently became the Outreachy coordinator for OpenStack. > > Let me know if you are interested on one of those things. > > Get in touch on #openstack-outreachy, #openstack-sdks or > #openstack-keystone. 
> > Thanks, > Samuel de Medeiros Queiroz (samueldmq) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Aug 29 20:15:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 29 Aug 2018 16:15:11 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: References: <1535398507-sup-4428@lrrr.local> Message-ID: <1535573656-sup-5347@lrrr.local> Excerpts from Lance Bragstad's message of 2018-08-29 11:21:05 -0500: > The keystone team is ready. Just FYI - there are pre-existing patches > proposed to our repositories, but they weren't initiated by one of the goal > champions [0]. > > I can help work through issues on our end. > > [0] > https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+topic:python3-first I've submitted new versions of all of the patches for keystone. 
- Doug

| Subject | Repo | URL | Branch |
|---------|------|-----|--------|
| import zuul job settings from project-config | openstack/keystone | https://review.openstack.org/597652 | master |
| switch documentation job to new PTI | openstack/keystone | https://review.openstack.org/597653 | master |
| add python 3.6 unit test job | openstack/keystone | https://review.openstack.org/597654 | master |
| import zuul job settings from project-config | openstack/keystone | https://review.openstack.org/597675 | stable/ocata |
| import zuul job settings from project-config | openstack/keystone | https://review.openstack.org/597680 | stable/pike |
| import zuul job settings from project-config | openstack/keystone | https://review.openstack.org/597686 | stable/queens |
| import zuul job settings from project-config | openstack/keystone | https://review.openstack.org/597692 | stable/rocky |
| import zuul job settings from project-config | openstack/keystone-specs | https://review.openstack.org/597663 | master |
| import zuul job settings from project-config | openstack/keystone-tempest-plugin | https://review.openstack.org/597664 | master |
| import zuul job settings from project-config | openstack/keystoneauth | https://review.openstack.org/597655 | master |
| switch documentation job to new PTI | openstack/keystoneauth | https://review.openstack.org/597656 | master |
| add python 3.6 unit test job | openstack/keystoneauth | https://review.openstack.org/597657 | master |
| add lib-forward-testing-python3 test job | openstack/keystoneauth | https://review.openstack.org/597658 | master |
| import zuul job settings from project-config | openstack/keystoneauth | https://review.openstack.org/597676 | stable/ocata |
| import zuul job settings from project-config | openstack/keystoneauth | https://review.openstack.org/597681 | stable/pike |
| import zuul job settings from project-config | openstack/keystoneauth | https://review.openstack.org/597687 | stable/queens |
| import zuul job settings from project-config | openstack/keystoneauth | https://review.openstack.org/597693 | stable/rocky |
| import zuul job settings from project-config | openstack/keystonemiddleware | https://review.openstack.org/597659 | master |
| switch documentation job to new PTI | openstack/keystonemiddleware | https://review.openstack.org/597660 | master |
| add python 3.6 unit test job | openstack/keystonemiddleware | https://review.openstack.org/597661 | master |
| add lib-forward-testing-python3 test job | openstack/keystonemiddleware | https://review.openstack.org/597662 | master |
| import zuul job settings from project-config | openstack/keystonemiddleware | https://review.openstack.org/597677 | stable/ocata |
| import zuul job settings from project-config | openstack/keystonemiddleware | https://review.openstack.org/597682 | stable/pike |
| import zuul job settings from project-config | openstack/keystonemiddleware | https://review.openstack.org/597688 | stable/queens |
| import zuul job settings from project-config | openstack/keystonemiddleware | https://review.openstack.org/597694 | stable/rocky |
| import zuul job settings from project-config | openstack/ldappool | https://review.openstack.org/597665 | master |
| add python 3.6 unit test job | openstack/ldappool | https://review.openstack.org/597666 | master |
| import zuul job settings from project-config | openstack/ldappool | https://review.openstack.org/597683 | stable/pike |
| import zuul job settings from project-config | openstack/ldappool | https://review.openstack.org/597689 | stable/queens |
| import zuul job settings from project-config | openstack/pycadf | https://review.openstack.org/597667 | master |
| switch documentation job to new PTI | openstack/pycadf | https://review.openstack.org/597668 | master |
| add python 3.6 unit test job | openstack/pycadf | https://review.openstack.org/597669 | master |
| add lib-forward-testing-python3 test job | openstack/pycadf | https://review.openstack.org/597670 | master |
| import zuul job settings from project-config | openstack/pycadf | https://review.openstack.org/597678 | stable/ocata |
| import zuul job settings from project-config | openstack/pycadf | https://review.openstack.org/597684 | stable/pike |
| import zuul job settings from project-config | openstack/pycadf | https://review.openstack.org/597690 | stable/queens |
| import zuul job settings from project-config | openstack/pycadf | https://review.openstack.org/597695 | stable/rocky |
| import zuul job settings from project-config | openstack/python-keystoneclient | https://review.openstack.org/597671 | master |
| switch documentation job to new PTI | openstack/python-keystoneclient | https://review.openstack.org/597672 | master |
| add python 3.6 unit test job | openstack/python-keystoneclient | https://review.openstack.org/597673 | master |
| add lib-forward-testing-python3 test job | openstack/python-keystoneclient | https://review.openstack.org/597674 | master |
| import zuul job settings from project-config | openstack/python-keystoneclient | https://review.openstack.org/597679 | stable/ocata |
| import zuul job settings from project-config | openstack/python-keystoneclient | https://review.openstack.org/597685 | stable/pike |
| import zuul job settings from project-config | openstack/python-keystoneclient | https://review.openstack.org/597691 | stable/queens |
| import zuul job settings from project-config | openstack/python-keystoneclient | https://review.openstack.org/597696 | stable/rocky |

From miguel
at mlavalle.com (Miguel Lavalle)
Date: Wed, 29 Aug 2018 15:24:16 -0500
Subject: [openstack-dev] [neutron] Rocky PTG retrospective etherpad
Message-ID:

Dear Neutrinos,

In preparation for our PTG in Denver, I have started an etherpad to gather
feedback for our retrospective session. Please add your comments there.

Best regards

Miguel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From miguel at mlavalle.com Wed Aug 29 20:24:50 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Wed, 29 Aug 2018 15:24:50 -0500
Subject: [openstack-dev] [neutron] Rocky PTG retrospective etherpad
In-Reply-To:
References:
Message-ID:

Here's the etherpad: https://etherpad.openstack.org/p/neutron-rocky-retrospective

On Wed, Aug 29, 2018 at 3:24 PM, Miguel Lavalle wrote:
> Dear Neutrinos,
>
> In preparation for our PTG in Denver, I have started an etherpad to gather
> feedback for our retrospective session. Please add your comments there.
>
> Best regards
>
> Miguel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nate.johnston at redhat.com Wed Aug 29 21:14:11 2018
From: nate.johnston at redhat.com (Nate Johnston)
Date: Wed, 29 Aug 2018 17:14:11 -0400
Subject: [openstack-dev] [goal][python3] week 3 update
In-Reply-To: <1535492703-sup-9104@lrrr.local>
References: <1535398507-sup-4428@lrrr.local> <1535492703-sup-9104@lrrr.local>
Message-ID: <20180829211411.hua6wpepu3xpnndh@bishop>

On Tue, Aug 28, 2018 at 05:46:09PM -0400, Doug Hellmann wrote:
> Excerpts from Michel Peterson's message of 2018-08-28 16:30:02 +0300:
> > On Mon, Aug 27, 2018 at 10:37 PM, Doug Hellmann wrote:
> > >
> > > If your team is ready to have your zuul settings migrated, please
> > > let us know by following up to this email. We will start with the
> > > volunteers, and then work our way through the other teams.
> > >
> > The networking-odl team is willing to volunteer for this.
>
> networking-odl is part of the neutron team, and the tools are set up to
> work based on full (not partial) teams. So, if the neutron team is ready
> we can go ahead with those.

Yes, the Neutron team is ready to proceed. I am firing off an email now
to officially alert the stadium projects that this is incoming. I know
that over time we have done a fair amount of work to get ready for the
python3 transition, so if any anomalies come up as the patches get
generated for Neutron, please feel free to reach out to me to get them
resolved.

Nate Johnston

From nate.johnston at redhat.com Wed Aug 29 21:14:24 2018
From: nate.johnston at redhat.com (Nate Johnston)
Date: Wed, 29 Aug 2018 17:14:24 -0400
Subject: [openstack-dev] [neutron][python3] Neutron and stadium - python 3 community goal changes coming soon
Message-ID: <20180829211424.a7skfdjygykehwga@bishop>

Neutrinos and contributors to stadium projects,

As part of the "Run under Python 3 by default" community goal [1] for
OpenStack in the Stein cycle, a group of goal champions has assembled and
has been programmatically generating changes to help projects migrate to a
Python 3 compatible state [2][3]. Since we are now early in the Stein
release cycle, it is a good time to land these patches and make sure we
are in a good state relative to the community goal, when we have the most
time to handle any issues that may arise. As our fearless leader mlavalle
said, "If not now, when?"

So please expect to see changes landing in your projects under the topic
"python3-first". If these cause issues for you, please feel free to reach
out to either me or the python 3 goal champions. Progress is also being
tracked in a wiki page [4].

Thanks!
Nate Johnston

[1] https://governance.openstack.org/tc/goals/stein/python3-first.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133232.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133880.html
[4] https://wiki.openstack.org/wiki/Python3

From mriedemos at gmail.com Wed Aug 29 21:39:50 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 29 Aug 2018 16:39:50 -0500
Subject: [openstack-dev] [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration
In-Reply-To: <8EF14CDA-F135-4ED9-A9B0-1654CDC08D64@cern.ch>
References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> <60e4654e-91ba-7f14-a6d9-7a588c17baee@gmail.com> <8EF14CDA-F135-4ED9-A9B0-1654CDC08D64@cern.ch>
Message-ID: <3cae56bb-cca3-d251-e46f-63c328f254d2@gmail.com>

On 8/29/2018 3:21 PM, Tim Bell wrote:
> Sounds like a good topic for PTG/Forum?

Yeah, it's already on the PTG agenda [1][2]. I started the thread because
I wanted to get the ball rolling as early as possible, and to get people
who won't attend the PTG and/or the Forum to weigh in on not only the
known issues with cross-cell migration but also the things I'm not
thinking about.
[1] https://etherpad.openstack.org/p/nova-ptg-stein
[2] https://etherpad.openstack.org/p/nova-ptg-stein-cells

--
Thanks,

Matt

From doug at doughellmann.com Wed Aug 29 21:49:28 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 29 Aug 2018 17:49:28 -0400
Subject: [openstack-dev] [goal][python3] week 3 update
In-Reply-To: <20180829211411.hua6wpepu3xpnndh@bishop>
References: <1535398507-sup-4428@lrrr.local> <1535492703-sup-9104@lrrr.local> <20180829211411.hua6wpepu3xpnndh@bishop>
Message-ID: <1535579316-sup-4280@lrrr.local>

Excerpts from Nate Johnston's message of 2018-08-29 17:14:11 -0400:
> On Tue, Aug 28, 2018 at 05:46:09PM -0400, Doug Hellmann wrote:
> > Excerpts from Michel Peterson's message of 2018-08-28 16:30:02 +0300:
> > > On Mon, Aug 27, 2018 at 10:37 PM, Doug Hellmann wrote:
> > > >
> > > > If your team is ready to have your zuul settings migrated, please
> > > > let us know by following up to this email. We will start with the
> > > > volunteers, and then work our way through the other teams.
> > > >
> > > The networking-odl team is willing to volunteer for this.
> >
> > networking-odl is part of the neutron team, and the tools are set up to
> > work based on full (not partial) teams. So, if the neutron team is ready
> > we can go ahead with those.
>
> Yes, the Neutron team is ready to proceed. I am firing off an email now
> to officially alert the stadium projects that this is incoming. I know
> that over time we have done a fair amount of work to get ready for the
> python3 transition, so if any anomalies come up as the patches get
> generated for Neutron, please feel free to reach out to me to get them
> resolved.
>
> Nate Johnston

OK, there are just over 100 patches for all of the neutron repositories,
so I'm going to wait for a quieter time of day to submit them, to avoid
blocking other, smaller bits of work.
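For anyone wondering what the generated patches contain: an "import zuul job settings from project-config" change adds a repo-local Zuul project stanza along these lines. This is only a sketch; the template names below are illustrative, and the real patches copy each project's actual settings out of openstack-infra/project-config.

```yaml
# Illustrative .zuul.yaml of the kind these generated patches add.
# The exact templates vary per repository; the companion
# "add python 3.6 unit test job" patch adds the python36 template.
- project:
    templates:
      - openstack-python-jobs
      - openstack-python36-jobs
      - publish-openstack-docs-pti
      - check-requirements
```

Once this stanza is in the repository, the equivalent entry can be removed from project-config, which is what makes the migration a two-step, team-wide operation.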
Doug

From melwittt at gmail.com Wed Aug 29 23:01:01 2018
From: melwittt at gmail.com (melanie witt)
Date: Wed, 29 Aug 2018 16:01:01 -0700
Subject: [openstack-dev] [nova][vmware] need help triaging a vmware driver bug
In-Reply-To:
References: <45e95976-1e14-c466-8b4f-45aff35df4fb@gmail.com> <07bbd498-69e8-56ff-5e01-83ef0eea4cfd@gmail.com>
Message-ID: <37672cac-48ee-016f-b5e0-5635d25be3fb@gmail.com>

On Wed, 29 Aug 2018 13:59:26 +0300, Radoslav Gerganov wrote:
> On 23.08.2018 23:27, melanie witt wrote:
>> So, I think we could add something to the launchpad bug template to link
>> to a doc that explains tips about reporting VMware-related bugs. I
>> suggest linking to a doc because the bug template is already really
>> long, and it looks like it would be best to have something short, like,
>> "For tips on reporting VMware virt driver bugs, see this doc: " and
>> provide a link to, for example, an openstack wiki about the VMware virt
>> driver (is there one?). The question is, where can we put the doc?
>> Wiki? Or maybe here at the bottom [1]? Let me know what you think.
>>
> Sorry for the late reply, I was on PTO last week. I have posted a patch
> which adds a "Troubleshooting" section to the VMware documentation in Nova:
>
> https://review.openstack.org/#/c/597446
>
> If this is OK then we can add a link to this particular paragraph in the
> bug template.

Thanks, Rado. The doc patch has merged and the change has propagated to
the published docs, so I've added a link to it in the bug template.
Cheers,
-melanie

From doug at doughellmann.com Thu Aug 30 00:04:16 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 29 Aug 2018 20:04:16 -0400
Subject: [openstack-dev] [goal][python3] week 3 update
In-Reply-To: <1535570540-sup-7062@lrrr.local>
References: <1535398507-sup-4428@lrrr.local> <1535550637-sup-5597@lrrr.local> <1535561981-sup-3301@lrrr.local> <1535570540-sup-7062@lrrr.local>
Message-ID: <1535587356-sup-2136@lrrr.local>

Excerpts from Doug Hellmann's message of 2018-08-29 15:22:56 -0400:
> Excerpts from David Peacock's message of 2018-08-29 15:12:03 -0400:
> > On Wed, Aug 29, 2018 at 1:02 PM Doug Hellmann wrote:
> >
> > > Excerpts from Doug Hellmann's message of 2018-08-29 09:50:58 -0400:
> > > > Excerpts from David Peacock's message of 2018-08-29 08:53:26 -0400:
> > > > > On Mon, Aug 27, 2018 at 3:38 PM Doug Hellmann wrote:
> > > > > >
> > > > > > If your team is ready to have your zuul settings migrated, please
> > > > > > let us know by following up to this email. We will start with the
> > > > > > volunteers, and then work our way through the other teams.
> > > > > >
> > > > > TripleO team is ready to participate. I'll coordinate on our end.
> > > >
> > > > I will generate the patches today and watch for a time when the CI load
> > > > is low to submit them.
> > > >
> > > > Doug
> > > >
> > > It appears that someone who is not listed as a goal champion has
> > > already submitted a bunch of patches for importing the zuul settings
> > > into the TripleO repositories without updating our tracking story.
> > > The keystone team elected to abandon a similar set of patches because
> > > some of them were incorrect. I don't know whether that applies to
> > > these.
> > >
> > > Do you want to review the ones that are open, or would you like for me
> > > to generate a new batch?
> > >
> > > Doug
> > >
> > Please would you mind pasting me the reviews in question, then I'll take a
> > > > Thanks! > > Here's the list of open changes I see right now: > > +----------------------------------------------+-------------------------------------+------------+--------------+-------------------------------------+---------------+ > | Subject | Repo | Tests | Workflow | URL | Branch | > +----------------------------------------------+-------------------------------------+------------+--------------+-------------------------------------+---------------+ > | fix tox python3 overrides | openstack-infra/tripleo-ci | PASS | REVIEWED | https://review.openstack.org/588587 | master | > | import zuul job settings from project-config | openstack/ansible-role-k8s-glance | FAILED | NEW | https://review.openstack.org/596021 | master | > | import zuul job settings from project-config | openstack/ansible-role-k8s-keystone | FAILED | NEW | https://review.openstack.org/596022 | master | > | import zuul job settings from project-config | openstack/ansible-role-k8s-mariadb | FAILED | NEW | https://review.openstack.org/596023 | master | > | import zuul job settings from project-config | openstack/dib-utils | PASS | NEW | https://review.openstack.org/596024 | master | > | fix tox python3 overrides | openstack/instack | PASS | REVIEWED | https://review.openstack.org/572904 | master | > | import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596025 | master | > | add python 3.6 unit test job | openstack/instack | PASS | NEW | https://review.openstack.org/596026 | master | > | add python 3.6 unit test job | openstack/instack | PASS | NEW | https://review.openstack.org/596027 | master | > | import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596085 | stable/ocata | > | import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596105 | stable/pike | > | import zuul job settings from project-config | openstack/instack | 
PASS | NEW | https://review.openstack.org/596121 | stable/queens | > | import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596138 | stable/rocky | > | import zuul job settings from project-config | openstack/instack-undercloud | FAILED | NEW | https://review.openstack.org/596086 | stable/ocata | > | import zuul job settings from project-config | openstack/instack-undercloud | PASS | NEW | https://review.openstack.org/596106 | stable/pike | > | import zuul job settings from project-config | openstack/instack-undercloud | FAILED | NEW | https://review.openstack.org/596122 | stable/queens | > | import zuul job settings from project-config | openstack/os-apply-config | FAILED | NEW | https://review.openstack.org/596087 | stable/ocata | > | import zuul job settings from project-config | openstack/os-apply-config | PASS | NEW | https://review.openstack.org/596107 | stable/pike | > | import zuul job settings from project-config | openstack/os-apply-config | PASS | NEW | https://review.openstack.org/596123 | stable/queens | > | import zuul job settings from project-config | openstack/os-collect-config | FAILED | NEW | https://review.openstack.org/596094 | stable/ocata | > | import zuul job settings from project-config | openstack/os-collect-config | PASS | NEW | https://review.openstack.org/596108 | stable/pike | > | import zuul job settings from project-config | openstack/os-collect-config | PASS | NEW | https://review.openstack.org/596124 | stable/queens | > | import zuul job settings from project-config | openstack/os-net-config | FAILED | NEW | https://review.openstack.org/596095 | stable/ocata | > | import zuul job settings from project-config | openstack/os-net-config | PASS | REVIEWED | https://review.openstack.org/596109 | stable/pike | > | import zuul job settings from project-config | openstack/os-net-config | PASS | REVIEWED | https://review.openstack.org/596125 | stable/queens | > | import zuul job 
settings from project-config | openstack/os-refresh-config | PASS | NEW | https://review.openstack.org/596096 | stable/ocata | > | import zuul job settings from project-config | openstack/os-refresh-config | PASS | NEW | https://review.openstack.org/596110 | stable/pike | > | import zuul job settings from project-config | openstack/os-refresh-config | PASS | NEW | https://review.openstack.org/596126 | stable/queens | > | import zuul job settings from project-config | openstack/paunch | FAILED | NEW | https://review.openstack.org/596041 | master | > | switch documentation job to new PTI | openstack/paunch | FAILED | NEW | https://review.openstack.org/596042 | master | > | add python 3.6 unit test job | openstack/paunch | FAILED | NEW | https://review.openstack.org/596043 | master | > | import zuul job settings from project-config | openstack/paunch | PASS | NEW | https://review.openstack.org/596111 | stable/pike | > | import zuul job settings from project-config | openstack/paunch | FAILED | NEW | https://review.openstack.org/596127 | stable/queens | > | import zuul job settings from project-config | openstack/puppet-pacemaker | FAILED | NEW | https://review.openstack.org/596044 | master | > | switch documentation job to new PTI | openstack/puppet-pacemaker | FAILED | NEW | https://review.openstack.org/596045 | master | > | import zuul job settings from project-config | openstack/puppet-tripleo | FAILED | NEW | https://review.openstack.org/596097 | stable/ocata | > | import zuul job settings from project-config | openstack/puppet-tripleo | PASS | NEW | https://review.openstack.org/596112 | stable/pike | > | import zuul job settings from project-config | openstack/puppet-tripleo | FAILED | NEW | https://review.openstack.org/596128 | stable/queens | > | import zuul job settings from project-config | openstack/python-tripleoclient | FAILED | REVIEWED | https://review.openstack.org/596048 | master | > | switch documentation job to new PTI | 
openstack/python-tripleoclient | PASS | REVIEWED | https://review.openstack.org/596049 | master | > | add python 3.6 unit test job | openstack/python-tripleoclient | PASS | REVIEWED | https://review.openstack.org/596050 | master | > | import zuul job settings from project-config | openstack/python-tripleoclient | FAILED | REVIEWED | https://review.openstack.org/596098 | stable/ocata | > | import zuul job settings from project-config | openstack/python-tripleoclient | PASS | REVIEWED | https://review.openstack.org/596113 | stable/pike | > | import zuul job settings from project-config | openstack/python-tripleoclient | FAILED | NEW | https://review.openstack.org/596129 | stable/queens | > | import zuul job settings from project-config | openstack/python-tripleoclient | PASS | REVIEWED | https://review.openstack.org/596139 | stable/rocky | > | import zuul job settings from project-config | openstack/tempest-tripleo-ui | UNKNOWN | APPROVED | https://review.openstack.org/596051 | master | > | switch documentation job to new PTI | openstack/tempest-tripleo-ui | UNKNOWN | APPROVED | https://review.openstack.org/596052 | master | > | import zuul job settings from project-config | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596053 | master | > | switch documentation job to new PTI | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596054 | master | > | add python 3.6 unit test job | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596055 | master | > | import zuul job settings from project-config | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596099 | stable/ocata | > | import zuul job settings from project-config | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596114 | stable/pike | > | import zuul job settings from project-config | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596130 | stable/queens | > | import zuul job 
settings from project-config | openstack/tripleo-docs | PASS | REVIEWED | https://review.openstack.org/596058 | master | > | switch documentation job to new PTI | openstack/tripleo-docs | PASS | NEW | https://review.openstack.org/596059 | master | > | switch documentation job to new PTI | openstack/tripleo-heat-templates | FAILED | NEW | https://review.openstack.org/596061 | master | > | import zuul job settings from project-config | openstack/tripleo-heat-templates | FAILED | NEW | https://review.openstack.org/596100 | stable/ocata | > | import zuul job settings from project-config | openstack/tripleo-heat-templates | PASS | NEW | https://review.openstack.org/596115 | stable/pike | > | import zuul job settings from project-config | openstack/tripleo-heat-templates | FAILED | NEW | https://review.openstack.org/596131 | stable/queens | > | add python 3.6 unit test job | openstack/tripleo-image-elements | FAILED | NEW | https://review.openstack.org/596064 | master | > | import zuul job settings from project-config | openstack/tripleo-image-elements | FAILED | NEW | https://review.openstack.org/596101 | stable/ocata | > | import zuul job settings from project-config | openstack/tripleo-image-elements | PASS | NEW | https://review.openstack.org/596116 | stable/pike | > | import zuul job settings from project-config | openstack/tripleo-image-elements | PASS | NEW | https://review.openstack.org/596132 | stable/queens | > | import zuul job settings from project-config | openstack/tripleo-ipsec | PASS | NEW | https://review.openstack.org/596133 | stable/queens | > | import zuul job settings from project-config | openstack/tripleo-puppet-elements | FAILED | NEW | https://review.openstack.org/596102 | stable/ocata | > | import zuul job settings from project-config | openstack/tripleo-puppet-elements | FAILED | NEW | https://review.openstack.org/596117 | stable/pike | > | import zuul job settings from project-config | openstack/tripleo-puppet-elements | FAILED | NEW | 
https://review.openstack.org/596134 | stable/queens | > | import zuul job settings from project-config | openstack/tripleo-quickstart | FAILED | NEW | https://review.openstack.org/596071 | master | > | switch documentation job to new PTI | openstack/tripleo-quickstart | FAILED | NEW | https://review.openstack.org/596072 | master | > | import zuul job settings from project-config | openstack/tripleo-quickstart-extras | FAILED | NEW | https://review.openstack.org/596073 | master | > | switch documentation job to new PTI | openstack/tripleo-quickstart-extras | FAILED | NEW | https://review.openstack.org/596074 | master | > | import zuul job settings from project-config | openstack/tripleo-specs | PASS | NEW | https://review.openstack.org/596077 | master | > | import zuul job settings from project-config | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596078 | master | > | switch documentation job to new PTI | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596079 | master | > | import zuul job settings from project-config | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596103 | stable/ocata | > | import zuul job settings from project-config | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596118 | stable/pike | > | import zuul job settings from project-config | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596135 | stable/queens | > | import zuul job settings from project-config | openstack/tripleo-upgrade | PASS | NEW | https://review.openstack.org/596080 | master | > | switch documentation job to new PTI | openstack/tripleo-upgrade | PASS | NEW | https://review.openstack.org/596081 | master | > | import zuul job settings from project-config | openstack/tripleo-upgrade | PASS | NEW | https://review.openstack.org/596119 | stable/pike | > | import zuul job settings from project-config | openstack/tripleo-upgrade | PASS | NEW | 
https://review.openstack.org/596136 | stable/queens | > | import zuul job settings from project-config | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596082 | master | > | switch documentation job to new PTI | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596083 | master | > | add python 3.6 unit test job | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596084 | master | > | import zuul job settings from project-config | openstack/tripleo-validations | FAILED | REVIEWED | https://review.openstack.org/596104 | stable/ocata | > | import zuul job settings from project-config | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596120 | stable/pike | > | import zuul job settings from project-config | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596137 | stable/queens | > | | | | | | | > | | | FAILED: 38 | APPROVED: 2 | | | > | | | PASS: 47 | NEW: 63 | | | > | | | UNKNOWN: 2 | REVIEWED: 22 | | | > +----------------------------------------------+-------------------------------------+------------+--------------+-------------------------------------+---------------+ I went ahead and regenerated those, just to be safe. The full list is below. I think it's probably better to take the new ones. 
+----------------------------------------------+-------------------------------------+-------------------------------------+---------------+ | Subject | Repo | URL | Branch | +----------------------------------------------+-------------------------------------+-------------------------------------+---------------+ | fix tox python3 overrides | openstack-infra/tripleo-ci | https://review.openstack.org/588587 | master | | import zuul job settings from project-config | openstack/ansible-role-k8s-glance | https://review.openstack.org/596021 | master | | import zuul job settings from project-config | openstack/ansible-role-k8s-glance | https://review.openstack.org/597746 | master | | import zuul job settings from project-config | openstack/ansible-role-k8s-keystone | https://review.openstack.org/596022 | master | | import zuul job settings from project-config | openstack/ansible-role-k8s-keystone | https://review.openstack.org/597747 | master | | import zuul job settings from project-config | openstack/ansible-role-k8s-mariadb | https://review.openstack.org/596023 | master | | import zuul job settings from project-config | openstack/ansible-role-k8s-mariadb | https://review.openstack.org/597748 | master | | import zuul job settings from project-config | openstack/dib-utils | https://review.openstack.org/596024 | master | | import zuul job settings from project-config | openstack/dib-utils | https://review.openstack.org/597749 | master | | fix tox python3 overrides | openstack/instack | https://review.openstack.org/572904 | master | | import zuul job settings from project-config | openstack/instack | https://review.openstack.org/597750 | master | | add python 3.5 unit test job | openstack/instack | https://review.openstack.org/597751 | master | | add python 3.6 unit test job | openstack/instack | https://review.openstack.org/597752 | master | | import zuul job settings from project-config | openstack/instack | https://review.openstack.org/597794 | stable/ocata | | import 
zuul job settings from project-config | openstack/instack | https://review.openstack.org/597808 | stable/pike | | import zuul job settings from project-config | openstack/instack | https://review.openstack.org/597825 | stable/queens | | import zuul job settings from project-config | openstack/instack | https://review.openstack.org/597842 | stable/rocky | | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/596086 | stable/ocata | | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/596106 | stable/pike | | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/596122 | stable/queens | | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/597753 | master | | switch documentation job to new PTI | openstack/instack-undercloud | https://review.openstack.org/597754 | master | | add python 3.6 unit test job | openstack/instack-undercloud | https://review.openstack.org/597755 | master | | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/597795 | stable/ocata | | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/597809 | stable/pike | | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/597826 | stable/queens | | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/597843 | stable/rocky | | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/596087 | stable/ocata | | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/596107 | stable/pike | | import zuul job settings from project-config | openstack/os-apply-config | 
https://review.openstack.org/596123 | stable/queens | | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/597796 | stable/ocata | | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/597810 | stable/pike | | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/597827 | stable/queens | | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/597844 | stable/rocky | | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/596094 | stable/ocata | | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/596108 | stable/pike | | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/596124 | stable/queens | | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/597797 | stable/ocata | | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/597811 | stable/pike | | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/597828 | stable/queens | | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/597845 | stable/rocky | | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/596095 | stable/ocata | | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/596109 | stable/pike | | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/596125 | stable/queens | | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/597756 | 
master | | switch documentation job to new PTI | openstack/os-net-config | https://review.openstack.org/597757 | master | | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/597798 | stable/ocata | | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/597812 | stable/pike | | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/597829 | stable/queens | | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/597846 | stable/rocky | | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/596096 | stable/ocata | | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/596110 | stable/pike | | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/596126 | stable/queens | | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/597799 | stable/ocata | | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/597813 | stable/pike | | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/597830 | stable/queens | | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/597847 | stable/rocky | | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/596041 | master | | switch documentation job to new PTI | openstack/paunch | https://review.openstack.org/596042 | master | | add python 3.6 unit test job | openstack/paunch | https://review.openstack.org/596043 | master | | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/596111 | stable/pike | | 
import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/596127 | stable/queens | | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/597758 | master | | switch documentation job to new PTI | openstack/paunch | https://review.openstack.org/597759 | master | | add python 3.6 unit test job | openstack/paunch | https://review.openstack.org/597760 | master | | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/597814 | stable/pike | | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/597831 | stable/queens | | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/597848 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-pacemaker | https://review.openstack.org/596044 | master | | switch documentation job to new PTI | openstack/puppet-pacemaker | https://review.openstack.org/596045 | master | | import zuul job settings from project-config | openstack/puppet-pacemaker | https://review.openstack.org/597761 | master | | switch documentation job to new PTI | openstack/puppet-pacemaker | https://review.openstack.org/597762 | master | | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/596097 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/596112 | stable/pike | | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/596128 | stable/queens | | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/597763 | master | | switch documentation job to new PTI | openstack/puppet-tripleo | https://review.openstack.org/597764 | master | | import zuul job settings from project-config | openstack/puppet-tripleo | 
https://review.openstack.org/597800 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/597815 | stable/pike | | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/597832 | stable/queens | | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/597849 | stable/rocky | | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/596048 | master | | switch documentation job to new PTI | openstack/python-tripleoclient | https://review.openstack.org/596049 | master | | add python 3.6 unit test job | openstack/python-tripleoclient | https://review.openstack.org/596050 | master | | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/596098 | stable/ocata | | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/596113 | stable/pike | | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/596129 | stable/queens | | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/596139 | stable/rocky | | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/597765 | master | | switch documentation job to new PTI | openstack/python-tripleoclient | https://review.openstack.org/597766 | master | | add python 3.6 unit test job | openstack/python-tripleoclient | https://review.openstack.org/597767 | master | | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/597801 | stable/ocata | | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/597816 | stable/pike | | import zuul job settings 
from project-config | openstack/python-tripleoclient | https://review.openstack.org/597833 | stable/queens | | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/597850 | stable/rocky | | import zuul job settings from project-config | openstack/tempest-tripleo-ui | https://review.openstack.org/596051 | master | | switch documentation job to new PTI | openstack/tempest-tripleo-ui | https://review.openstack.org/596052 | master | | import zuul job settings from project-config | openstack/tempest-tripleo-ui | https://review.openstack.org/597768 | master | | switch documentation job to new PTI | openstack/tempest-tripleo-ui | https://review.openstack.org/597769 | master | | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/596053 | master | | switch documentation job to new PTI | openstack/tripleo-common | https://review.openstack.org/596054 | master | | add python 3.6 unit test job | openstack/tripleo-common | https://review.openstack.org/596055 | master | | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/596099 | stable/ocata | | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/596114 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/596130 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/597770 | master | | switch documentation job to new PTI | openstack/tripleo-common | https://review.openstack.org/597771 | master | | add python 3.6 unit test job | openstack/tripleo-common | https://review.openstack.org/597772 | master | | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/597802 | stable/ocata | | import zuul job settings from project-config | 
openstack/tripleo-common | https://review.openstack.org/597817 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/597834 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/597851 | stable/rocky | | switch documentation job to new PTI | openstack/tripleo-docs | https://review.openstack.org/596059 | master | | import zuul job settings from project-config | openstack/tripleo-docs | https://review.openstack.org/597773 | master | | switch documentation job to new PTI | openstack/tripleo-docs | https://review.openstack.org/597774 | master | | switch documentation job to new PTI | openstack/tripleo-heat-templates | https://review.openstack.org/596061 | master | | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/596100 | stable/ocata | | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/596115 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/596131 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/597775 | master | | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/597803 | stable/ocata | | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/597818 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/597835 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/597852 | stable/rocky | | add python 3.6 unit test job | openstack/tripleo-image-elements | https://review.openstack.org/596064 
| master | | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/596101 | stable/ocata | | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/596116 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/596132 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/597776 | master | | switch documentation job to new PTI | openstack/tripleo-image-elements | https://review.openstack.org/597777 | master | | add python 3.5 unit test job | openstack/tripleo-image-elements | https://review.openstack.org/597778 | master | | add python 3.6 unit test job | openstack/tripleo-image-elements | https://review.openstack.org/597779 | master | | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/597804 | stable/ocata | | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/597819 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/597836 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/597853 | stable/rocky | | import zuul job settings from project-config | openstack/tripleo-ipsec | https://review.openstack.org/597837 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/596102 | stable/ocata | | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/596117 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/596134 | stable/queens | | 
import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/597780 | master | | switch documentation job to new PTI | openstack/tripleo-puppet-elements | https://review.openstack.org/597781 | master | | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/597805 | stable/ocata | | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/597820 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/597838 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/597854 | stable/rocky | | import zuul job settings from project-config | openstack/tripleo-quickstart | https://review.openstack.org/596071 | master | | switch documentation job to new PTI | openstack/tripleo-quickstart | https://review.openstack.org/596072 | master | | import zuul job settings from project-config | openstack/tripleo-quickstart | https://review.openstack.org/597782 | master | | switch documentation job to new PTI | openstack/tripleo-quickstart | https://review.openstack.org/597783 | master | | import zuul job settings from project-config | openstack/tripleo-quickstart-extras | https://review.openstack.org/596073 | master | | switch documentation job to new PTI | openstack/tripleo-quickstart-extras | https://review.openstack.org/596074 | master | | import zuul job settings from project-config | openstack/tripleo-quickstart-extras | https://review.openstack.org/597784 | master | | switch documentation job to new PTI | openstack/tripleo-quickstart-extras | https://review.openstack.org/597785 | master | | import zuul job settings from project-config | openstack/tripleo-specs | https://review.openstack.org/596077 | master | | import zuul job settings from project-config | 
openstack/tripleo-specs | https://review.openstack.org/597786 | master | | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/596078 | master | | switch documentation job to new PTI | openstack/tripleo-ui | https://review.openstack.org/596079 | master | | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/596103 | stable/ocata | | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/596118 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/596135 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/597787 | master | | switch documentation job to new PTI | openstack/tripleo-ui | https://review.openstack.org/597788 | master | | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/597806 | stable/ocata | | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/597821 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/597839 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/597855 | stable/rocky | | import zuul job settings from project-config | openstack/tripleo-upgrade | https://review.openstack.org/596080 | master | | switch documentation job to new PTI | openstack/tripleo-upgrade | https://review.openstack.org/596081 | master | | import zuul job settings from project-config | openstack/tripleo-upgrade | https://review.openstack.org/596119 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-upgrade | https://review.openstack.org/596136 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-upgrade | 
https://review.openstack.org/597789 | master | | switch documentation job to new PTI | openstack/tripleo-upgrade | https://review.openstack.org/597790 | master | | import zuul job settings from project-config | openstack/tripleo-upgrade | https://review.openstack.org/597823 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-upgrade | https://review.openstack.org/597840 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/596082 | master | | switch documentation job to new PTI | openstack/tripleo-validations | https://review.openstack.org/596083 | master | | add python 3.6 unit test job | openstack/tripleo-validations | https://review.openstack.org/596084 | master | | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/596104 | stable/ocata | | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/597791 | master | | switch documentation job to new PTI | openstack/tripleo-validations | https://review.openstack.org/597792 | master | | add python 3.6 unit test job | openstack/tripleo-validations | https://review.openstack.org/597793 | master | | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/597807 | stable/ocata | | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/597824 | stable/pike | | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/597841 | stable/queens | | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/597856 | stable/rocky | +----------------------------------------------+-------------------------------------+-------------------------------------+---------------+ From doug at doughellmann.com Thu Aug 30 
00:11:14 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 29 Aug 2018 20:11:14 -0400
Subject: [openstack-dev] [goal][python3] week 3 update
In-Reply-To: <1535579316-sup-4280@lrrr.local>
References: <1535398507-sup-4428@lrrr.local> <1535492703-sup-9104@lrrr.local> <20180829211411.hua6wpepu3xpnndh@bishop> <1535579316-sup-4280@lrrr.local>
Message-ID: <1535587588-sup-7741@lrrr.local>

Excerpts from Doug Hellmann's message of 2018-08-29 17:49:28 -0400:
> Excerpts from Nate Johnston's message of 2018-08-29 17:14:11 -0400:
> > On Tue, Aug 28, 2018 at 05:46:09PM -0400, Doug Hellmann wrote:
> > > Excerpts from Michel Peterson's message of 2018-08-28 16:30:02 +0300:
> > > > On Mon, Aug 27, 2018 at 10:37 PM, Doug Hellmann wrote:
> > > > >
> > > > > If your team is ready to have your zuul settings migrated, please
> > > > > let us know by following up to this email. We will start with the
> > > > > volunteers, and then work our way through the other teams.
> > > >
> > > > The networking-odl team is willing to volunteer for this.
> > >
> > > networking-odl is part of the neutron team, and the tools are set up to
> > > work based on full (not partial) teams. So, if the neutron team is ready
> > > we can go ahead with those.
> >
> > Yes, the Neutron team is ready to proceed. I am firing off an email now
> > to officially alert the stadium projects that this is incoming. I know
> > that over time we have done a fair amount of work to get ready for the
> > python3 transition, so if any anomalies come up as the patches get
> > generated for Neutron please feel free to reach out to me to get them
> > resolved.
> >
> > Nate Johnston
>
> OK, there are somewhere just over 100 patches for all of the neutron
> repositories, so I'm going to wait for a quieter time of day to submit
> them to avoid blocking other, smaller, bits of work.
>
> Doug

Those patches are up for review now.
- Doug +----------------------------------------------+------------------------------------+-------------------------------------+---------------+ | Subject | Repo | URL | Branch | +----------------------------------------------+------------------------------------+-------------------------------------+---------------+ | import zuul job settings from project-config | openstack/networking-bagpipe | https://review.openstack.org/597857 | master | | switch documentation job to new PTI | openstack/networking-bagpipe | https://review.openstack.org/597858 | master | | add python 3.5 unit test job | openstack/networking-bagpipe | https://review.openstack.org/597859 | master | | add python 3.6 unit test job | openstack/networking-bagpipe | https://review.openstack.org/597860 | master | | import zuul job settings from project-config | openstack/networking-bagpipe | https://review.openstack.org/597908 | stable/ocata | | import zuul job settings from project-config | openstack/networking-bagpipe | https://review.openstack.org/597920 | stable/pike | | import zuul job settings from project-config | openstack/networking-bagpipe | https://review.openstack.org/597935 | stable/queens | | import zuul job settings from project-config | openstack/networking-bagpipe | https://review.openstack.org/597950 | stable/rocky | | import zuul job settings from project-config | openstack/networking-bgpvpn | https://review.openstack.org/597861 | master | | switch documentation job to new PTI | openstack/networking-bgpvpn | https://review.openstack.org/597862 | master | | add python 3.5 unit test job | openstack/networking-bgpvpn | https://review.openstack.org/597863 | master | | add python 3.6 unit test job | openstack/networking-bgpvpn | https://review.openstack.org/597864 | master | | import zuul job settings from project-config | openstack/networking-bgpvpn | https://review.openstack.org/597909 | stable/ocata | | import zuul job settings from project-config | openstack/networking-bgpvpn | 
https://review.openstack.org/597921 | stable/pike | | import zuul job settings from project-config | openstack/networking-bgpvpn | https://review.openstack.org/597936 | stable/queens | | import zuul job settings from project-config | openstack/networking-bgpvpn | https://review.openstack.org/597951 | stable/rocky | | import zuul job settings from project-config | openstack/networking-midonet | https://review.openstack.org/597866 | master | | switch documentation job to new PTI | openstack/networking-midonet | https://review.openstack.org/597867 | master | | add python 3.5 unit test job | openstack/networking-midonet | https://review.openstack.org/597868 | master | | add python 3.6 unit test job | openstack/networking-midonet | https://review.openstack.org/597869 | master | | import zuul job settings from project-config | openstack/networking-midonet | https://review.openstack.org/597910 | stable/ocata | | import zuul job settings from project-config | openstack/networking-midonet | https://review.openstack.org/597922 | stable/pike | | import zuul job settings from project-config | openstack/networking-midonet | https://review.openstack.org/597937 | stable/queens | | import zuul job settings from project-config | openstack/networking-midonet | https://review.openstack.org/597952 | stable/rocky | | import zuul job settings from project-config | openstack/networking-odl | https://review.openstack.org/597870 | master | | switch documentation job to new PTI | openstack/networking-odl | https://review.openstack.org/597871 | master | | add python 3.5 unit test job | openstack/networking-odl | https://review.openstack.org/597872 | master | | add python 3.6 unit test job | openstack/networking-odl | https://review.openstack.org/597873 | master | | import zuul job settings from project-config | openstack/networking-odl | https://review.openstack.org/597911 | stable/ocata | | import zuul job settings from project-config | openstack/networking-odl | 
https://review.openstack.org/597923 | stable/pike | | import zuul job settings from project-config | openstack/networking-odl | https://review.openstack.org/597938 | stable/queens | | import zuul job settings from project-config | openstack/networking-odl | https://review.openstack.org/597953 | stable/rocky | | import zuul job settings from project-config | openstack/networking-ovn | https://review.openstack.org/597874 | master | | switch documentation job to new PTI | openstack/networking-ovn | https://review.openstack.org/597875 | master | | add python 3.6 unit test job | openstack/networking-ovn | https://review.openstack.org/597876 | master | | import zuul job settings from project-config | openstack/networking-ovn | https://review.openstack.org/597912 | stable/ocata | | import zuul job settings from project-config | openstack/networking-ovn | https://review.openstack.org/597924 | stable/pike | | import zuul job settings from project-config | openstack/networking-ovn | https://review.openstack.org/597939 | stable/queens | | import zuul job settings from project-config | openstack/networking-ovn | https://review.openstack.org/597954 | stable/rocky | | import zuul job settings from project-config | openstack/networking-sfc | https://review.openstack.org/597877 | master | | switch documentation job to new PTI | openstack/networking-sfc | https://review.openstack.org/597878 | master | | add python 3.6 unit test job | openstack/networking-sfc | https://review.openstack.org/597879 | master | | import zuul job settings from project-config | openstack/networking-sfc | https://review.openstack.org/597913 | stable/ocata | | import zuul job settings from project-config | openstack/networking-sfc | https://review.openstack.org/597925 | stable/pike | | import zuul job settings from project-config | openstack/networking-sfc | https://review.openstack.org/597940 | stable/queens | | import zuul job settings from project-config | openstack/networking-sfc | 
https://review.openstack.org/597955 | stable/rocky | | tox: Reuse envdirs | openstack/neutron | https://review.openstack.org/582376 | master | | Make neutron-fullstack-python35 job voting | openstack/neutron | https://review.openstack.org/591081 | master | | import zuul job settings from project-config | openstack/neutron | https://review.openstack.org/597880 | master | | switch documentation job to new PTI | openstack/neutron | https://review.openstack.org/597881 | master | | add python 3.6 unit test job | openstack/neutron | https://review.openstack.org/597882 | master | | import zuul job settings from project-config | openstack/neutron | https://review.openstack.org/597914 | stable/ocata | | import zuul job settings from project-config | openstack/neutron | https://review.openstack.org/597926 | stable/pike | | import zuul job settings from project-config | openstack/neutron | https://review.openstack.org/597941 | stable/queens | | import zuul job settings from project-config | openstack/neutron | https://review.openstack.org/597956 | stable/rocky | | import zuul job settings from project-config | openstack/neutron-dynamic-routing | https://review.openstack.org/597883 | master | | switch documentation job to new PTI | openstack/neutron-dynamic-routing | https://review.openstack.org/597884 | master | | add python 3.6 unit test job | openstack/neutron-dynamic-routing | https://review.openstack.org/597885 | master | | import zuul job settings from project-config | openstack/neutron-dynamic-routing | https://review.openstack.org/597915 | stable/ocata | | import zuul job settings from project-config | openstack/neutron-dynamic-routing | https://review.openstack.org/597927 | stable/pike | | import zuul job settings from project-config | openstack/neutron-dynamic-routing | https://review.openstack.org/597942 | stable/queens | | import zuul job settings from project-config | openstack/neutron-dynamic-routing | https://review.openstack.org/597957 | stable/rocky | | import 
zuul job settings from project-config | openstack/neutron-fwaas | https://review.openstack.org/597886 | master | | switch documentation job to new PTI | openstack/neutron-fwaas | https://review.openstack.org/597887 | master | | add python 3.6 unit test job | openstack/neutron-fwaas | https://review.openstack.org/597888 | master | | import zuul job settings from project-config | openstack/neutron-fwaas | https://review.openstack.org/597916 | stable/ocata | | import zuul job settings from project-config | openstack/neutron-fwaas | https://review.openstack.org/597928 | stable/pike | | import zuul job settings from project-config | openstack/neutron-fwaas | https://review.openstack.org/597943 | stable/queens | | import zuul job settings from project-config | openstack/neutron-fwaas | https://review.openstack.org/597958 | stable/rocky | | import zuul job settings from project-config | openstack/neutron-fwaas-dashboard | https://review.openstack.org/597889 | master | | switch documentation job to new PTI | openstack/neutron-fwaas-dashboard | https://review.openstack.org/597890 | master | | import zuul job settings from project-config | openstack/neutron-fwaas-dashboard | https://review.openstack.org/597929 | stable/pike | | import zuul job settings from project-config | openstack/neutron-fwaas-dashboard | https://review.openstack.org/597944 | stable/queens | | import zuul job settings from project-config | openstack/neutron-fwaas-dashboard | https://review.openstack.org/597959 | stable/rocky | | import zuul job settings from project-config | openstack/neutron-lib | https://review.openstack.org/597891 | master | | switch documentation job to new PTI | openstack/neutron-lib | https://review.openstack.org/597892 | master | | add python 3.6 unit test job | openstack/neutron-lib | https://review.openstack.org/597893 | master | | add lib-forward-testing-python3 test job | openstack/neutron-lib | https://review.openstack.org/597894 | master | | import zuul job settings from 
project-config | openstack/neutron-lib | https://review.openstack.org/597917 | stable/ocata | | import zuul job settings from project-config | openstack/neutron-lib | https://review.openstack.org/597930 | stable/pike | | import zuul job settings from project-config | openstack/neutron-lib | https://review.openstack.org/597945 | stable/queens | | import zuul job settings from project-config | openstack/neutron-lib | https://review.openstack.org/597960 | stable/rocky | | import zuul job settings from project-config | openstack/neutron-specs | https://review.openstack.org/597895 | master | | import zuul job settings from project-config | openstack/neutron-tempest-plugin | https://review.openstack.org/597896 | master | | import zuul job settings from project-config | openstack/neutron-vpnaas | https://review.openstack.org/597897 | master | | switch documentation job to new PTI | openstack/neutron-vpnaas | https://review.openstack.org/597898 | master | | add python 3.6 unit test job | openstack/neutron-vpnaas | https://review.openstack.org/597899 | master | | import zuul job settings from project-config | openstack/neutron-vpnaas | https://review.openstack.org/597918 | stable/ocata | | import zuul job settings from project-config | openstack/neutron-vpnaas | https://review.openstack.org/597931 | stable/pike | | import zuul job settings from project-config | openstack/neutron-vpnaas | https://review.openstack.org/597946 | stable/queens | | import zuul job settings from project-config | openstack/neutron-vpnaas | https://review.openstack.org/597961 | stable/rocky | | import zuul job settings from project-config | openstack/neutron-vpnaas-dashboard | https://review.openstack.org/597900 | master | | switch documentation job to new PTI | openstack/neutron-vpnaas-dashboard | https://review.openstack.org/597901 | master | | import zuul job settings from project-config | openstack/neutron-vpnaas-dashboard | https://review.openstack.org/597932 | stable/pike | | import zuul job 
settings from project-config | openstack/neutron-vpnaas-dashboard | https://review.openstack.org/597947 | stable/queens | | import zuul job settings from project-config | openstack/neutron-vpnaas-dashboard | https://review.openstack.org/597962 | stable/rocky | | import zuul job settings from project-config | openstack/ovsdbapp | https://review.openstack.org/597902 | master | | add python 3.6 unit test job | openstack/ovsdbapp | https://review.openstack.org/597903 | master | | import zuul job settings from project-config | openstack/ovsdbapp | https://review.openstack.org/597933 | stable/pike | | import zuul job settings from project-config | openstack/ovsdbapp | https://review.openstack.org/597948 | stable/queens | | import zuul job settings from project-config | openstack/ovsdbapp | https://review.openstack.org/597963 | stable/rocky | | import zuul job settings from project-config | openstack/python-neutronclient | https://review.openstack.org/597904 | master | | switch documentation job to new PTI | openstack/python-neutronclient | https://review.openstack.org/597905 | master | | add python 3.6 unit test job | openstack/python-neutronclient | https://review.openstack.org/597906 | master | | add lib-forward-testing-python3 test job | openstack/python-neutronclient | https://review.openstack.org/597907 | master | | import zuul job settings from project-config | openstack/python-neutronclient | https://review.openstack.org/597919 | stable/ocata | | import zuul job settings from project-config | openstack/python-neutronclient | https://review.openstack.org/597934 | stable/pike | | import zuul job settings from project-config | openstack/python-neutronclient | https://review.openstack.org/597949 | stable/queens | | import zuul job settings from project-config | openstack/python-neutronclient | https://review.openstack.org/597964 | stable/rocky | 
+----------------------------------------------+------------------------------------+-------------------------------------+---------------+

From ruijing.guo at intel.com Thu Aug 30 00:15:46 2018
From: ruijing.guo at intel.com (Guo, Ruijing)
Date: Thu, 30 Aug 2018 00:15:46 +0000
Subject: [openstack-dev] [nova][neutron] numa aware vswitch
In-Reply-To: <880a748657d5208aba9e24b4ed3e44c0879add61.camel@redhat.com>
References: <2EE296D083DF2940BF4EBB91D39BB89F3BBF05C0@shsmsx102.ccr.corp.intel.com> <492b65f562d3deb2f8fcb55b5c981f057b24cfa8.camel@redhat.com> <2EE296D083DF2940BF4EBB91D39BB89F3BBF0E3B@shsmsx102.ccr.corp.intel.com> <880a748657d5208aba9e24b4ed3e44c0879add61.camel@redhat.com>
Message-ID: <2EE296D083DF2940BF4EBB91D39BB89F3BBF21CB@shsmsx102.ccr.corp.intel.com>

Hi, Stephen, Sean

It worked as expected.

Thanks,
-Ruijing

-----Original Message-----
From: Stephen Finucane [mailto:sfinucan at redhat.com]
Sent: Monday, August 27, 2018 5:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] numa aware vswitch

On Mon, 2018-08-27 at 10:24 +0100, Sean Mooney wrote:
> On Mon 27 Aug 2018, 04:20 Guo, Ruijing, wrote:
> > Hi, Stephen,
> >
> > After setting the flavor, the VM was created on node 0 (node 1 was expected). How can I debug this?
> >
> > Nova.conf
> > [neutron]
> > physnets = physnet0,physnet1
> >
> > [neutron_physnet_physnet1]
> > numa_nodes = 1
>
> Have you enabled the NUMA topology filter? It's off by default, and without it the NUMA-aware vswitch code is disabled.

Yeah, make sure this is enabled. You should turn on debug-level logging as this will give you additional information about how things are being scheduled.

Also, is this a new deployment? If not, you're going to need to upgrade and restart all the nova-* services since there are object changes which will need to be propagated.
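Collecting the settings mentioned in this exchange, a minimal controller-side nova.conf sketch for NUMA-aware vswitches might look like the following. The [neutron] and [neutron_physnet_physnet1] sections are taken from the messages in this thread; the [filter_scheduler] list is illustrative only (an assumed default filter set plus NUMATopologyFilter), not a recommendation:

```ini
# nova.conf on the controller -- sketch only, under the assumptions above.
[filter_scheduler]
# NUMATopologyFilter is off by default; without it the NUMA-aware
# vswitch code is never consulted by the scheduler.
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter

[neutron]
# Every physnet known to the deployment must be listed here.
physnets = physnet0,physnet1

[neutron_physnet_physnet1]
# Instances plugged into physnet1 are affined to host NUMA node 1.
numa_nodes = 1
```

The guest still needs an explicit NUMA topology for the affinity to apply, e.g. `openstack flavor set --property hw:numa_nodes=1 1` or `--property hw:cpu_policy=dedicated`, as noted elsewhere in this thread.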
Stephen

> > openstack network create net1 --external --provider-network-type=vlan --provider-physical-network=physnet1 --provider-segment=200 ...
> > openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic net-id=net1 vm1
> >
> > 1024
> >
> > available: 2 nodes (0-1)
> > node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
> > node 0 size: 64412 MB
> > node 0 free: 47658 MB
> > node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
> > node 1 size: 64502 MB
> > node 1 free: 44945 MB
> > node distances:
> > node   0   1
> >   0:  10  21
> >   1:  21  10
> >
> > Thanks,
> > -Ruijing
> >
> > -----Original Message-----
> > From: Stephen Finucane [mailto:sfinucan at redhat.com]
> > Sent: Saturday, August 25, 2018 12:15 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [nova][neutron] numa aware vswitch
> >
> > On Fri, 2018-08-24 at 09:13 -0500, Matt Riedemann wrote:
> > > On 8/24/2018 8:58 AM, Stephen Finucane wrote:
> > > > Using this won't add a NUMA topology - it'll just control how
> > > > any topology present will be mapped to the guest. You need to
> > > > enable dedicated CPUs or explicitly request a NUMA topology
> > > > for this to work.
> > > >
> > > > openstack flavor set --property hw:numa_nodes=1 1
> > > >
> > > > openstack flavor set --property hw:cpu_policy=dedicated 1
> > > >
> > > > This is perhaps something that we could change in the future,
> > > > though I haven't given it much thought yet.
> > >
> > > Looks like the admin guide [1] should be updated to at least refer
> > > to the flavor user guide on setting up these types of flavors?
> > >
> > > [1] https://docs.openstack.org/nova/latest/admin/networking.html#numa-affinity
> >
> > Good idea.
> > > > https://review.openstack.org/596393 > > > > Stephen __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dangtrinhnt at gmail.com Thu Aug 30 02:00:57 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 30 Aug 2018 11:00:57 +0900 Subject: [openstack-dev] [Freezer] Reactivate the team In-Reply-To: <201808271025487809975@zte.com.cn> References: <201808271025487809975@zte.com.cn> Message-ID: Hi Geng, We would like to have a team meeting today at 2:00 UTC on #openstack-meeting-alt channel. The purpose is to move Freezer forward. Attendees are you, Saad Zaher (the last PTL), and me. Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Mon, Aug 27, 2018 at 11:26 AM wrote: > Hi,Kendall: > > I agree to migrate freezer project from Launchpad to Storyboard, Thanks. > > By the way, When will grant privileges for gengchc2 on Launchpad and > Project Gerrit repositories? > > > > Best regards, > > gengchc2 > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiang.edison at gmail.com Thu Aug 30 03:56:56 2018 From: xiang.edison at gmail.com (Edison Xiang) Date: Thu, 30 Aug 2018 11:56:56 +0800 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: References: Message-ID: Hi Ed Leafe, Thanks your reply. Open API defines a standard interface description for REST APIs. 
Open API 3.0 can produce a description (schema) of the current OpenStack REST APIs. It will not change the current OpenStack APIs. I am not a GraphQL expert, but I looked into GraphQL a little. In my understanding, GraphQL would sit in front of the current APIs and expose another set of APIs based on Relay; Open API is used to describe REST APIs, while GraphQL is used to describe Relay APIs. Best Regards, Edison Xiang On Wed, Aug 29, 2018 at 9:33 PM Ed Leafe wrote: > On Aug 29, 2018, at 1:36 AM, Edison Xiang wrote: > > > > As we know, Open API 3.0 was released in July 2017, about one > year ago. > > Open API 3.0 supports some new features like anyOf, oneOf and allOf beyond > Open API 2.0 (Swagger 2.0). > > Now OpenStack projects do not support Open API. > > Also I found some old emails in the Mail List about supporting Open API > 2.0 in OpenStack. > > There is currently an effort by some developers to investigate the > possibility of using GraphQL with OpenStack APIs. What would Open API 3.0 > provide that GraphQL would not? I’m asking because I don’t know enough > about Open API to compare them. > > > -- Ed Leafe > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiang.edison at gmail.com Thu Aug 30 04:45:03 2018 From: xiang.edison at gmail.com (Edison Xiang) Date: Thu, 30 Aug 2018 12:45:03 +0800 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: <6d6d19ae-0b81-ce3d-3a7f-c8e0cc0ad0b3@gmail.com> References: <6d6d19ae-0b81-ce3d-3a7f-c8e0cc0ad0b3@gmail.com> Message-ID: Hi Jay, Thanks for your reply.
As we know, based on an Open API 3.0 schema we can automatically generate API documents, clients (SDKs) in different languages, and cloud tool adapters for OpenStack. As for the other self-defined development by 3rd-party developers: based on the Open API 3.0 schema, developers can build a UI for online API search and online API calling, or call the OpenStack APIs directly without needing clients (SDKs) or self-written API documents, and so on. Since there is a good ecosystem around Open API, developers can do whatever they want based on an Open API 3.0 schema. Best Regards, Edison Xiang On Wed, Aug 29, 2018 at 8:12 PM Jay Pipes wrote: > On 08/29/2018 02:36 AM, Edison Xiang wrote: > > Based on Open API 3.0, it can bring lots of benefits for OpenStack > > Community and does not impact the current features the Community has. > > > 3rd party developers can also do some self-defined development. > > Hi Edison, > > Would you mind expanding on what you are referring to with the above > line about 3rd party developers doing self-defined development? > > Thanks! > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiang.edison at gmail.com Thu Aug 30 06:08:12 2018 From: xiang.edison at gmail.com (Edison Xiang) Date: Thu, 30 Aug 2018 14:08:12 +0800 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: References: Message-ID: Hey dims, Thanks for your reply. Your suggestion is very important. > what would be the impact to projects? > what steps they would have to take? We can launch a project to publish the API schemas of the OpenStack projects for users and developers. But right now the OpenStack projects have no API schema definitions.
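To make the earlier "limitations" discussion concrete, here is a small, hypothetical Open API 3.0 fragment showing how oneOf can describe a Nova-style POST */action URI that accepts several distinct request bodies. The endpoint, schema names and fields are invented for illustration and are not Nova's actual schema:

```yaml
openapi: "3.0.0"
info:
  title: Hypothetical server actions API
  version: "0.1"
paths:
  /servers/{server_id}/action:
    post:
      parameters:
        - name: server_id
          in: path
          required: true
          schema:
            type: string
      requestBody:
        required: true
        content:
          application/json:
            schema:
              # oneOf lets a single URI accept several distinct bodies
              oneOf:
                - $ref: '#/components/schemas/RebootAction'
                - $ref: '#/components/schemas/ResizeAction'
      responses:
        '202':
          description: Action accepted
components:
  schemas:
    RebootAction:
      type: object
      required: [reboot]
      properties:
        reboot:
          type: object
          properties:
            type:
              type: string
              enum: [SOFT, HARD]
    ResizeAction:
      type: object
      required: [resize]
      properties:
        resize:
          type: object
          properties:
            flavorRef:
              type: string
```

Pasting a fragment like this into an Open API editor shows how generated code and documentation can distinguish the alternative request bodies for the same URI.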
Open API will not impact the features OpenStack projects already have, but we need some volunteers to define each project's API schema in Open API 3.0. > Do we have a sample/mock API where we can show that the Action and Microversions can be declared to reflect reality and it can actually work with the generated code? Yeah, you can copy this yaml [1] into the editor [2] to generate server or client code, or to try it out. We can do more demos later. [1] https://github.com/edisonxiang/OpenAPI-Specification/blob/master/examples/v3.0/petstore.yaml [2] https://editor.swagger.io Best Regards, Edison Xiang On Wed, Aug 29, 2018 at 6:31 PM Davanum Srinivas wrote: > Edison, > > This is definitely a step in the right direction if we can pull it off. > > Given the previous experiences and the current situation of how and where > we store the information currently and how we generate the website for the > API(s), can you please outline > - what would be the impact to projects? > - what steps they would have to take? > > Also, the whole point of having these definitions is that the generated > code works. Do we have a sample/mock API where we can show that the Action > and Microversions can be declared to reflect reality and it can actually > work with the generated code? > > Thanks, > Dims > > On Wed, Aug 29, 2018 at 2:37 AM Edison Xiang > wrote: > >> Hi team, >> >> As we know, Open API 3.0 was released in July 2017, about one year >> ago. >> Open API 3.0 supports some new features like anyOf, oneOf and allOf beyond >> Open API 2.0 (Swagger 2.0). >> Now OpenStack projects do not support Open API. >> Also I found some old emails in the Mail List about supporting Open API >> 2.0 in OpenStack. >> >> Some limitations are mentioned in the Mail List for the OpenStack API: >> 1. The POST */action APIs. >> These APIs exist in lots of projects like nova, cinder. >> These APIs have the same URI but the responses will be different when >> the request is different. >> 2. Micro versions.
>> These are controlled via headers, which are sometimes used to >> describe behavioral changes in an API, not just request/response schema >> changes. >> >> About the first limitation, we can find a solution in Open API 3.0. >> The example [2] shows that we can define different requests/responses on >> the same URI using the anyOf feature in Open API 3.0. >> >> About the micro versions problem, I think it is not a limitation related >> to any particular API standard. >> We can list all micro version API schema files in one directory like >> nova/V2, >> or we can list the schema changes between micro versions as the tempest >> project did [3]. >> >> Based on Open API 3.0, it can bring lots of benefits for the OpenStack >> Community and does not impact the current features the Community has. >> For example, we can automatically generate API documents, clients (SDKs) in >> different languages, maybe for different micro versions, >> and generate cloud tool adapters for OpenStack, like Ansible modules, >> Terraform providers and so on. >> Also we can make an API UI to provide online, visible API search and >> API calling for every OpenStack API. >> 3rd party developers can also do some self-defined development.
>> >> [1] https://github.com/OAI/OpenAPI-Specification >> [2] >> https://github.com/edisonxiang/OpenAPI-Specification/blob/master/examples/v3.0/petstore.yaml#L94-L109 >> [3] >> https://github.com/openstack/tempest/tree/master/tempest/lib/api_schema/response/compute >> >> Best Regards, >> Edison Xiang >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Davanum Srinivas :: https://twitter.com/dims > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel at redhat.com Thu Aug 30 06:43:30 2018 From: michel at redhat.com (Michel Peterson) Date: Thu, 30 Aug 2018 09:43:30 +0300 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: <1535587588-sup-7741@lrrr.local> References: <1535398507-sup-4428@lrrr.local> <1535492703-sup-9104@lrrr.local> <20180829211411.hua6wpepu3xpnndh@bishop> <1535579316-sup-4280@lrrr.local> <1535587588-sup-7741@lrrr.local> Message-ID: On Thu, Aug 30, 2018 at 3:11 AM, Doug Hellmann wrote: > > OK, there are somewhere just over 100 patches for all of the neutron > > repositories, so I'm going to wait for a quieter time of day to submit > > them to avoid blocking other, smaller, bits of work. > > > > Doug > > Those patches are up for review now. - Doug > > Doug, just a heads up the tool for python3-first is duplicating the same block of code (e.g. 
https://review.openstack.org/# /c/597873/1/.zuul.d/project.yaml ) and in some cases duplicating code that already exists (e.g. https://review.openstack.org/# /c/597872/1/.zuul.d/project.yaml ). Perhaps it will be good to review the tool before moving forward. Best, M P.S. I've sent you the same through #openstack-infra -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfinucan at redhat.com Thu Aug 30 08:34:20 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 30 Aug 2018 09:34:20 +0100 Subject: [openstack-dev] [nova][neutron] numa aware vswitch In-Reply-To: <2EE296D083DF2940BF4EBB91D39BB89F3BBF21CB@shsmsx102.ccr.corp.intel.com> References: <2EE296D083DF2940BF4EBB91D39BB89F3BBF05C0@shsmsx102.ccr.corp.intel.com> <492b65f562d3deb2f8fcb55b5c981f057b24cfa8.camel@redhat.com> <2EE296D083DF2940BF4EBB91D39BB89F3BBF0E3B@shsmsx102.ccr.corp.intel.com> <880a748657d5208aba9e24b4ed3e44c0879add61.camel@redhat.com> <2EE296D083DF2940BF4EBB91D39BB89F3BBF21CB@shsmsx102.ccr.corp.intel.com> Message-ID: <200a822260674525b6a03255f56d36daaf43c9d1.camel@redhat.com> On Thu, 2018-08-30 at 00:15 +0000, Guo, Ruijing wrote: > Hi, Stephen, Sean > > It worked as expected. Good to hear. What were you missing? If there are other gaps in the documentation, I'd be happy to fix them. Stephen > Thanks, > -Ruijing > > -----Original Message----- > From: Stephen Finucane [mailto:sfinucan at redhat.com] > Sent: Monday, August 27, 2018 5:37 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [nova][neutron] numa aware vswitch > > On Mon, 2018-08-27 at 10:24 +0100, Sean Mooney wrote: > > > > > > On Mon 27 Aug 2018, 04:20 Guo, Ruijing, wrote: > > > Hi, Stephen, > > > > > > After setting flavor, VM was created in node 0 (expect in node1). How to debug it? 
> > > > > > Nova.conf > > > [neutron] > > > physnets = physnet0,physnet1 > > > > > > [neutron_physnet_physnet1] > > > numa_nodes = 1 > > > > Have you enabled the numa topology filter its off by default and without it the numa aware vswitch code is disabled. > > Yeah, make sure this is enabled. You should turn on debug-level logging as this will give you additional information about how things are being scheduled. Also, is this a new deployment? If not, you're going to need to upgrade and restart all the nova-* services since there are object changes which will need to be propagated. > > Stephen > > > > openstack network create net1 --external > > > --provider-network-type=vlan --provider-physical-network=physnet1 --provider-segment=200 ... > > > openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk > > > --nic net-id=net1 vm1 > > > > > > > > > 1024 > > > > > > > > > > > > > > > available: 2 nodes (0-1) > > > node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23 node 0 size: > > > 64412 MB node 0 free: 47658 MB node 1 cpus: 8 9 10 11 12 13 14 15 24 > > > 25 26 27 28 29 30 31 node 1 size: 64502 MB node 1 free: 44945 MB > > > node distances: > > > node 0 1 > > > 0: 10 21 > > > 1: 21 10 > > > > > > Thanks, > > > -Ruijing > > > > > > -----Original Message----- > > > From: Stephen Finucane [mailto:sfinucan at redhat.com] > > > Sent: Saturday, August 25, 2018 12:15 AM > > > To: OpenStack Development Mailing List (not for usage questions) > > > > > > Subject: Re: [openstack-dev] [nova][neutron] numa aware vswitch > > > > > > On Fri, 2018-08-24 at 09:13 -0500, Matt Riedemann wrote: > > > > On 8/24/2018 8:58 AM, Stephen Finucane wrote: > > > > > Using this won't add a NUMA topology - it'll just control how > > > > > any topology present will be mapped to the guest. You need to > > > > > enable dedicated CPUs or a explicitly request a NUMA topology > > > > > for this to work. 
> > > > > > > > > > openstack flavor set --property hw:numa_nodes=1 1 > > > > > > > > > > > > > > > > > > > > openstack flavor set --property hw:cpu_policy=dedicated 1 > > > > > > > > > > > > > > > This is perhaps something that we could change in the future, > > > > > though I haven't given it much thought yet. > > > > > > > > Looks like the admin guide [1] should be updated to at least refer > > > > to the flavor user guide on setting up these types of flavors? > > > > > > > > [1] > > > > https://docs.openstack.org/nova/latest/admin/networking.html#numa- > > > > affi > > > > nity > > > > > > Good idea. > > > > > > https://review.openstack.org/596393 > > > > > > Stephen > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From geguileo at redhat.com Thu Aug 30 08:46:08 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 30 Aug 2018 10:46:08 +0200 Subject: [openstack-dev] [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <3cae56bb-cca3-d251-e46f-63c328f254d2@gmail.com> References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> <60e4654e-91ba-7f14-a6d9-7a588c17baee@gmail.com> <8EF14CDA-F135-4ED9-A9B0-1654CDC08D64@cern.ch> <3cae56bb-cca3-d251-e46f-63c328f254d2@gmail.com> Message-ID: <20180830084608.g76maohgpxbmqvce@localhost> On 29/08, Matt Riedemann wrote: > On 8/29/2018 3:21 PM, Tim Bell wrote: > > Sounds like a good topic for PTG/Forum? 
> > Yeah it's already on the PTG agenda [1][2]. I started the thread because I > wanted to get the ball rolling as early as possible, and with people that > won't attend the PTG and/or the Forum, to weigh in on not only the known > issues with cross-cell migration but also the things I'm not thinking about. > > [1] https://etherpad.openstack.org/p/nova-ptg-stein > [2] https://etherpad.openstack.org/p/nova-ptg-stein-cells > > -- > > Thanks, > > Matt > Should we also add the topic to the Thursday Cinder-Nova slot in case there are some questions where the Cinder team can assist? Cheers, Gorka. From balazs.gibizer at ericsson.com Thu Aug 30 08:55:00 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Thu, 30 Aug 2018 10:55:00 +0200 Subject: [openstack-dev] [neutron][nova] Small bandwidth demo on the PTG Message-ID: <1535619300.3600.5@smtp.office365.com> Hi, Based on the Nova PTG planning etherpad [1] there is a need to talk about the current state of the bandwidth work [2][3]. Bence (rubasov) has already planned to show a small demo to Neutron folks about the current state of the implementation. So Bence and I are wondering about bringing that demo close to the nova - neutron cross project session. That session is currently planned to happen Thursday after lunch. So we are thinking about showing the demo right before that session starts. It would start 30 minutes before the nova - neutron cross project session. Are Nova folks also interested in seeing such a demo? If you are interested in seeing the demo please drop us a line or ping us in IRC so we know who to wait for.
Cheers, gibi [1] https://etherpad.openstack.org/p/nova-ptg-stein [2] https://specs.openstack.org/openstack/neutron-specs/specs/rocky/minimum-bandwidth-allocation-placement-api.html [3] https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/bandwidth-resource-provider.html From glenn.van_de_water at nuagenetworks.net Thu Aug 30 09:20:37 2018 From: glenn.van_de_water at nuagenetworks.net (Glenn VAN DE WATER) Date: Thu, 30 Aug 2018 11:20:37 +0200 Subject: [openstack-dev] [neutron] [neutron-fwaas] roadmap Message-ID: Hi, The FWaaS V2 spec (https://specs.openstack.org/ openstack/neutron-specs/specs/mitaka/fwaas-api-2.0.html) describes quite some changes. Currently only some of that functionality seems to be implemented i.e. applying firewall groups on a L2/L3 port. Does a roadmap exist, describing when/whether other features will be implemented as well? Not implemented features mentioned in the spec include: - firewall group construct as well as a way to specify allowed sources and destinations in firewall rules - indirections through address group and service group - allow multiple firewall group associations for the same Neutron port Best regards, Glenn -------------- next part -------------- An HTML attachment was scrubbed... URL: From adriant at catalyst.net.nz Thu Aug 30 09:23:11 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Thu, 30 Aug 2018 21:23:11 +1200 Subject: [openstack-dev] [keystone] [barbican] Keystone's use of Barbican ? In-Reply-To: References: <67f51eb3-d278-0e43-0d2a-bd3d3f7639ae@redhat.com> Message-ID: <173ad63d-e69c-735b-c286-c8a98a024aad@catalyst.net.nz> On 30/08/18 6:29 AM, Lance Bragstad wrote: > > Is that what is being described here ?  > https://docs.openstack.org/keystone/pike/admin/identity-credential-encryption.html > > > This is a separate mechanism for storing secrets, not necessarily > passwords (although I agree the term credentials automatically makes > people assume passwords). 
This is used if consuming keystone's native > MFA implementation. For example, storing a shared secret between the > user and keystone that is provided as an additional authentication > method along with a username and password combination. > Is there any interest or plan to potentially allow Keystone's credential store to use Barbican as a storage provider? Encryption already is better than nothing, but if you already have (or will be deploying) a proper secret store with a hardware backend (or at least hardware-stored encryption keys) then it might make sense to put that in Barbican. Or is this also too much of a chicken/egg problem? How safe is it to rely on Barbican availability for MFA secrets and auth? -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel at redhat.com Thu Aug 30 09:51:04 2018 From: michel at redhat.com (Michel Peterson) Date: Thu, 30 Aug 2018 12:51:04 +0300 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: <1535587588-sup-7741@lrrr.local> References: <1535398507-sup-4428@lrrr.local> <1535492703-sup-9104@lrrr.local> <20180829211411.hua6wpepu3xpnndh@bishop> <1535579316-sup-4280@lrrr.local> <1535587588-sup-7741@lrrr.local> Message-ID: On Thu, Aug 30, 2018 at 3:11 AM, Doug Hellmann wrote: > | import zuul job settings from project-config | openstack/networking-odl > | https://review.openstack.org/597870 | master | > | switch documentation job to new PTI | openstack/networking-odl > | https://review.openstack.org/597871 | master | > | add python 3.5 unit test job | openstack/networking-odl > | https://review.openstack.org/597872 | master | > | add python 3.6 unit test job | openstack/networking-odl > | https://review.openstack.org/597873 | master | > | import zuul job settings from project-config | openstack/networking-odl > | https://review.openstack.org/597911 | stable/ocata | > | import zuul job settings from project-config | openstack/networking-odl > |
https://review.openstack.org/597923 | stable/pike | > | import zuul job settings from project-config | openstack/networking-odl > | https://review.openstack.org/597938 | stable/queens | > | import zuul job settings from project-config | openstack/networking-odl > | https://review.openstack.org/597953 | stable/rocky | In the case of networking-odl I know we also need a 'fix tox python3 overrides' patch, but that was not generated. I'm wondering whether I should do it manually or whether it will be generated at a later stage? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Thu Aug 30 09:55:14 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 30 Aug 2018 17:55:14 +0800 Subject: [openstack-dev] [heat][glance] Heat image resource support issue Message-ID: Hi Glance team The Glance V1 image API has been deprecated for a long while, and V2 has been widely used. Heat's image resource would like to move to V2 as well, but there is an unsolved issue that blocks us from adopting V2. Right now, creating an image requires Heat to download the image to the Heat service and re-upload it to Glance. This behavior puts a great burden on the Heat service, which is why, in a Heat team discussion (years ago), we decided to deprecate the V1 image resource in Heat and to add a V2 image resource once this is resolved. So I have been wondering: is there some workaround for this issue? Could Glance accept a URL for image import (and then reuse the client library to download on the Glance side)? Or does anyone have a better solution? -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaosorior at redhat.com Thu Aug 30 10:02:41 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Thu, 30 Aug 2018 13:02:41 +0300 Subject: [openstack-dev] [keystone] [barbican] Keystone's use of Barbican ?
In-Reply-To: <173ad63d-e69c-735b-c286-c8a98a024aad@catalyst.net.nz> References: <67f51eb3-d278-0e43-0d2a-bd3d3f7639ae@redhat.com> <173ad63d-e69c-735b-c286-c8a98a024aad@catalyst.net.nz> Message-ID: FWIW, instead of barbican, castellan could be used as a key manager. On 08/30/2018 12:23 PM, Adrian Turjak wrote: > > > On 30/08/18 6:29 AM, Lance Bragstad wrote: >> >> Is that what is being described here ?  >> https://docs.openstack.org/keystone/pike/admin/identity-credential-encryption.html >> >> >> This is a separate mechanism for storing secrets, not necessarily >> passwords (although I agree the term credentials automatically makes >> people assume passwords). This is used if consuming keystone's native >> MFA implementation. For example, storing a shared secret between the >> user and keystone that is provided as a additional authentication >> method along with a username and password combination. >>   > > Is there any interest or plans to potentially allow Keystone's > credential store to use Barbican as a storage provider? Encryption > already is better than nothing, but if you already have (or will be > deploying) a proper secret store with a hardware backend (or at least > hardware stored encryption keys) then it might make sense to throw > that in Barbican. > > Or is this also too much of a chicken/egg problem? How safe is it to > rely on Barbican availability for MFA secrets and auth? > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michel at redhat.com Thu Aug 30 11:38:00 2018 From: michel at redhat.com (Michel Peterson) Date: Thu, 30 Aug 2018 14:38:00 +0300 Subject: [openstack-dev] [neutron][python3] Neutron and stadium - python 3 community goal changes coming soon In-Reply-To: <20180829211424.a7skfdjygykehwga@bishop> References: <20180829211424.a7skfdjygykehwga@bishop> Message-ID: On Thu, Aug 30, 2018 at 12:14 AM, Nate Johnston wrote: > Progress is also being tracked in a wiki page [4]. > > [4] https://wiki.openstack.org/wiki/Python3 > Should that wiki page only track teams, or should subteams also be included there? I'm asking because it seems that some subteams appear there while the majority doesn't, and perhaps we want to standardize that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel at redhat.com Thu Aug 30 11:49:57 2018 From: michel at redhat.com (Michel Peterson) Date: Thu, 30 Aug 2018 14:49:57 +0300 Subject: [openstack-dev] [neutron] tox-siblings alternative for local testing In-Reply-To: References: Message-ID: On Wed, Aug 29, 2018 at 9:06 AM, Takashi Yamamoto wrote: > is there any preferred solution for this? > i guess the simplest solution is to make an intermediate release of neutron > and publish it on pypi. i wonder if it's acceptable or not. > There are pre-releases available on PyPI [1]. You can use those from your requirements like we did in n-odl [2]. That might be an acceptable solution. [1] https://pypi.org/project/neutron/#history [2] https://review.openstack.org/#/c/584791/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From doug at doughellmann.com Thu Aug 30 12:28:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 08:28:40 -0400 Subject: [openstack-dev] [neutron][python3] Neutron and stadium - python 3 community goal changes coming soon In-Reply-To: References: <20180829211424.a7skfdjygykehwga@bishop> Message-ID: <1535631951-sup-3189@lrrr.local> Excerpts from Michel Peterson's message of 2018-08-30 14:38:00 +0300: > On Thu, Aug 30, 2018 at 12:14 AM, Nate Johnston > wrote: > > > Progress is also being tracked in a wiki page [4]. > > > > [4] https://wiki.openstack.org/wiki/Python3 > > > > That wiki page should only track teams or subteams should also be included > there? I'm asking because it seems that some subteams appear there while > the majority doesn't and perhaps we want to standarize that. It would be great to include information about sub-teams. If there are several, like in the neutron case, it probably makes sense to create a separate section of the page with a table for all of them, just to keep things organized. Doug From jaypipes at gmail.com Thu Aug 30 12:42:45 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 30 Aug 2018 08:42:45 -0400 Subject: [openstack-dev] [neutron][nova] Small bandwidth demo on the PTG In-Reply-To: <1535619300.3600.5@smtp.office365.com> References: <1535619300.3600.5@smtp.office365.com> Message-ID: On 08/30/2018 04:55 AM, Balázs Gibizer wrote: > Hi, > > Based on the Nova PTG planning etherpad [1] there is a need to talk > about the current state of the bandwidth work [2][3]. Bence (rubasov) > has already planned to show a small demo to Neutron folks about the > current state of the implementation. So Bence and I are wondering about > bringing that demo close to the nova - neutron cross project session. > That session is currently planned to happen Thursday after lunch. So we > are think about showing the demo right before that session starts. 
It > would start 30 minutes before the nova - neutron cross project session. > > Are Nova folks also interested in seeing such a demo? > > If you are interested in seeing the demo please drop us a line or ping > us in IRC so we know who should we wait for. +1 from me. I'd be very interested in seeing it. Best, -jay From tpb at dyncloud.net Thu Aug 30 13:32:47 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 30 Aug 2018 09:32:47 -0400 Subject: [openstack-dev] [Manila] no meeting today Message-ID: <20180830133247.w7mcjypoiiwitqty@barron.net> I've had to travel unexpectedly, won't be able to chair the meeting today, and no one posted any agenda topics this week. The rocky release is imminent so we'll open up stable/rocky for backports soon. Stein specs repo is open. Please put PTG planning ideas in the etherpad [1]. PTG is less than two weeks away! -- Tom Barron (tbarron) [1] https://etherpad.openstack.org/p/manila-ptg-planning-denver-2018 From lbragstad at gmail.com Thu Aug 30 13:50:48 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 30 Aug 2018 08:50:48 -0500 Subject: [openstack-dev] [keystone] [barbican] Keystone's use of Barbican ? In-Reply-To: References: <67f51eb3-d278-0e43-0d2a-bd3d3f7639ae@redhat.com> <173ad63d-e69c-735b-c286-c8a98a024aad@catalyst.net.nz> Message-ID: This topic has surfaced intermittently ever since keystone implemented fernet tokens in Kilo. An initial idea was written down shortly afterwords [0], then we targeted it to Ocata [1], and removed from the backlog around the Pike timeframe [2]. The commit message of [2] includes meeting links. The discussion usually tripped attempting to abstract enough of the details about rotation and setup of keys to work in all cases. 
[0] https://review.openstack.org/#/c/311268/ [1] https://review.openstack.org/#/c/363065/ [2] https://review.openstack.org/#/c/439194/ On Thu, Aug 30, 2018 at 5:02 AM Juan Antonio Osorio Robles < jaosorior at redhat.com> wrote: > FWIW, instead of barbican, castellan could be used as a key manager. > > On 08/30/2018 12:23 PM, Adrian Turjak wrote: > > > On 30/08/18 6:29 AM, Lance Bragstad wrote: > > Is that what is being described here ? >> https://docs.openstack.org/keystone/pike/admin/identity-credential-encryption.html >> > > This is a separate mechanism for storing secrets, not necessarily > passwords (although I agree the term credentials automatically makes people > assume passwords). This is used if consuming keystone's native MFA > implementation. For example, storing a shared secret between the user and > keystone that is provided as a additional authentication method along with > a username and password combination. > > > Is there any interest or plans to potentially allow Keystone's credential > store to use Barbican as a storage provider? Encryption already is better > than nothing, but if you already have (or will be deploying) a proper > secret store with a hardware backend (or at least hardware stored > encryption keys) then it might make sense to throw that in Barbican. > > Or is this also too much of a chicken/egg problem? How safe is it to rely > on Barbican availability for MFA secrets and auth? 
> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bodenvmw at gmail.com Thu Aug 30 13:53:30 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Thu, 30 Aug 2018 07:53:30 -0600 Subject: [openstack-dev] [neutron] tox-siblings alternative for local testing In-Reply-To: References: Message-ID: <2a8c1ad6-2b43-3806-45dc-8468758b7260@gmail.com> On 8/30/18 5:49 AM, Michel Peterson wrote: > > There are pre releases available in PyPI [1]. You can use those from > your requirements like we did in n-odl [2]. > > That might be an acceptable solution. > > [1] https://pypi.org/project/neutron/#history > [2] https://review.openstack.org/#/c/584791/ > IIUC, I don't think consuming pre-releases is really a solution; it's a work-around for this particular case. Any solution should be pulling the appropriate dependencies from source; mimicking what tox-siblings does. This ensures the local testing is done on the latest source regardless of what's on PYPI (a point-in-time snapshot of the code). I believe the same applies to [2] in your email; things aren't going to work as expected locally until you account for pulling from source. We can discuss this more at the PTG, but moving forward I think any projects wanting to get the neutron-lib consumption patches (for free) will need to make sure they have tox/zuul setup properly for both local and gate testing. 
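The "pull the appropriate dependencies from source" behavior Boden describes can be approximated in a local tox setup. The fragment below is hypothetical — the environment name and git URL are assumptions, not an existing project convention — but it shows the general shape: install neutron-lib from master on top of the released requirements before running unit tests, roughly mimicking what the tox-siblings job does in the gate:

```ini
# Hypothetical tox.ini fragment (env name and URL are assumptions):
# tox accepts pip-style VCS requirements in deps, so the env tests
# against neutron-lib master rather than the latest PyPI release.
[testenv:py35-master]
deps =
    -r{toxinidir}/test-requirements.txt
    git+https://git.openstack.org/openstack/neutron-lib#egg=neutron-lib
commands = stestr run {posargs}
```

One caveat: if the project's install_command applies upper-constraints, the pin for neutron-lib may need to be dropped from the constraints file first, since a constrained requirement can conflict with the VCS checkout.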
From dangtrinhnt at gmail.com Thu Aug 30 14:21:38 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 30 Aug 2018 23:21:38 +0900 Subject: [openstack-dev] [Freezer] Set up the Freezer team meeting Message-ID: Hi Geng, Could you please help us to set up the team meeting as you're the new PTL? It is supposed to be taking place now on #openstack-meeting-alt. I sent you messages about this but haven't got anything back from you. Thanks, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.oliveras at gmail.com Thu Aug 30 14:23:20 2018 From: mike.oliveras at gmail.com (Mike Oliveras) Date: Thu, 30 Aug 2018 10:23:20 -0400 Subject: [openstack-dev] [L2-Gateway] Message-ID: Hello, I am trying to get the l2gw feature installed on an openstack install on centos 7 using the rdo queens repo. Initially I installed the l2gw rpms from the repo: yum install openstack-neutron-l2gw-agent ====================================================================================== Package Arch Version Repository ====================================================================================== Installing: openstack-neutron-l2gw-agent noarch 1:12.0.1-1.el7 centos-openstack-queens Installing for dependencies: python2-networking-l2gw noarch 1:12.0.1-1.el7 centos-openstack-queens I enabled the service plugin in /etc/neutron/neutron.conf: [root at gullwing-controller neutron]# grep service_plugin neutron.conf service_plugins = router,networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin I pointed my l2gw plugin to my gw in l2gateway_agent.ini: [root at gullwing-controller neutron]# grep ovsdb1 l2gateway_agent.ini ovsdb_hosts = 'ovsdb1:10.0.0.31:6632' I upgraded the neutron DB systemctl stop neutron-server neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l2gw_plugin.ini upgrade head systemctl start neutron-server I then added the 
"--config-file /etc/neutron/l2gw_plugin.ini" to the ExecStart of the neutron-server systemd unit file. When I try to restart neutron-server, it fails with: 2018-08-30 09:59:32.605 21059 ERROR neutron Invalid: Driver networking_l2gw.services.l2gateway.service_drivers.rpc_l2gw.L2gwRpcDriver is not unique across providers One web page https://lonelypacket.co.uk/post/openstack-pike-l2gw-setup-on-ubuntu-16-04/ suggested adding a service_providers section to neutron.conf but it made no difference. [service_providers] service_provider=L2GW:l2gw:networking_l2gw.services.l2gateway.service_drivers.rpc_l2gw.L2gwRpcDriver:default I also tried uninstalling the l2gw rpms and used a clone of https://github.com/openstack/networking-l2gw, also tried branch stable/queens. When I tried that version, I got an error "AttributeError: 'module' object has no attribute 'L3'" which seems to be related to https://lists.launchpad.net/yahoo-eng-team/msg73944.html I would appreciate any insight that you can give me and would be happy to provide any requested info. Thanks, Mike Oliveras -------------- next part -------------- An HTML attachment was scrubbed... URL: From honza at redhat.com Thu Aug 30 14:28:21 2018 From: honza at redhat.com (Honza Pokorny) Date: Thu, 30 Aug 2018 16:28:21 +0200 Subject: [openstack-dev] [tripleo] quickstart for humans Message-ID: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> Hello! Over the last few months, it seems that tripleo-quickstart has evolved into a CI tool. It's primarily used by computers, and not humans. tripleo-quickstart is a helpful set of ansible playbooks, and a collection of feature sets. However, it's become less useful for setting up development environments by humans. For example, devmode.sh was recently deprecated without a user-friendly replacement. Moreover, during some informal irc conversations in #oooq, some developers even mentioned the plan to merge tripleo-quickstart and tripleo-ci. 
I think it would be beneficial to create a set of defaults for tripleo-quickstart that can be used to spin up new environments; a set of defaults for humans. This can either be a well-maintained script in tripleo-quickstart itself, or a brand new project, e.g. tripleo-quickstart-humans. The number of settings, knobs, and flags should be kept to a minimum. This would accomplish two goals: 1. It would bring uniformity to the team. Each environment is installed the same way. When something goes wrong, we can eliminate differences in setup when debugging. This should save a lot of time. 2. Quicker and more reliable environment setup. If the set of defaults is used by many people, it should contain fewer bugs because more people using something should translate into more bug reports, and more bug fixes. These thoughts are coming from the context of tripleo-ui development. I need an environment in order to develop, but I don't necessarily always care about how it's installed. I want something that works for most scenarios. 
Of course +1/-1 from existing > > nova-stable-maint [2] is also good feedback. > > > > [1] https://review.openstack.org/#/admin/groups/530,members > > [2] https://review.openstack.org/#/admin/groups/540,members > > +1 from me! > > +1 (just depiling emails) Yours Tony. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yossi.boaron.1234 at gmail.com Thu Aug 30 14:54:24 2018 From: yossi.boaron.1234 at gmail.com (Yossi Boaron) Date: Thu, 30 Aug 2018 17:54:24 +0300 Subject: [openstack-dev] Update global upper constraint of Kubernetes from 7.0.0 to 6.0.0 Message-ID: Hi All, Kubernetes upper constraint was changed lately from 6.0.0 to 7.0.0 [1]. Currently, the Openshift python client can't work with Kubernetes 7.0.0, this caused by a version pinning issue (pulled in Kubernetes 7.0.0). As a result of that, we are unable to run some of our tempest tests in kuryr-kubernetes project. As a temporary (till an Openshift version that supports kubernets 7.0.0 will be released) we would like to suggest to set back kubernetes upper constraint to 6.0.0 [2]. Do you see any problem with this approach? Best regards Yossi [1] - https://review.openstack.org/#/c/594495/ [2] - https://review.openstack.org/#/c/595569/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrist at redhat.com Thu Aug 30 14:59:11 2018 From: jrist at redhat.com (Jason E. 
Rist) Date: Thu, 30 Aug 2018 08:59:11 -0600 Subject: [openstack-dev] [tripleo] quickstart for humans In-Reply-To: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> Message-ID: On 08/30/2018 08:28 AM, Honza Pokorny wrote: > Hello! > > Over the last few months, it seems that tripleo-quickstart has evolved > into a CI tool.  It's primarily used by computers, and not humans. > tripleo-quickstart is a helpful set of ansible playbooks, and a > collection of feature sets.  However, it's become less useful for > setting up development environments by humans.  For example, devmode.sh > was recently deprecated without a user-friendly replacement. Moreover, > during some informal irc conversations in #oooq, some developers even > mentioned the plan to merge tripleo-quickstart and tripleo-ci. > > I think it would be beneficial to create a set of defaults for > tripleo-quickstart that can be used to spin up new environments; a set > of defaults for humans.  This can either be a well-maintained script in > tripleo-quickstart itself, or a brand new project, e.g. > tripleo-quickstart-humans.  The number of settings, knobs, and flags > should be kept to a minimum. > > This would accomplish two goals: > > 1.  It would bring uniformity to the team.  Each environment is >     installed the same way.  When something goes wrong, we can >     eliminate differences in setup when debugging.  This should save a >     lot of time. > > 2.  Quicker and more reliable environment setup.  If the set of defaults >     is used by many people, it should container fewer bugs because more >     people using something should translate into more bug reports, and >     more bug fixes. > > These thoughts are coming from the context of tripleo-ui development.  I > need an environment in order to develop, but I don't necessarily always > care about how it's installed.  I want something that works for most > scenarios. 
> > What do you think?  Does this make sense?  Does something like this > already exist? > > Thanks for listening! > > Honza > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Thanks for bringing this up, Honza.  If something like this does exist, please share.  Otherwise, we're moving further and further away from Quickstart being useful for Devs which is a problem for the entire community, in my opinion. -J -- Jason E. Rist Senior Software Engineer OpenStack User Interfaces Red Hat, Inc. Freenode: jrist github/twitter: knowncitizen From nate.johnston at redhat.com Thu Aug 30 15:16:29 2018 From: nate.johnston at redhat.com (Nate Johnston) Date: Thu, 30 Aug 2018 11:16:29 -0400 Subject: [openstack-dev] [neutron][python3] Neutron and stadium - python 3 community goal changes coming soon In-Reply-To: <1535631951-sup-3189@lrrr.local> References: <20180829211424.a7skfdjygykehwga@bishop> <1535631951-sup-3189@lrrr.local> Message-ID: <20180830151629.2ygn3i2eshbmn33a@bishop> On Thu, Aug 30, 2018 at 08:28:40AM -0400, Doug Hellmann wrote: > Excerpts from Michel Peterson's message of 2018-08-30 14:38:00 +0300: > > On Thu, Aug 30, 2018 at 12:14 AM, Nate Johnston > > wrote: > > > > > Progress is also being tracked in a wiki page [4]. > > > > > > [4] https://wiki.openstack.org/wiki/Python3 > > > > > > > That wiki page should only track teams or subteams should also be included > > there? I'm asking because it seems that some subteams appear there while > > the majority doesn't and perhaps we want to standarize that. > > It would be great to include information about sub-teams. 
If there > are several, like in the neutron case, it probably makes sense to > create a separate section of the page with a table for all of them, > just to keep things organized. Great! I'll do that today. Nate P.S. By the way, at the top there is a note encouraging people to join #openstack-python3, but when I try to do so I get rejected: 11:13 Error(473): #openstack-python3 Cannot join channel (+i) - you must be invited I figure either the wiki page or the channel is out of sync, but I am not sure which one. From aj at suse.com Thu Aug 30 15:32:46 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 30 Aug 2018 17:32:46 +0200 Subject: [openstack-dev] [neutron][python3] Neutron and stadium - python 3 community goal changes coming soon In-Reply-To: <20180830151629.2ygn3i2eshbmn33a@bishop> References: <20180829211424.a7skfdjygykehwga@bishop> <1535631951-sup-3189@lrrr.local> <20180830151629.2ygn3i2eshbmn33a@bishop> Message-ID: On 2018-08-30 17:16, Nate Johnston wrote: > On Thu, Aug 30, 2018 at 08:28:40AM -0400, Doug Hellmann wrote: >> Excerpts from Michel Peterson's message of 2018-08-30 14:38:00 +0300: >>> On Thu, Aug 30, 2018 at 12:14 AM, Nate Johnston >>> wrote: >>> >>>> Progress is also being tracked in a wiki page [4]. >>>> >>>> [4] https://wiki.openstack.org/wiki/Python3 >>>> >>> >>> That wiki page should only track teams or subteams should also be included >>> there? I'm asking because it seems that some subteams appear there while >>> the majority doesn't and perhaps we want to standarize that. >> >> It would be great to include information about sub-teams. If there >> are several, like in the neutron case, it probably makes sense to >> create a separate section of the page with a table for all of them, >> just to keep things organized. > > Great! I'll do that today. > > Nate > > P.S. 
By the way, at the top there is a note encouraging people to join > #openstack-python3, but when I try to do so I get rejected: > > 11:13 Error(473): #openstack-python3 Cannot join channel (+i) - you must be invited > > I figure either the wiki page or the channel is out of sync, but I am > not sure which one. wiki page is wrong - the channel is dead. I'll update the wiki page, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From gfidente at redhat.com Thu Aug 30 16:10:51 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Thu, 30 Aug 2018 18:10:51 +0200 Subject: [openstack-dev] [tripleo] PTG topics and agenda In-Reply-To: <0c407d93-8809-8c1c-4d1b-11a9e797cb90@redhat.com> References: <0c407d93-8809-8c1c-4d1b-11a9e797cb90@redhat.com> Message-ID: On 8/28/18 2:50 PM, Juan Antonio Osorio Robles wrote: > Hello folks! > > > With the PTG being quite soon, I just wanted to remind folks to add your > topics on the etherpad: https://etherpad.openstack.org/p/tripleo-ptg-stein thanks Juan, I think the Edge (line 53) and Split Control Plane (line 74) sessions can probably be merged into a single one. I'd be fine with James driving it, I think it'd be fine to discuss the "control plane updates" issue [1] in that same session. 1. 
http://lists.openstack.org/pipermail/openstack-dev/2018-August/133247.html -- Giulio Fidente GPG KEY: 08D733BA From prometheanfire at gentoo.org Thu Aug 30 16:12:38 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 30 Aug 2018 11:12:38 -0500 Subject: [openstack-dev] Update global upper constraint of Kubernetes from 7.0.0 to 6.0.0 In-Reply-To: References: Message-ID: <20180830161238.n2upurrjgfxwv3r5@gentoo.org> On 18-08-30 17:54:24, Yossi Boaron wrote: > Hi All, > > Kubernetes upper constraint was changed lately from 6.0.0 to 7.0.0 [1]. > Currently, the Openshift python client can't work with Kubernetes 7.0.0, > this caused by a version pinning issue (pulled in Kubernetes 7.0.0). > As a result of that, we are unable to run some of our tempest tests in > kuryr-kubernetes project. > > As a temporary (till an Openshift version that supports kubernets 7.0.0 > will be released) we would like to suggest to set back kubernetes upper > constraint to 6.0.0 [2]. How long til a version that supports >=7.0.0 comes out? > > Do you see any problem with this approach? > > Best regards > Yossi > > [1] - https://review.openstack.org/#/c/594495/ > [2] - https://review.openstack.org/#/c/595569/ -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gagehugo at gmail.com Thu Aug 30 16:13:43 2018 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 30 Aug 2018 11:13:43 -0500 Subject: [openstack-dev] Stepping down as keystone core In-Reply-To: References: Message-ID: Thanks for all the help Samuel. I remember a couple instances when I first started contributing to keystone where you helped me out and I am extremely grateful. It was great working with you, and hopefully we will still see you around! On Wed, Aug 29, 2018 at 2:33 PM Lance Bragstad wrote: > Samuel, > > Thanks for all the dedication and hard work upstream. 
I'm relieved that > you won't be too far away and that you're still involved with the Outreachy > programs. You played an instrumental role in getting keystone involved with > that community. > > As always, we'd be happy to have you back in the event your work involves > keystone again. > > Best, > > Lance > > On Wed, Aug 29, 2018 at 2:25 PM Samuel de Medeiros Queiroz < > samueldmq at gmail.com> wrote: > >> Hi Stackers! >> >> It has been both an honor and privilege to serve this community as a >> keystone core. >> >> I am in a position that does not allow me enough time to devote reviewing >> code and participating of the development process in keystone. As a >> consequence, I am stepping down as a core reviewer. >> >> A big thank you for your trust and for helping me to grow both as a >> person and as professional during this time in service. >> >> I will stay around: I am doing research on interoperability for my >> masters degree, which means I am around the SDK project. In addition to >> that, I recently became the Outreachy coordinator for OpenStack. >> >> Let me know if you are interested on one of those things. >> >> Get in touch on #openstack-outreachy, #openstack-sdks or >> #openstack-keystone. >> >> Thanks, >> Samuel de Medeiros Queiroz (samueldmq) >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lyarwood at redhat.com Thu Aug 30 16:20:19 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 30 Aug 2018 17:20:19 +0100 Subject: [openstack-dev] [stable][nova] Nominating melwitt for nova stable core In-Reply-To: <4e8e03b4-175a-96dc-7aa4-d89ddbad2aa5@gmail.com> References: <4e8e03b4-175a-96dc-7aa4-d89ddbad2aa5@gmail.com> Message-ID: <20180830162019.slkhqxdbdtrtxnuj@lyarwood.usersys.redhat.com> On 28-08-18 15:26:02, Matt Riedemann wrote: > I hereby nominate Melanie Witt for nova stable core. Mel has shown that she > knows the stable branch policy and is also an active reviewer of nova stable > changes. > > +1/-1 comes from the stable-maint-core team [1] and then after a week with > no negative votes I think it's a done deal. Of course +1/-1 from existing > nova-stable-maint [2] is also good feedback. > > [1] https://review.openstack.org/#/admin/groups/530,members > [2] https://review.openstack.org/#/admin/groups/540,members +1 from me FWIW. Thanks, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From anikethgireesh at gmail.com Thu Aug 30 16:31:09 2018 From: anikethgireesh at gmail.com (Aniketh Gireesh) Date: Thu, 30 Aug 2018 22:01:09 +0530 Subject: [openstack-dev] Inquiry about an opportunity to do a thesis project in collaboration with a OpenStack project. Message-ID: Hi, I am Aniketh Girish, a Junior year Computer Science Engineering student at the Amrita University, Kerala, India. I’m writing to you to inquire about the possibility to do my thesis project in accordance with the OpenStack community and a project in the community. I had initiated to contribute towards OpenStack a few months back. I took some time off since I was selected to participate in Google Summer of code with GNU Linux organization. 
For the last few months, I have been mainly focusing on implementing and learning about advanced internet protocols. My primary interest leans towards network security, with a particular interest in the Networking protocol and cloud computing infrastructures. A selected project in my field of interest would be when, I was selected as a Google Summer of Code 2018, where I am working on the project Wget2 under GNU Linux organisation. This project involves adding support for DNS over HTTPS in Wget2. DNS over HTTPS(DoH) is a web protocol that argues for sending DNS requests and receiving DNS responses via HTTPS connections, hence providing query confidentiality. Therefore to provide such a name resolution, I devised a library where I implemented the DNS protocol by facilitating the library to create the DNS packet/request, queried A, AAAA, CNAME records and implemented the DoH protocol by parsing the DNS wire format from the HTTPS response body. It is about time for me to look for a promising bachelor thesis project in my field of interest. I would like to know if there are any possibilities for me to work together with OpenStack in a project as a part of my thesis research. Hope to hear back soon. Cheers. -- Aniketh Girish Member at FOSS at Amrita Amrita University Github | GitLab | Blog | Website "For the Love of Code." -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean.mcginnis at gmx.com Thu Aug 30 16:33:39 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 30 Aug 2018 11:33:39 -0500 Subject: [openstack-dev] [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <20180830084608.g76maohgpxbmqvce@localhost> References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> <60e4654e-91ba-7f14-a6d9-7a588c17baee@gmail.com> <8EF14CDA-F135-4ED9-A9B0-1654CDC08D64@cern.ch> <3cae56bb-cca3-d251-e46f-63c328f254d2@gmail.com> <20180830084608.g76maohgpxbmqvce@localhost> Message-ID: <20180830163338.GB19523@sm-workstation> > > > > Yeah it's already on the PTG agenda [1][2]. I started the thread because I > > wanted to get the ball rolling as early as possible, and with people that > > won't attend the PTG and/or the Forum, to weigh in on not only the known > > issues with cross-cell migration but also the things I'm not thinking about. > > > > [1] https://etherpad.openstack.org/p/nova-ptg-stein > > [2] https://etherpad.openstack.org/p/nova-ptg-stein-cells > > > > -- > > > > Thanks, > > > > Matt > > > > Should we also add the topic to the Thursday Cinder-Nova slot in case > there are some questions where the Cinder team can assist? > > Cheers, > Gorka. > Good idea. That will be a good time to circle back between the teams to see if any Cinder needs come up that we can still have time to talk through and see if we can get work started. From openstack at fried.cc Thu Aug 30 16:34:03 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 30 Aug 2018 11:34:03 -0500 Subject: [openstack-dev] [nova][placement] Freezing placement for extraction Message-ID: Greetings. The captains of placement extraction have declared readiness to begin the process of seeding the new repository (once [1] has finished merging). As such, we are freezing development in the affected portions of the openstack/nova repository until this process is completed. 
We're relying on our active placement reviewers noticing any patches that touch these "affected portions" and, if that reviewer is not a nova core, bringing them to the attention of one, so we can put a -2 on it. Once the extraction is complete [2], any such frozen patches should be abandoned and reproposed to the openstack/placement repository. Since there will be an interval during which placement code will exist in both repositories, but before $world has cut over to using openstack/placement, it is possible that some crucial fix will still need to be merged into the openstack/nova side. In this case, the fix must be proposed to *both* repositories, and the justification for its existence in openstack/nova made clear. For more details on the technical aspects of the extraction process, refer to this thread [3]. For information on the procedural/governance process we will be following, see [4]. Please let us know if you have any questions or concerns, either via this thread or in #openstack-placement. [1] https://review.openstack.org/#/c/597220/ [2] meaning that we've merged the initial glut of patches necessary to repath everything and get tests passing [3] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133781.html [4] https://docs.openstack.org/infra/manual/creators.html From juliaashleykreger at gmail.com Thu Aug 30 16:39:53 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 30 Aug 2018 09:39:53 -0700 Subject: [openstack-dev] Inquiry about an opportunity to do a thesis project in collaboration with a OpenStack project. In-Reply-To: References: Message-ID: Greetings! Welcome to the community! Your interests seem to span quite a bit of the OpenStack community, so I think it might be a good idea for you possibly look at the individual teams that interest you the most, and reach out to those teams and engage in discussion from there. 
What may be a good idea is to look at the Rocky cycle release highlights[1] as they provide high level summaries and what the recently added features were for each project as part of this past development cycle. We're just now starting the Stein cycle, so now is really the perfect time to join in. -Julia [1]: https://releases.openstack.org/rocky/highlights.html On Thu, Aug 30, 2018 at 9:31 AM Aniketh Gireesh wrote: > > Hi, > > I am Aniketh Girish, a Junior year Computer Science Engineering student at the Amrita University, Kerala, India. I’m writing to you to inquire about the possibility to do my thesis project in accordance with the OpenStack community and a project in the community. > > I had initiated to contribute towards OpenStack a few months back. I took some time off since I was selected to participate in Google Summer of code with GNU Linux organization. For the last few months, I have been mainly focusing on implementing and learning about advanced internet protocols. My primary interest leans towards network security, with a particular interest in the Networking protocol and cloud computing infrastructures. > > A selected project in my field of interest would be when, I was selected as a Google Summer of Code 2018, where I am working on the project Wget2 under GNU Linux organisation. This project involves adding support for DNS over HTTPS in Wget2. DNS over HTTPS(DoH) is a web protocol that argues for sending DNS requests and receiving DNS responses via HTTPS connections, hence providing query confidentiality. Therefore to provide such a name resolution, I devised a library where I implemented the DNS protocol by facilitating the library to create the DNS packet/request, queried A, AAAA, CNAME records and implemented the DoH protocol by parsing the DNS wire format from the HTTPS response body. > > It is about time for me to look for a promising bachelor thesis project in my field of interest. 
I would like to know if there are any possibilities for me to work together with OpenStack in a project as a part of my thesis research. > > Hope to hear back soon. > > Cheers. > -- > Aniketh Girish > Member at FOSS at Amrita > Amrita University > Github | GitLab | Blog | Website > > "For the Love of Code." > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jimmy at openstack.org Thu Aug 30 16:49:40 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 30 Aug 2018 11:49:40 -0500 Subject: [openstack-dev] Inquiry about an opportunity to do a thesis project in collaboration with a OpenStack project. In-Reply-To: References: Message-ID: <5B882024.4030402@openstack.org> Thank you for your intereset :) Another great place to start is here: https://www.openstack.org/community This page has links to the Contributor Guide, as well as info on all of the OpenStack projects. Another great way to catch up with the community is the OpenStack Summit. We have one coming up in Berlin and we also offer travel support (this is the last day to apply): https://www.openstack.org/summit/berlin-2018/ https://www.openstack.org/summit/berlin-2018/travel/#travel-support In addition to travel support we offer very generous discounts to students. Please email summitreg at openstack.org if you want more info on this. If you aren't able to make it to the summit, you can catch up on most of the presentations here: https://www.openstack.org/videos/summits We typically post all the videos within a week or so of the end of the event. Cheers and welcome to the OpenStack Community! Jimmy Julia Kreger wrote: > Greetings! > > Welcome to the community! 
> > Your interests seem to span quite a bit of the OpenStack community, so > I think it might be a good idea for you possibly look at the > individual teams that interest you the most, and reach out to those > teams and engage in discussion from there. > > What may be a good idea is to look at the Rocky cycle release > highlights[1] as they provide high level summaries and what the > recently added features were for each project as part of this past > development cycle. We're just now starting the Stein cycle, so now is > really the perfect time to join in. > > -Julia > > [1]: https://releases.openstack.org/rocky/highlights.html > On Thu, Aug 30, 2018 at 9:31 AM Aniketh Gireesh > wrote: >> Hi, >> >> I am Aniketh Girish, a Junior year Computer Science Engineering student at the Amrita University, Kerala, India. I’m writing to you to inquire about the possibility to do my thesis project in accordance with the OpenStack community and a project in the community. >> >> I had initiated to contribute towards OpenStack a few months back. I took some time off since I was selected to participate in Google Summer of code with GNU Linux organization. For the last few months, I have been mainly focusing on implementing and learning about advanced internet protocols. My primary interest leans towards network security, with a particular interest in the Networking protocol and cloud computing infrastructures. >> >> A selected project in my field of interest would be when, I was selected as a Google Summer of Code 2018, where I am working on the project Wget2 under GNU Linux organisation. This project involves adding support for DNS over HTTPS in Wget2. DNS over HTTPS(DoH) is a web protocol that argues for sending DNS requests and receiving DNS responses via HTTPS connections, hence providing query confidentiality. 
To provide such name resolution, I devised a library that implements the DNS protocol: it creates the DNS packet/request, queries A, AAAA, and CNAME records, and implements the DoH protocol by parsing the DNS wire format from the HTTPS response body. >> >> It is about time for me to look for a promising bachelor thesis project in my field of interest. I would like to know if there are any possibilities for me to work together with OpenStack in a project as a part of my thesis research. >> >> Hope to hear back soon. >> >> Cheers. >> -- >> Aniketh Girish >> Member at FOSS at Amrita >> Amrita University >> Github | GitLab | Blog | Website >> >> "For the Love of Code." >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Thu Aug 30 17:00:19 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 30 Aug 2018 18:00:19 +0100 (BST) Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, There was nothing specific on the agenda this week, so much of the API-SIG meeting was spent discussing API-related topics that we'd encountered recently. One was: K8s Custom Resources [9]: Cool or Chaos? The answer is, of course, "it depends". Another was a recent thread asking about the relevance of Open API 3.0 in the OpenStack environment [10].
We had trouble deciding what the desired outcome is, so for now we are merely tracking the thread. In the world of guidelines and bugs, not a lot of recent action. Some approved changes need to be rebased to actually get published, and the stack about version discovery [11] needs to be refreshed and potentially adopted by someone who is not Monty. If you're reading, Monty, and have thoughts on that, share them. Next week we will be actively planning [7] for the PTG [8]. We have a room on Monday. We always have interesting and fun discussions when we're at the PTG; join us. As always, if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * None # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community.
* None # Guidelines Currently Under Review [3] * Add an api-design doc with design advice https://review.openstack.org/592003 * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-sig,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://storyboard.openstack.org/#!/project/1039 [6] https://git.openstack.org/cgit/openstack/api-sig [7] https://etherpad.openstack.org/p/api-sig-stein-ptg [8] https://www.openstack.org/ptg/ [9] https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/ [10] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133960.html [11] https://review.openstack.org/#/c/459405/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From fungi at yuggoth.org Thu Aug 30 17:03:50 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 17:03:50 +0000 Subject: [openstack-dev] [all] Bringing the community together (combine the lists!) Message-ID: <20180830170350.wrz4wlanb276kncb@yuggoth.org> The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists on lists.openstack.org see an increasing amount of cross-posting and thread fragmentation as conversants attempt to reach various corners of our community with topics of interest to one or more (and sometimes all) of those overlapping groups of subscribers. For some time we've been discussing and trying ways to bring our developers, distributors, operators and end users together into a less isolated, more cohesive community. An option which keeps coming up is to combine these different but overlapping mailing lists into one single discussion list. As we covered[1] in Vancouver at the last Forum there are a lot of potential up-sides: 1. 
People with questions are no longer asking them in a different place than many of the people who have the answers to those questions (the "not for usage questions" in the openstack-dev ML title only serves to drive the wedge between developers and users deeper). 2. The openstack-sigs mailing list hasn't seen much uptake (an order of magnitude fewer subscribers and posts) compared to the other three lists, yet it was intended to bridge the communication gap between them; combining those lists would have been a better solution to the problem than adding yet another turned out to be. 3. At least one out of every ten messages to any of these lists is cross-posted to one or more of the others, because we have topics that span across these divided groups yet nobody is quite sure which one is the best venue for them; combining would eliminate the fragmented/duplicative/divergent discussion which results from participants following up on the different subsets of lists to which they're subscribed. 4. Half of the people who are actively posting to at least one of the four lists subscribe to two or more, and a quarter to three if not all four; they would no longer be receiving multiple copies of the various cross-posts if these lists were combined. The proposal is simple: create a new openstack-discuss mailing list to cover all the above sorts of discussion and stop using the other four. As the OpenStack ecosystem continues to mature and its software and services stabilize, the nature of our discourse is changing (becoming increasingly focused with fewer heated debates, distilling to a more manageable volume), so this option is looking much more attractive than in the past.
That's not to say it's quiet (we're looking at roughly 40 messages a day across them on average, after deduplicating the cross-posts), but we've grown accustomed to tagging the subjects of these messages to make it easier for other participants to quickly filter topics which are relevant to them and so would want a good set of guidelines on how to do so for the combined list (a suggested set is already being brainstormed[2]). None of this is set in stone of course, and I expect a lot of continued discussion across these lists (oh, the irony) while we try to settle on a plan, so definitely please follow up with your questions, concerns, ideas, et cetera. As an aside, some of you have probably also seen me talking about experiments I've been doing with Mailman 3... I'm hoping new features in its Hyperkitty and Postorius WebUIs make some of this easier or more accessible to casual participants (particularly in light of the combined list scenario), but none of the plan above hinges on MM3 and should be entirely doable with the MM2 version we're currently using. Also, in case you were wondering, no the irony of cross-posting this message to four mailing lists is not lost on me. ;) [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community [2] https://etherpad.openstack.org/p/common-openstack-ml-topics -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rico.lin.guanyu at gmail.com Thu Aug 30 17:13:58 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 31 Aug 2018 01:13:58 +0800 Subject: [openstack-dev] [Openstack-sigs] [all] Bringing the community together (combine the lists!) 
In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: +1 on this idea. People have been posting the same topics in different places and getting feedback from ops or devs, but never together; this will help people hold the discussion at the same table. What needs to be done for this is full topic-category support under the `options` page, so people can filter emails properly. On Fri, Aug 31, 2018 at 1:04 AM Jeremy Stanley wrote: > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. > [trim] > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics > -- > Jeremy Stanley > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Aug 30 17:17:14 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 13:17:14 -0400 Subject: [openstack-dev] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <1535649366-sup-1027@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-08-30 17:03:50 +0000: > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. > [trim] > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics I fully support the idea of merging the lists. Doug From chris at openstack.org Thu Aug 30 17:19:50 2018 From: chris at openstack.org (Chris Hoge) Date: Thu, 30 Aug 2018 10:19:50 -0700 Subject: [openstack-dev] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: I also propose that we merge the interop-wg mailing list as well, since the volume on that list is small but topics posted to it are of general interest to the community.
Chris Hoge (Interop WG Secretary, amongst other things) > On Aug 30, 2018, at 10:03 AM, Jeremy Stanley wrote: > > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. > [trim] > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics > -- > Jeremy Stanley > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From jimmy at openstack.org Thu Aug 30 17:19:55 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 30 Aug 2018 12:19:55 -0500 Subject: [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <5B88273B.3000206@openstack.org> Absolutely support merging. Jeremy Stanley wrote: > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. > [trim] > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From juliaashleykreger at gmail.com Thu Aug 30 17:20:33 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 30 Aug 2018 10:20:33 -0700 Subject: [openstack-dev] [ironic][tripleo][edge] Discussing ironic federation and distributed deployments In-Reply-To: References: Message-ID: Greetings everyone, It looks like the most agreeable time on the doodle[1] seems to be Tuesday September 4th at 13:00 UTC. Are there any objections to using this time? If not, I'll go ahead and create an etherpad and set up a BlueJeans call for that time to enable high-bandwidth discussion. -Julia [1]: https://doodle.com/poll/y355wt97heffvp3m On Mon, Aug 27, 2018 at 9:53 AM Julia Kreger wrote: > > Greetings everyone!
> > We in Ironic land would like to go into the PTG with some additional > thoughts, requirements, and ideas as it relates to distributed and > geographically distributed deployments. > [trim] From juliaashleykreger at gmail.com Thu Aug 30 17:24:49 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 30 Aug 2018 10:24:49 -0700 Subject: [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: I fully support merging the proposed lists, as well as the interop-wg list that Chris Hoge proposed. I look forward to the day when cross-posting is no longer a necessary evil. -Julia On Thu, Aug 30, 2018 at 10:04 AM Jeremy Stanley wrote: > > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. > [trim] > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Thu Aug 30 17:26:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 13:26:40 -0400 Subject: [openstack-dev] [neutron][python3] Neutron and stadium - python 3 community goal changes coming soon In-Reply-To: References: <20180829211424.a7skfdjygykehwga@bishop> <1535631951-sup-3189@lrrr.local> <20180830151629.2ygn3i2eshbmn33a@bishop> Message-ID: <1535649994-sup-6611@lrrr.local> Excerpts from Andreas Jaeger's message of 2018-08-30 17:32:46 +0200: > On 2018-08-30 17:16, Nate Johnston wrote: > > On Thu, Aug 30, 2018 at 08:28:40AM -0400, Doug Hellmann wrote: > >> Excerpts from Michel Peterson's message of 2018-08-30 14:38:00 +0300: > >>> On Thu, Aug 30, 2018 at 12:14 AM, Nate Johnston > >>> wrote: >
>>> > >>>> Progress is also being tracked in a wiki page [4]. > >>>> > >>>> [4] https://wiki.openstack.org/wiki/Python3 > >>>> > >>> > >>> Should that wiki page only track teams, or should subteams also be included > >>> there? I'm asking because it seems that some subteams appear there while > >>> the majority doesn't and perhaps we want to standardize that. > >> > >> It would be great to include information about sub-teams. If there > >> are several, like in the neutron case, it probably makes sense to > >> create a separate section of the page with a table for all of them, > >> just to keep things organized. > > > > Great! I'll do that today. > > > > Nate > > > > P.S. By the way, at the top there is a note encouraging people to join > > #openstack-python3, but when I try to do so I get rejected: > > > > 11:13 Error(473): #openstack-python3 Cannot join channel (+i) - you must be invited > > > > I figure either the wiki page or the channel is out of sync, but I am > > not sure which one. > > wiki page is wrong - the channel is dead. I'll update the wiki page, > > Andreas Thanks, Andreas! From emilien at redhat.com Thu Aug 30 17:29:12 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 30 Aug 2018 13:29:12 -0400 Subject: [openstack-dev] [Openstack-operators] [ironic][tripleo][edge] Discussing ironic federation and distributed deployments In-Reply-To: References: Message-ID: On Thu, Aug 30, 2018 at 1:21 PM Julia Kreger wrote: > Greetings everyone, > > It looks like the most agreeable time on the doodle[1] is > Tuesday September 4th at 13:00 UTC. Are there any objections to using > this time? > > If not, I'll go ahead and create an etherpad, and setup a bluejeans > call for that time to enable high bandwidth discussion. > TripleO sessions start on Wednesday, so +1 from us (unless I missed something). -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Thu Aug 30 17:31:46 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 13:31:46 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: References: <1535398507-sup-4428@lrrr.local> <1535492703-sup-9104@lrrr.local> <20180829211411.hua6wpepu3xpnndh@bishop> <1535579316-sup-4280@lrrr.local> <1535587588-sup-7741@lrrr.local> Message-ID: <1535650262-sup-6755@lrrr.local> Excerpts from Michel Peterson's message of 2018-08-30 12:51:04 +0300: > On Thu, Aug 30, 2018 at 3:11 AM, Doug Hellmann > wrote: > > > | import zuul job settings from project-config | openstack/networking-odl > > | https://review.openstack.org/597870 | master | > > | switch documentation job to new PTI | openstack/networking-odl > > | https://review.openstack.org/597871 | master | > > | add python 3.5 unit test job | openstack/networking-odl > > | https://review.openstack.org/597872 | master | > > | add python 3.6 unit test job | openstack/networking-odl > > | https://review.openstack.org/597873 | master | > > | import zuul job settings from project-config | openstack/networking-odl > > | https://review.openstack.org/597911 | stable/ocata | > > | import zuul job settings from project-config | openstack/networking-odl > > | https://review.openstack.org/597923 | stable/pike | > > | import zuul job settings from project-config | openstack/networking-odl > > | https://review.openstack.org/597938 | stable/queens | > > | import zuul job settings from project-config | openstack/networking-odl > > | https://review.openstack.org/597953 | stable/rocky | > > > In the case of networking-odl I know we also need a 'fix tox python3 > overrides' patch but that was not generated. I'm wondering if I should do > it manually or it will be generated at a later stage? Those patches were done a while ago, and are not part of the set of patches generated by the current scripts. I'm not sure why networking-odl didn't receive one. 
I suggest going ahead and creating that patch by hand. Doug From doug at doughellmann.com Thu Aug 30 17:33:52 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 13:33:52 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: References: <1535398507-sup-4428@lrrr.local> <1535492703-sup-9104@lrrr.local> <20180829211411.hua6wpepu3xpnndh@bishop> <1535579316-sup-4280@lrrr.local> <1535587588-sup-7741@lrrr.local> Message-ID: <1535650315-sup-9273@lrrr.local> Excerpts from Michel Peterson's message of 2018-08-30 09:43:30 +0300: > On Thu, Aug 30, 2018 at 3:11 AM, Doug Hellmann > wrote: > > > > OK, there are somewhere just over 100 patches for all of the neutron > > > repositories, so I'm going to wait for a quieter time of day to submit > > > them to avoid blocking other, smaller, bits of work. > > > > > > Doug > > > > Those patches are up for review now. - Doug > > > > > Doug, just a heads up the tool for python3-first is duplicating the same > block of code (e.g. https://review.openstack.org/# > /c/597873/1/.zuul.d/project.yaml ) and in some cases duplicating code that > already exists (e.g. https://review.openstack.org/# > /c/597872/1/.zuul.d/project.yaml ). > > Perhaps it will be good to review the tool before moving forward. Thanks for the heads-up. We ran into similar issues in one or two other places. The tool is, frankly, not all that smart. *Most* teams don't have any settings like this in their local zuul config, and given the relatively few cases where it's a problem I think it's going to be easier to just fix the patches by hand than to make the tool smarter. Doug > > Best, > M > > P.S. 
I've sent you the same through #openstack-infra From miguel at mlavalle.com Thu Aug 30 17:39:09 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 30 Aug 2018 12:39:09 -0500 Subject: [openstack-dev] [neutron][nova] Small bandwidth demo on the PTG In-Reply-To: <1535619300.3600.5@smtp.office365.com> References: <1535619300.3600.5@smtp.office365.com> Message-ID: Gibi, Bence, Thanks for putting this demo together. Please count me in Regards On Thu, Aug 30, 2018 at 3:55 AM, Balázs Gibizer wrote: > Hi, > > Based on the Nova PTG planning etherpad [1] there is a need to talk about > the current state of the bandwidth work [2][3]. Bence (rubasov) has already > planned to show a small demo to Neutron folks about the current state of > the implementation. So Bence and I are wondering about bringing that demo > close to the nova - neutron cross project session. That session is > currently planned to happen Thursday after lunch. So we are thinking about > showing the demo right before that session starts. It would start 30 > minutes before the nova - neutron cross project session. > > Are Nova folks also interested in seeing such a demo? > > If you are interested in seeing the demo please drop us a line or ping us > in IRC so we know who should we wait for. > > Cheers, > gibi > > [1] https://etherpad.openstack.org/p/nova-ptg-stein > [2] https://specs.openstack.org/openstack/neutron-specs/specs/ro > cky/minimum-bandwidth-allocation-placement-api.html > [3] https://specs.openstack.org/openstack/nova-specs/specs/rocky > /approved/bandwidth-resource-provider.html > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andy.mccrae at gmail.com Thu Aug 30 17:40:49 2018 From: andy.mccrae at gmail.com (Andy McCrae) Date: Thu, 30 Aug 2018 18:40:49 +0100 Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core Message-ID: Now that Rocky is all but ready it seems like a good time! Since changing roles I've not been able to keep up enough focus on reviews and other obligations - so I think it's time to step aside as a core reviewer. I want to say thanks to everybody in the community, I'm really proud to see the work we've done and how the OSA team has grown. I've learned a tonne from all of you - it's definitely been a great experience. Thanks, Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From yossi.boaron.1234 at gmail.com Thu Aug 30 17:42:38 2018 From: yossi.boaron.1234 at gmail.com (Yossi Boaron) Date: Thu, 30 Aug 2018 20:42:38 +0300 Subject: [openstack-dev] Update global upper constraint of Kubernetes from 7.0.0 to 6.0.0 In-Reply-To: <20180830161238.n2upurrjgfxwv3r5@gentoo.org> References: <20180830161238.n2upurrjgfxwv3r5@gentoo.org> Message-ID: On Thu, Aug 30, 2018, 19:12 Matthew Thode < prometheanfire at gentoo.org> wrote: > On 18-08-30 17:54:24, Yossi Boaron wrote: > > Hi All, > > > > Kubernetes upper constraint was changed lately from 6.0.0 to 7.0.0 [1]. > > Currently, the Openshift python client can't work with Kubernetes 7.0.0, > > this is caused by a version pinning issue (pulled in Kubernetes 7.0.0). > > As a result of that, we are unable to run some of our tempest tests in > > kuryr-kubernetes project. > > > > As a temporary measure (till an Openshift version that supports kubernetes 7.0.0 > > will be released) we would like to suggest to set back kubernetes upper > > constraint to 6.0.0 [2]. > > How long til a version that supports >=7.0.0 comes out? > > > > Do you see any problem with this approach? 
> > > > Best regards > > Yossi > > > > [1] - https://review.openstack.org/#/c/594495/ > > [2] - https://review.openstack.org/#/c/595569/ > > -- > Matthew Thode (prometheanfire) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Thu Aug 30 17:43:06 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 30 Aug 2018 12:43:06 -0500 Subject: [openstack-dev] [neutron][nova] Small bandwidth demo on the PTG In-Reply-To: <1535619300.3600.5@smtp.office365.com> References: <1535619300.3600.5@smtp.office365.com> Message-ID: Gibi, Bence, In fact, I added the demo explicitly to the Neutron PTG agenda from 1:30 to 2, to give it visibility Cheers On Thu, Aug 30, 2018 at 3:55 AM, Balázs Gibizer wrote: > Hi, > > Based on the Nova PTG planning etherpad [1] there is a need to talk about > the current state of the bandwidth work [2][3]. Bence (rubasov) has already > planned to show a small demo to Neutron folks about the current state of > the implementation. So Bence and I are wondering about bringing that demo > close to the nova - neutron cross project session. That session is > currently planned to happen Thursday after lunch. So we are thinking about > showing the demo right before that session starts. It would start 30 > minutes before the nova - neutron cross project session. > > Are Nova folks also interested in seeing such a demo? > > If you are interested in seeing the demo please drop us a line or ping us > in IRC so we know who should we wait for. 
> > Cheers, > gibi > > [1] https://etherpad.openstack.org/p/nova-ptg-stein > [2] https://specs.openstack.org/openstack/neutron-specs/specs/ro > cky/minimum-bandwidth-allocation-placement-api.html > [3] https://specs.openstack.org/openstack/nova-specs/specs/rocky > /approved/bandwidth-resource-provider.html > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Aug 30 17:46:18 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 13:46:18 -0400 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: References: Message-ID: <1535651089-sup-4093@lrrr.local> Excerpts from Edison Xiang's message of 2018-08-30 14:08:12 +0800: > Hey dims, > > Thanks for your reply. Your suggestion is very important. > > > what would be the impact to projects? > > what steps they would have to take? > > We can launch a project to publish OpenStack Projects APIs Schema for users > and developers. > But now OpenStack Projects have no APIs Schema definition. > Open API will not impact OpenStack Projects features they have, > but we need some volunteers to define every project APIs Schema by Open API > 3.0. > > > Do we have a sample/mock API where we can show that the Action and > Microversions can be declared to reflect reality and it can actually work > with the generated code? > Yeah, you can copy this yaml [1] into the editor [2] to generate server or > client code or try it out. > We can do more demos later. 
> > [1] > https://github.com/edisonxiang/OpenAPI-Specification/blob/master/examples/v3.0/petstore.yaml > [2] https://editor.swagger.io > > Best Regards, > Edison Xiang How does this proposal relate to the work that has already been done to build the API guide https://developer.openstack.org/api-guide/quick-start/ documentation? Doug > > On Wed, Aug 29, 2018 at 6:31 PM Davanum Srinivas wrote: > > > Edison, > > > > This is definitely a step in the right direction if we can pull it off. > > > > Given the previous experiences and the current situation of how and where > > we store the information currently and how we generate the website for the > > API(s), can you please outline > > - what would be the impact to projects? > > - what steps they would have to take? > > > > Also, the whole point of having these definitions is that the generated > > code works. Do we have a sample/mock API where we can show that the Action > > and Microversions can be declared to reflect reality and it can actually > > work with the generated code? > > > > Thanks, > > Dims > > > > On Wed, Aug 29, 2018 at 2:37 AM Edison Xiang > > wrote: > > > >> Hi team, > >> > >> As we know, Open API 3.0 was released in July 2017, about one year > >> ago. > >> Open API 3.0 supports some new features, like anyof, oneof and allof, that > >> Open API 2.0 (Swagger 2.0) does not. > >> Now OpenStack projects do not support Open API. > >> Also I found some old emails in the Mail List about supporting Open API > >> 2.0 in OpenStack. > >> > >> Some limitations are mentioned in the Mail List for OpenStack API: > >> 1. The POST */action APIs. > >> These APIs exist in lots of projects like nova, cinder. > >> These APIs have the same URI but the responses will be different when > >> the request is different. > >> 2. Micro versions. > >> These are controlled via headers, which are sometimes used to > >> describe behavioral changes in an API, not just request/response schema > >> changes. 
> >> > >> About the first limitation, we can find the solution in Open API 3.0. > >> The example [2] shows that we can define different request/response in > >> the same URI via the anyof feature in Open API 3.0. > >> > >> About the micro versions problem, I think it is not a limitation related > >> to a specific API standard. > >> We can list all micro versions API schema files in one directory like > >> nova/V2, > >> or we can list the schema changes between micro versions as the tempest > >> project did [3]. > >> > >> Based on Open API 3.0, it can bring lots of benefits for the OpenStack > >> Community and does not impact the current features the Community has. > >> For example, we can automatically generate API documents, different > >> language Clients(SDK) maybe for different micro versions, > >> and generate cloud tool adapters for OpenStack, like ansible modules, > >> terraform providers and so on. > >> Also we can make an API UI to provide an online and visible API search > >> and API calling for every OpenStack API. > >> 3rd party developers can also do some self-defined development. 
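A minimal Python sketch of the "same URI, different request bodies" pattern discussed above, mimicking the semantics that OpenAPI 3.0's oneOf keyword expresses for Nova-style POST /servers/{id}/action requests; the action names and the tiny validator are illustrative, not any project's real schema:

```python
# Nova-style action bodies: the request is valid only if it matches
# exactly one of the candidate schemas -- what oneOf captures.
# Action names and allowed keys here are invented for illustration.
ACTION_SCHEMAS = {
    'reboot': {'type'},        # e.g. {"reboot": {"type": "HARD"}}
    'resize': {'flavorRef'},   # e.g. {"resize": {"flavorRef": "2"}}
}

def validate_action(body):
    """Return the matched action name, rejecting zero or multiple matches."""
    matches = [name for name, keys in ACTION_SCHEMAS.items()
               if name in body and set(body[name]) <= keys]
    if len(matches) != 1:
        raise ValueError('body must match exactly one action schema')
    return matches[0]

print(validate_action({'reboot': {'type': 'HARD'}}))  # reboot
```

A oneOf-based spec lets generated servers and clients perform exactly this dispatch, which is what the single-URI */action endpoints need.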
> >> [1] https://github.com/OAI/OpenAPI-Specification > >> [2] > >> https://github.com/edisonxiang/OpenAPI-Specification/blob/master/examples/v3.0/petstore.yaml#L94-L109 > >> [3] > >> https://github.com/openstack/tempest/tree/master/tempest/lib/api_schema/response/compute > >> > >> Best Regards, > >> Edison Xiang > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > > -- > > Davanum Srinivas :: https://twitter.com/dims > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From yossi.boaron.1234 at gmail.com Thu Aug 30 17:48:39 2018 From: yossi.boaron.1234 at gmail.com (Yossi Boaron) Date: Thu, 30 Aug 2018 20:48:39 +0300 Subject: [openstack-dev] Update global upper constraint of Kubernetes from 7.0.0 to 6.0.0 In-Reply-To: <20180830161238.n2upurrjgfxwv3r5@gentoo.org> References: <20180830161238.n2upurrjgfxwv3r5@gentoo.org> Message-ID: Hi Matthew, Seems that Openshift version 0.7 was released a few hours ago, this version should work properly with kubernetes. I'll update my PR to change openshift upper constraint to 0.7 and leave K8S at 7.0.0 10x Yossi On Thu, Aug 30, 2018, 19:12 Matthew Thode < prometheanfire at gentoo.org> wrote: > On 18-08-30 17:54:24, Yossi Boaron wrote: > > Hi All, > > > > Kubernetes upper constraint was changed lately from 6.0.0 to 7.0.0 [1]. > > Currently, the Openshift python client can't work with Kubernetes 7.0.0, > > this is caused by a version pinning issue (pulled in Kubernetes 7.0.0). 
> > As a result of that, we are unable to run some of our tempest tests in > > kuryr-kubernetes project. > > > > As a temporary measure (till an Openshift version that supports kubernetes 7.0.0 > > will be released) we would like to suggest to set back kubernetes upper > > constraint to 6.0.0 [2]. > > How long til a version that supports >=7.0.0 comes out? > > > > Do you see any problem with this approach? > > > > Best regards > > Yossi > > > > [1] - https://review.openstack.org/#/c/594495/ > > [2] - https://review.openstack.org/#/c/595569/ > > -- > Matthew Thode (prometheanfire) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianyrchoi at gmail.com Thu Aug 30 18:05:35 2018 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Fri, 31 Aug 2018 03:05:35 +0900 Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core In-Reply-To: References: Message-ID: <7558af44-e1aa-3fe9-4cf8-d9588f9d64a5@gmail.com> Hello Andy, Thanks a lot for your work on the OpenStack-Ansible team. It was a great pleasure to collaborate with you across different teams (me: I18n team) during the Ocata and Pike release cycles, and I think the I18n team now has better insight on OpenStack-Ansible thanks to the help from you and so many kind contributors. With many thanks, /Ian Andy McCrae wrote on 8/31/2018 2:40 AM: > Now that Rocky is all but ready it seems like a good time! Since > changing roles I've not been able to keep up enough focus on reviews > and other obligations - so I think it's time to step aside as a core > reviewer. > > I want to say thanks to everybody in the community, I'm really proud > to see the work we've done and how the OSA team has grown. 
I've > learned a tonne from all of you - it's definitely been a great experience. > > Thanks, > Andy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From melwittt at gmail.com Thu Aug 30 18:13:17 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 30 Aug 2018 11:13:17 -0700 Subject: [openstack-dev] [neutron][nova] Small bandwidth demo on the PTG In-Reply-To: References: <1535619300.3600.5@smtp.office365.com> Message-ID: <4bb21c51-0092-70f3-a535-8fa59adae7ae@gmail.com> On Thu, 30 Aug 2018 12:43:06 -0500, Miguel Lavalle wrote: > Gibi, Bence, > > In fact, I added the demo explicitly to the Neutron PTG agenda from 1:30 > to 2, to give it visibility I'm interested in seeing the demo too. Will the demo be shown in the Neutron room or the Nova room? Historically, lunch has ended at 1:30, so this will be during the same time as the Neutron/Nova cross project time. Should we just co-locate together for the demo and the session? I expect anyone watching the demo will want to participate in the Neutron/Nova session as well. Either room is fine by me. -melanie > On Thu, Aug 30, 2018 at 3:55 AM, Balázs Gibizer > > wrote: > > Hi, > > Based on the Nova PTG planning etherpad [1] there is a need to talk > about the current state of the bandwidth work [2][3]. Bence > (rubasov) has already planned to show a small demo to Neutron folks > about the current state of the implementation. So Bence and I are > wondering about bringing that demo close to the nova - neutron cross > project session. That session is currently planned to happen > Thursday after lunch. So we are thinking about showing the demo right > before that session starts. It would start 30 minutes before the > nova - neutron cross project session. 
> > Are Nova folks also interested in seeing such a demo? > > If you are interested in seeing the demo please drop us a line or > ping us in IRC so we know who should we wait for. > > Cheers, > gibi > > [1] https://etherpad.openstack.org/p/nova-ptg-stein > > [2] > https://specs.openstack.org/openstack/neutron-specs/specs/rocky/minimum-bandwidth-allocation-placement-api.html > > [3] > https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/bandwidth-resource-provider.html > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Thu Aug 30 18:14:43 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 14:14:43 -0400 Subject: [openstack-dev] [goals][python3] modifying the zuul setting import patches Message-ID: <1535652724-sup-4333@lrrr.local> A few reviewers have suggested using some YAML features that allow repeated sections to be inserted by reference, instead of copying and pasting content in different parts of the Zuul configuration. That's a great idea! However, please do that AFTER the migration is complete. For now, to make the transition smooth, please just take the patches as they are (unless there is something wrong with them, of course). If significant changes to existing jobs are made before the cleanup is done, then those versions of the job variants might not be used when the patch is tested, and when the clean-up patch lands your project might be broken. 
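For reference, the YAML feature the reviewers are suggesting is anchors and aliases; a hypothetical .zuul.d/project.yaml fragment (job names invented for illustration) shows the idea, which, as noted above, should wait until after the migration lands:

```yaml
# An anchor (&) defines the job list once; aliases (*) reuse it by
# reference instead of copy-pasting the same block in each pipeline.
- project:
    check:
      jobs: &shared_jobs
        - openstack-tox-pep8
        - openstack-tox-py36
    gate:
      jobs: *shared_jobs
```

The trade-off is exactly the one described in the message: aliases are expanded when the file is parsed, so restructuring jobs this way before the project-config settings are removed can mask which variant a test run actually used.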
When we approve the change in project-config to remove the settings there, then any other changes to the job settings in-tree can be self-testing, and even if it's not self-testing it will be easier to see that a job setting changed right before the job started failing. So, again, please don't block the migration work on aesthetic concerns. This will all go much more smoothly and quickly if we save those changes for later. Doug From prometheanfire at gentoo.org Thu Aug 30 18:56:14 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 30 Aug 2018 13:56:14 -0500 Subject: [openstack-dev] Update global upper constraint of Kubernetes from 7.0.0 to 6.0.0 In-Reply-To: References: <20180830161238.n2upurrjgfxwv3r5@gentoo.org> Message-ID: <20180830185614.kc4nnjoyn7pztp4b@gentoo.org> On 18-08-30 20:48:39, Yossi Boaron wrote: > Hi Matthew, > > Seems that Openshift version 0.7 was released a few hours ago, this version > should work properly with kubernetes. > I'll update my PR to change openshift upper constraint to 0.7 and leave K8S > at 7.0.0 > > 10x > Yossi > > On Thu, Aug 30, 2018, 19:12 Matthew Thode < > prometheanfire at gentoo.org> wrote: > > > On 18-08-30 17:54:24, Yossi Boaron wrote: > > > Hi All, > > > > > > Kubernetes upper constraint was changed lately from 6.0.0 to 7.0.0 [1]. > > > Currently, the Openshift python client can't work with Kubernetes 7.0.0, > > > this is caused by a version pinning issue (pulled in Kubernetes 7.0.0). > > > As a result of that, we are unable to run some of our tempest tests in > > > kuryr-kubernetes project. > > > > > > As a temporary measure (till an Openshift version that supports kubernetes 7.0.0 > > > will be released) we would like to suggest to set back kubernetes upper > > > constraint to 6.0.0 [2]. > > > > How long til a version that supports >=7.0.0 comes out? > > > > > > > > Do you see any problem with this approach? 
> > > Sounds great, thanks -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From chris.friesen at windriver.com Thu Aug 30 18:57:31 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 30 Aug 2018 12:57:31 -0600 Subject: [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <5B883E1B.2070101@windriver.com> On 08/30/2018 11:03 AM, Jeremy Stanley wrote: > The proposal is simple: create a new openstack-discuss mailing list > to cover all the above sorts of discussion and stop using the other > four. Do we want to merge usage and development onto one list? That could be a busy list for someone who's just asking a simple usage question. Alternately, if we are going to merge everything then why not just use the "openstack" mailing list since it already exists and there are references to it on the web. (Or do you want to force people to move to something new to make them recognize that something has changed?) Chris From doug at doughellmann.com Thu Aug 30 19:07:39 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 15:07:39 -0400 Subject: [openstack-dev] [goals][python3] small change of schedule for changing packaging jobs Message-ID: <1535655899-sup-5725@lrrr.local> I had originally been planning to wait for the cycle-trailing projects to finish Rocky before updating the release jobs to use the python3 versions. However, Sean reminded me today that we extended the deadline for cycle-trailing projects to 2 months from now, and I don't think we want to wait that long to change the other projects. 
I have prepared a patch [1] to switch all official python projects to the python3 publishing job right away, so we have time to resolve any issues that causes before the first milestone of the Stein cycle. As I mentioned before, when that patch lands, it will add a new check job to all projects so that if any packaging-related files are changed the ability to package the project is tested. Let me know if you run into any difficulties. Doug [1] https://review.openstack.org/598323 From james.slagle at gmail.com Thu Aug 30 19:53:34 2018 From: james.slagle at gmail.com (James Slagle) Date: Thu, 30 Aug 2018 15:53:34 -0400 Subject: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad) In-Reply-To: References: Message-ID: On Mon, Aug 20, 2018 at 4:47 PM James Slagle wrote: > https://etherpad.openstack.org/p/tripleo-edge-squad-status Several folks have signed up for the squad, so I've added a poll in the etherpad to pick a meeting time. > -- > -- James Slagle > -- -- -- James Slagle -- From knikolla at bu.edu Thu Aug 30 20:09:12 2018 From: knikolla at bu.edu (Kristi Nikolla) Date: Thu, 30 Aug 2018 16:09:12 -0400 Subject: [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <5B883E1B.2070101@windriver.com> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> Message-ID: <12A7C26B-D5D1-4AD3-9FDE-35B31FE706E3@bu.edu> I’m strongly in support of merging the lists. > On Aug 30, 2018, at 2:57 PM, Chris Friesen wrote: > > On 08/30/2018 11:03 AM, Jeremy Stanley wrote: > >> The proposal is simple: create a new openstack-discuss mailing list >> to cover all the above sorts of discussion and stop using the other >> four. > > Do we want to merge usage and development onto one list? That could be a busy list for someone who's just asking a simple usage question. True, but it would bring more visibility to the developers about troubles that users are having. 
> > Alternately, if we are going to merge everything then why not just use the "openstack" mailing list since it already exists and there are references to it on the web. > > (Or do you want to force people to move to something new to make them recognize that something has changed?) > > Chris > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From doug at doughellmann.com Thu Aug 30 20:12:41 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 16:12:41 -0400 Subject: [openstack-dev] [goal][python3] week 3 update In-Reply-To: <1535587356-sup-2136@lrrr.local> References: <1535398507-sup-4428@lrrr.local> <1535550637-sup-5597@lrrr.local> <1535561981-sup-3301@lrrr.local> <1535570540-sup-7062@lrrr.local> <1535587356-sup-2136@lrrr.local> Message-ID: <1535659912-sup-2670@lrrr.local> Excerpts from Doug Hellmann's message of 2018-08-29 20:04:16 -0400: > Excerpts from Doug Hellmann's message of 2018-08-29 15:22:56 -0400: > > Excerpts from David Peacock's message of 2018-08-29 15:12:03 -0400: > > > On Wed, Aug 29, 2018 at 1:02 PM Doug Hellmann wrote: > > > > > > > Excerpts from Doug Hellmann's message of 2018-08-29 09:50:58 -0400: > > > > > Excerpts from David Peacock's message of 2018-08-29 08:53:26 -0400: > > > > > > On Mon, Aug 27, 2018 at 3:38 PM Doug Hellmann > > > > wrote: > > > > > > > > > > > > > If your team is ready to have your zuul settings migrated, please > > > > > > > let us know by following up to this email. We will start with the > > > > > > > volunteers, and then work our way through the other teams. 
> > > > > > > > > > > > > > > > > > > TripleO team is ready to participate. I'll coordinate on our end. > > > > > > > > > > I will generate the patches today and watch for a time when the CI load > > > > > is low to submit them. > > > > > > > > > > Doug > > > > > > > > > > > > > It appears that someone who is not listed as a goal champion has > > > > already submitted a bunch of patches for importing the zuul settings > > > > into the TripleO repositories without updating our tracking story. > > > > The keystone team elected to abandon a similar set of patches because > > > > some of them were incorrect. I don't know whether that applies to > > > > these. > > > > > > > > Do you want to review the ones that are open, or would you like for me > > > > to generate a new batch? > > > > > > > > Doug > > > > > > > > > > Please would you mind pasting me the reviews in question, then I'll take a > > > look and let you know which direction. > > > > > > Thanks! > > > > Here's the list of open changes I see right now: > > > > +----------------------------------------------+-------------------------------------+------------+--------------+-------------------------------------+---------------+ > > | Subject | Repo | Tests | Workflow | URL | Branch | > > +----------------------------------------------+-------------------------------------+------------+--------------+-------------------------------------+---------------+ > > | fix tox python3 overrides | openstack-infra/tripleo-ci | PASS | REVIEWED | https://review.openstack.org/588587 | master | > > | import zuul job settings from project-config | openstack/ansible-role-k8s-glance | FAILED | NEW | https://review.openstack.org/596021 | master | > > | import zuul job settings from project-config | openstack/ansible-role-k8s-keystone | FAILED | NEW | https://review.openstack.org/596022 | master | > > | import zuul job settings from project-config | openstack/ansible-role-k8s-mariadb | FAILED | NEW | 
https://review.openstack.org/596023 | master | > > | import zuul job settings from project-config | openstack/dib-utils | PASS | NEW | https://review.openstack.org/596024 | master | > > | fix tox python3 overrides | openstack/instack | PASS | REVIEWED | https://review.openstack.org/572904 | master | > > | import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596025 | master | > > | add python 3.6 unit test job | openstack/instack | PASS | NEW | https://review.openstack.org/596026 | master | > > | add python 3.6 unit test job | openstack/instack | PASS | NEW | https://review.openstack.org/596027 | master | > > | import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596085 | stable/ocata | > > | import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596105 | stable/pike | > > | import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596121 | stable/queens | > > | import zuul job settings from project-config | openstack/instack | PASS | NEW | https://review.openstack.org/596138 | stable/rocky | > > | import zuul job settings from project-config | openstack/instack-undercloud | FAILED | NEW | https://review.openstack.org/596086 | stable/ocata | > > | import zuul job settings from project-config | openstack/instack-undercloud | PASS | NEW | https://review.openstack.org/596106 | stable/pike | > > | import zuul job settings from project-config | openstack/instack-undercloud | FAILED | NEW | https://review.openstack.org/596122 | stable/queens | > > | import zuul job settings from project-config | openstack/os-apply-config | FAILED | NEW | https://review.openstack.org/596087 | stable/ocata | > > | import zuul job settings from project-config | openstack/os-apply-config | PASS | NEW | https://review.openstack.org/596107 | stable/pike | > > | import zuul job 
settings from project-config | openstack/os-apply-config | PASS | NEW | https://review.openstack.org/596123 | stable/queens | > > | import zuul job settings from project-config | openstack/os-collect-config | FAILED | NEW | https://review.openstack.org/596094 | stable/ocata | > > | import zuul job settings from project-config | openstack/os-collect-config | PASS | NEW | https://review.openstack.org/596108 | stable/pike | > > | import zuul job settings from project-config | openstack/os-collect-config | PASS | NEW | https://review.openstack.org/596124 | stable/queens | > > | import zuul job settings from project-config | openstack/os-net-config | FAILED | NEW | https://review.openstack.org/596095 | stable/ocata | > > | import zuul job settings from project-config | openstack/os-net-config | PASS | REVIEWED | https://review.openstack.org/596109 | stable/pike | > > | import zuul job settings from project-config | openstack/os-net-config | PASS | REVIEWED | https://review.openstack.org/596125 | stable/queens | > > | import zuul job settings from project-config | openstack/os-refresh-config | PASS | NEW | https://review.openstack.org/596096 | stable/ocata | > > | import zuul job settings from project-config | openstack/os-refresh-config | PASS | NEW | https://review.openstack.org/596110 | stable/pike | > > | import zuul job settings from project-config | openstack/os-refresh-config | PASS | NEW | https://review.openstack.org/596126 | stable/queens | > > | import zuul job settings from project-config | openstack/paunch | FAILED | NEW | https://review.openstack.org/596041 | master | > > | switch documentation job to new PTI | openstack/paunch | FAILED | NEW | https://review.openstack.org/596042 | master | > > | add python 3.6 unit test job | openstack/paunch | FAILED | NEW | https://review.openstack.org/596043 | master | > > | import zuul job settings from project-config | openstack/paunch | PASS | NEW | https://review.openstack.org/596111 | stable/pike | > > | import 
zuul job settings from project-config | openstack/paunch | FAILED | NEW | https://review.openstack.org/596127 | stable/queens | > > | import zuul job settings from project-config | openstack/puppet-pacemaker | FAILED | NEW | https://review.openstack.org/596044 | master | > > | switch documentation job to new PTI | openstack/puppet-pacemaker | FAILED | NEW | https://review.openstack.org/596045 | master | > > | import zuul job settings from project-config | openstack/puppet-tripleo | FAILED | NEW | https://review.openstack.org/596097 | stable/ocata | > > | import zuul job settings from project-config | openstack/puppet-tripleo | PASS | NEW | https://review.openstack.org/596112 | stable/pike | > > | import zuul job settings from project-config | openstack/puppet-tripleo | FAILED | NEW | https://review.openstack.org/596128 | stable/queens | > > | import zuul job settings from project-config | openstack/python-tripleoclient | FAILED | REVIEWED | https://review.openstack.org/596048 | master | > > | switch documentation job to new PTI | openstack/python-tripleoclient | PASS | REVIEWED | https://review.openstack.org/596049 | master | > > | add python 3.6 unit test job | openstack/python-tripleoclient | PASS | REVIEWED | https://review.openstack.org/596050 | master | > > | import zuul job settings from project-config | openstack/python-tripleoclient | FAILED | REVIEWED | https://review.openstack.org/596098 | stable/ocata | > > | import zuul job settings from project-config | openstack/python-tripleoclient | PASS | REVIEWED | https://review.openstack.org/596113 | stable/pike | > > | import zuul job settings from project-config | openstack/python-tripleoclient | FAILED | NEW | https://review.openstack.org/596129 | stable/queens | > > | import zuul job settings from project-config | openstack/python-tripleoclient | PASS | REVIEWED | https://review.openstack.org/596139 | stable/rocky | > > | import zuul job settings from project-config | openstack/tempest-tripleo-ui | UNKNOWN | 
APPROVED | https://review.openstack.org/596051 | master | > > | switch documentation job to new PTI | openstack/tempest-tripleo-ui | UNKNOWN | APPROVED | https://review.openstack.org/596052 | master | > > | import zuul job settings from project-config | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596053 | master | > > | switch documentation job to new PTI | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596054 | master | > > | add python 3.6 unit test job | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596055 | master | > > | import zuul job settings from project-config | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596099 | stable/ocata | > > | import zuul job settings from project-config | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596114 | stable/pike | > > | import zuul job settings from project-config | openstack/tripleo-common | FAILED | NEW | https://review.openstack.org/596130 | stable/queens | > > | import zuul job settings from project-config | openstack/tripleo-docs | PASS | REVIEWED | https://review.openstack.org/596058 | master | > > | switch documentation job to new PTI | openstack/tripleo-docs | PASS | NEW | https://review.openstack.org/596059 | master | > > | switch documentation job to new PTI | openstack/tripleo-heat-templates | FAILED | NEW | https://review.openstack.org/596061 | master | > > | import zuul job settings from project-config | openstack/tripleo-heat-templates | FAILED | NEW | https://review.openstack.org/596100 | stable/ocata | > > | import zuul job settings from project-config | openstack/tripleo-heat-templates | PASS | NEW | https://review.openstack.org/596115 | stable/pike | > > | import zuul job settings from project-config | openstack/tripleo-heat-templates | FAILED | NEW | https://review.openstack.org/596131 | stable/queens | > > | add python 3.6 unit test job | openstack/tripleo-image-elements | 
FAILED | NEW | https://review.openstack.org/596064 | master | > > | import zuul job settings from project-config | openstack/tripleo-image-elements | FAILED | NEW | https://review.openstack.org/596101 | stable/ocata | > > | import zuul job settings from project-config | openstack/tripleo-image-elements | PASS | NEW | https://review.openstack.org/596116 | stable/pike | > > | import zuul job settings from project-config | openstack/tripleo-image-elements | PASS | NEW | https://review.openstack.org/596132 | stable/queens | > > | import zuul job settings from project-config | openstack/tripleo-ipsec | PASS | NEW | https://review.openstack.org/596133 | stable/queens | > > | import zuul job settings from project-config | openstack/tripleo-puppet-elements | FAILED | NEW | https://review.openstack.org/596102 | stable/ocata | > > | import zuul job settings from project-config | openstack/tripleo-puppet-elements | FAILED | NEW | https://review.openstack.org/596117 | stable/pike | > > | import zuul job settings from project-config | openstack/tripleo-puppet-elements | FAILED | NEW | https://review.openstack.org/596134 | stable/queens | > > | import zuul job settings from project-config | openstack/tripleo-quickstart | FAILED | NEW | https://review.openstack.org/596071 | master | > > | switch documentation job to new PTI | openstack/tripleo-quickstart | FAILED | NEW | https://review.openstack.org/596072 | master | > > | import zuul job settings from project-config | openstack/tripleo-quickstart-extras | FAILED | NEW | https://review.openstack.org/596073 | master | > > | switch documentation job to new PTI | openstack/tripleo-quickstart-extras | FAILED | NEW | https://review.openstack.org/596074 | master | > > | import zuul job settings from project-config | openstack/tripleo-specs | PASS | NEW | https://review.openstack.org/596077 | master | > > | import zuul job settings from project-config | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596078 | 
master | > > | switch documentation job to new PTI | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596079 | master | > > | import zuul job settings from project-config | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596103 | stable/ocata | > > | import zuul job settings from project-config | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596118 | stable/pike | > > | import zuul job settings from project-config | openstack/tripleo-ui | PASS | REVIEWED | https://review.openstack.org/596135 | stable/queens | > > | import zuul job settings from project-config | openstack/tripleo-upgrade | PASS | NEW | https://review.openstack.org/596080 | master | > > | switch documentation job to new PTI | openstack/tripleo-upgrade | PASS | NEW | https://review.openstack.org/596081 | master | > > | import zuul job settings from project-config | openstack/tripleo-upgrade | PASS | NEW | https://review.openstack.org/596119 | stable/pike | > > | import zuul job settings from project-config | openstack/tripleo-upgrade | PASS | NEW | https://review.openstack.org/596136 | stable/queens | > > | import zuul job settings from project-config | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596082 | master | > > | switch documentation job to new PTI | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596083 | master | > > | add python 3.6 unit test job | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596084 | master | > > | import zuul job settings from project-config | openstack/tripleo-validations | FAILED | REVIEWED | https://review.openstack.org/596104 | stable/ocata | > > | import zuul job settings from project-config | openstack/tripleo-validations | PASS | REVIEWED | https://review.openstack.org/596120 | stable/pike | > > | import zuul job settings from project-config | openstack/tripleo-validations | PASS | REVIEWED | 
https://review.openstack.org/596137 | stable/queens | > > | | | | | | | > > | | | FAILED: 38 | APPROVED: 2 | | | > > | | | PASS: 47 | NEW: 63 | | | > > | | | UNKNOWN: 2 | REVIEWED: 22 | | | > > +----------------------------------------------+-------------------------------------+------------+--------------+-------------------------------------+---------------+ > > I went ahead and regenerated those, just to be safe. The full list > is below. I think it's probably better to take the new ones. > > +----------------------------------------------+-------------------------------------+-------------------------------------+---------------+ > | Subject | Repo | URL | Branch | > +----------------------------------------------+-------------------------------------+-------------------------------------+---------------+ > | fix tox python3 overrides | openstack-infra/tripleo-ci | https://review.openstack.org/588587 | master | > | import zuul job settings from project-config | openstack/ansible-role-k8s-glance | https://review.openstack.org/596021 | master | > | import zuul job settings from project-config | openstack/ansible-role-k8s-glance | https://review.openstack.org/597746 | master | > | import zuul job settings from project-config | openstack/ansible-role-k8s-keystone | https://review.openstack.org/596022 | master | > | import zuul job settings from project-config | openstack/ansible-role-k8s-keystone | https://review.openstack.org/597747 | master | > | import zuul job settings from project-config | openstack/ansible-role-k8s-mariadb | https://review.openstack.org/596023 | master | > | import zuul job settings from project-config | openstack/ansible-role-k8s-mariadb | https://review.openstack.org/597748 | master | > | import zuul job settings from project-config | openstack/dib-utils | https://review.openstack.org/596024 | master | > | import zuul job settings from project-config | openstack/dib-utils | https://review.openstack.org/597749 | master | > | fix tox python3 
overrides | openstack/instack | https://review.openstack.org/572904 | master | > | import zuul job settings from project-config | openstack/instack | https://review.openstack.org/597750 | master | > | add python 3.5 unit test job | openstack/instack | https://review.openstack.org/597751 | master | > | add python 3.6 unit test job | openstack/instack | https://review.openstack.org/597752 | master | > | import zuul job settings from project-config | openstack/instack | https://review.openstack.org/597794 | stable/ocata | > | import zuul job settings from project-config | openstack/instack | https://review.openstack.org/597808 | stable/pike | > | import zuul job settings from project-config | openstack/instack | https://review.openstack.org/597825 | stable/queens | > | import zuul job settings from project-config | openstack/instack | https://review.openstack.org/597842 | stable/rocky | > | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/596086 | stable/ocata | > | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/596106 | stable/pike | > | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/596122 | stable/queens | > | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/597753 | master | > | switch documentation job to new PTI | openstack/instack-undercloud | https://review.openstack.org/597754 | master | > | add python 3.6 unit test job | openstack/instack-undercloud | https://review.openstack.org/597755 | master | > | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/597795 | stable/ocata | > | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/597809 | stable/pike | > | import zuul job settings from project-config | 
openstack/instack-undercloud | https://review.openstack.org/597826 | stable/queens | > | import zuul job settings from project-config | openstack/instack-undercloud | https://review.openstack.org/597843 | stable/rocky | > | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/596087 | stable/ocata | > | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/596107 | stable/pike | > | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/596123 | stable/queens | > | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/597796 | stable/ocata | > | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/597810 | stable/pike | > | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/597827 | stable/queens | > | import zuul job settings from project-config | openstack/os-apply-config | https://review.openstack.org/597844 | stable/rocky | > | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/596094 | stable/ocata | > | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/596108 | stable/pike | > | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/596124 | stable/queens | > | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/597797 | stable/ocata | > | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/597811 | stable/pike | > | import zuul job settings from project-config | openstack/os-collect-config | https://review.openstack.org/597828 | stable/queens | > | import zuul job settings from project-config | 
openstack/os-collect-config | https://review.openstack.org/597845 | stable/rocky | > | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/596095 | stable/ocata | > | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/596109 | stable/pike | > | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/596125 | stable/queens | > | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/597756 | master | > | switch documentation job to new PTI | openstack/os-net-config | https://review.openstack.org/597757 | master | > | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/597798 | stable/ocata | > | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/597812 | stable/pike | > | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/597829 | stable/queens | > | import zuul job settings from project-config | openstack/os-net-config | https://review.openstack.org/597846 | stable/rocky | > | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/596096 | stable/ocata | > | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/596110 | stable/pike | > | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/596126 | stable/queens | > | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/597799 | stable/ocata | > | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/597813 | stable/pike | > | import zuul job settings from project-config | openstack/os-refresh-config | 
https://review.openstack.org/597830 | stable/queens | > | import zuul job settings from project-config | openstack/os-refresh-config | https://review.openstack.org/597847 | stable/rocky | > | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/596041 | master | > | switch documentation job to new PTI | openstack/paunch | https://review.openstack.org/596042 | master | > | add python 3.6 unit test job | openstack/paunch | https://review.openstack.org/596043 | master | > | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/596111 | stable/pike | > | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/596127 | stable/queens | > | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/597758 | master | > | switch documentation job to new PTI | openstack/paunch | https://review.openstack.org/597759 | master | > | add python 3.6 unit test job | openstack/paunch | https://review.openstack.org/597760 | master | > | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/597814 | stable/pike | > | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/597831 | stable/queens | > | import zuul job settings from project-config | openstack/paunch | https://review.openstack.org/597848 | stable/rocky | > | import zuul job settings from project-config | openstack/puppet-pacemaker | https://review.openstack.org/596044 | master | > | switch documentation job to new PTI | openstack/puppet-pacemaker | https://review.openstack.org/596045 | master | > | import zuul job settings from project-config | openstack/puppet-pacemaker | https://review.openstack.org/597761 | master | > | switch documentation job to new PTI | openstack/puppet-pacemaker | https://review.openstack.org/597762 | master | > | import zuul job settings from project-config | 
openstack/puppet-tripleo | https://review.openstack.org/596097 | stable/ocata | > | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/596112 | stable/pike | > | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/596128 | stable/queens | > | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/597763 | master | > | switch documentation job to new PTI | openstack/puppet-tripleo | https://review.openstack.org/597764 | master | > | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/597800 | stable/ocata | > | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/597815 | stable/pike | > | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/597832 | stable/queens | > | import zuul job settings from project-config | openstack/puppet-tripleo | https://review.openstack.org/597849 | stable/rocky | > | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/596048 | master | > | switch documentation job to new PTI | openstack/python-tripleoclient | https://review.openstack.org/596049 | master | > | add python 3.6 unit test job | openstack/python-tripleoclient | https://review.openstack.org/596050 | master | > | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/596098 | stable/ocata | > | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/596113 | stable/pike | > | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/596129 | stable/queens | > | import zuul job settings from project-config | openstack/python-tripleoclient | 
https://review.openstack.org/596139 | stable/rocky | > | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/597765 | master | > | switch documentation job to new PTI | openstack/python-tripleoclient | https://review.openstack.org/597766 | master | > | add python 3.6 unit test job | openstack/python-tripleoclient | https://review.openstack.org/597767 | master | > | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/597801 | stable/ocata | > | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/597816 | stable/pike | > | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/597833 | stable/queens | > | import zuul job settings from project-config | openstack/python-tripleoclient | https://review.openstack.org/597850 | stable/rocky | > | import zuul job settings from project-config | openstack/tempest-tripleo-ui | https://review.openstack.org/596051 | master | > | switch documentation job to new PTI | openstack/tempest-tripleo-ui | https://review.openstack.org/596052 | master | > | import zuul job settings from project-config | openstack/tempest-tripleo-ui | https://review.openstack.org/597768 | master | > | switch documentation job to new PTI | openstack/tempest-tripleo-ui | https://review.openstack.org/597769 | master | > | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/596053 | master | > | switch documentation job to new PTI | openstack/tripleo-common | https://review.openstack.org/596054 | master | > | add python 3.6 unit test job | openstack/tripleo-common | https://review.openstack.org/596055 | master | > | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/596099 | stable/ocata | > | import zuul job settings from project-config | 
openstack/tripleo-common | https://review.openstack.org/596114 | stable/pike | > | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/596130 | stable/queens | > | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/597770 | master | > | switch documentation job to new PTI | openstack/tripleo-common | https://review.openstack.org/597771 | master | > | add python 3.6 unit test job | openstack/tripleo-common | https://review.openstack.org/597772 | master | > | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/597802 | stable/ocata | > | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/597817 | stable/pike | > | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/597834 | stable/queens | > | import zuul job settings from project-config | openstack/tripleo-common | https://review.openstack.org/597851 | stable/rocky | > | switch documentation job to new PTI | openstack/tripleo-docs | https://review.openstack.org/596059 | master | > | import zuul job settings from project-config | openstack/tripleo-docs | https://review.openstack.org/597773 | master | > | switch documentation job to new PTI | openstack/tripleo-docs | https://review.openstack.org/597774 | master | > | switch documentation job to new PTI | openstack/tripleo-heat-templates | https://review.openstack.org/596061 | master | > | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/596100 | stable/ocata | > | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/596115 | stable/pike | > | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/596131 | stable/queens | > | import zuul job 
settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/597775 | master | > | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/597803 | stable/ocata | > | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/597818 | stable/pike | > | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/597835 | stable/queens | > | import zuul job settings from project-config | openstack/tripleo-heat-templates | https://review.openstack.org/597852 | stable/rocky | > | add python 3.6 unit test job | openstack/tripleo-image-elements | https://review.openstack.org/596064 | master | > | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/596101 | stable/ocata | > | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/596116 | stable/pike | > | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/596132 | stable/queens | > | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/597776 | master | > | switch documentation job to new PTI | openstack/tripleo-image-elements | https://review.openstack.org/597777 | master | > | add python 3.5 unit test job | openstack/tripleo-image-elements | https://review.openstack.org/597778 | master | > | add python 3.6 unit test job | openstack/tripleo-image-elements | https://review.openstack.org/597779 | master | > | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/597804 | stable/ocata | > | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/597819 | stable/pike | > | import zuul job settings 
from project-config | openstack/tripleo-image-elements | https://review.openstack.org/597836 | stable/queens |
> | import zuul job settings from project-config | openstack/tripleo-image-elements | https://review.openstack.org/597853 | stable/rocky |
> | import zuul job settings from project-config | openstack/tripleo-ipsec | https://review.openstack.org/597837 | stable/queens |
> | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/596102 | stable/ocata |
> | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/596117 | stable/pike |
> | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/596134 | stable/queens |
> | switch documentation job to new PTI | openstack/tripleo-puppet-elements | https://review.openstack.org/597780 | master |
> | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/597781 | master |
> | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/597805 | stable/ocata |
> | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/597820 | stable/pike |
> | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/597838 | stable/queens |
> | import zuul job settings from project-config | openstack/tripleo-puppet-elements | https://review.openstack.org/597854 | stable/rocky |
> | import zuul job settings from project-config | openstack/tripleo-quickstart | https://review.openstack.org/596071 | master |
> | switch documentation job to new PTI | openstack/tripleo-quickstart | https://review.openstack.org/596072 | master |
> | import zuul job settings from project-config | openstack/tripleo-quickstart | https://review.openstack.org/597782 | master |
> | switch documentation job to new PTI | openstack/tripleo-quickstart | https://review.openstack.org/597783 | master |
> | import zuul job settings from project-config | openstack/tripleo-quickstart-extras | https://review.openstack.org/596073 | master |
> | switch documentation job to new PTI | openstack/tripleo-quickstart-extras | https://review.openstack.org/596074 | master |
> | import zuul job settings from project-config | openstack/tripleo-quickstart-extras | https://review.openstack.org/597784 | master |
> | switch documentation job to new PTI | openstack/tripleo-quickstart-extras | https://review.openstack.org/597785 | master |
> | import zuul job settings from project-config | openstack/tripleo-specs | https://review.openstack.org/596077 | master |
> | import zuul job settings from project-config | openstack/tripleo-specs | https://review.openstack.org/597786 | master |
> | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/596078 | master |
> | switch documentation job to new PTI | openstack/tripleo-ui | https://review.openstack.org/596079 | master |
> | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/596103 | stable/ocata |
> | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/596118 | stable/pike |
> | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/596135 | stable/queens |
> | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/597787 | master |
> | switch documentation job to new PTI | openstack/tripleo-ui | https://review.openstack.org/597788 | master |
> | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/597806 | stable/ocata |
> | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/597821 | stable/pike |
> | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/597839 | stable/queens |
> | import zuul job settings from project-config | openstack/tripleo-ui | https://review.openstack.org/597855 | stable/rocky |
> | import zuul job settings from project-config | openstack/tripleo-upgrade | https://review.openstack.org/596080 | master |
> | switch documentation job to new PTI | openstack/tripleo-upgrade | https://review.openstack.org/596081 | master |
> | import zuul job settings from project-config | openstack/tripleo-upgrade | https://review.openstack.org/596119 | stable/pike |
> | import zuul job settings from project-config | openstack/tripleo-upgrade | https://review.openstack.org/596136 | stable/queens |
> | import zuul job settings from project-config | openstack/tripleo-upgrade | https://review.openstack.org/597789 | master |
> | switch documentation job to new PTI | openstack/tripleo-upgrade | https://review.openstack.org/597790 | master |
> | import zuul job settings from project-config | openstack/tripleo-upgrade | https://review.openstack.org/597823 | stable/pike |
> | import zuul job settings from project-config | openstack/tripleo-upgrade | https://review.openstack.org/597840 | stable/queens |
> | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/596082 | master |
> | switch documentation job to new PTI | openstack/tripleo-validations | https://review.openstack.org/596083 | master |
> | add python 3.6 unit test job | openstack/tripleo-validations | https://review.openstack.org/596084 | master |
> | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/596104 | stable/ocata |
> | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/597791 | master |
> | switch documentation job to new PTI | openstack/tripleo-validations | https://review.openstack.org/597792 | master |
> | add python 3.6 unit test job | openstack/tripleo-validations | https://review.openstack.org/597793 | master |
> | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/597807 | stable/ocata |
> | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/597824 | stable/pike |
> | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/597841 | stable/queens |
> | import zuul job settings from project-config | openstack/tripleo-validations | https://review.openstack.org/597856 | stable/rocky |
> +----------------------------------------------+-------------------------------------+-------------------------------------+---------------+

I found one error in the logs from proposing those patches so there are 2 more patches to consider for openstack/ansible-role-k8s-tripleo:

    https://review.openstack.org/598340 add .gitreview file
    https://review.openstack.org/598341 import zuul job settings from project-config

From kennelson11 at gmail.com  Thu Aug 30 20:13:49 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Thu, 30 Aug 2018 13:13:49 -0700
Subject: [openstack-dev] [TC][All] Guidelines for Organisations Contributing to OpenStack
Message-ID:

Hello,

Backstory on this topic: conversations started before the Vancouver Summit about drafting a base set of necessities that we could give to companies to help them understand what their employees will need if they are going to be effective contributors to OpenStack. There was initial brainstorming on this list of guidelines[1]. Then the Forum discussion in Vancouver[2] happened. Next there was the traditional, post-forum summary that went out to the dev list[3]. Lastly, we finally have a draft patch that needs some attention[4] to keep it moving forward. This is where we'd love some help: What are we missing?
Were there any stumbling blocks for you and your workplace? Anything we should wordsmith better? The hope is that once this is complete, it's something we can present to the board, new sponsors, or other companies interested in getting involved upstream on OpenStack along with the contributor guide[5] and contributor portal[6] to help people integrate into the community faster. Your thoughts and input would be greatly appreciated!

-Kendall (diablo_rojo)

[1] https://etherpad.openstack.org/p/Contributing_Organization_Guide
[2] https://etherpad.openstack.org/p/Reqs-for-Organisations-Contributing-to-OpenStack
[3] http://lists.openstack.org/pipermail/openstack-sigs/2018-June/000410.html
[4] https://review.openstack.org/#/c/578676/
[5] https://docs.openstack.org/contributors/
[6] https://www.openstack.org/community/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From duc.openstack at gmail.com  Thu Aug 30 20:18:53 2018
From: duc.openstack at gmail.com (Duc Truong)
Date: Thu, 30 Aug 2018 13:18:53 -0700
Subject: [openstack-dev] [senlin] weekly meeting
Message-ID:

Hi everyone,

We'll be having our weekly meeting today at 0530 UTC in the #senlin channel. The meeting agenda has been posted: https://wiki.openstack.org/wiki/Meetings/SenlinAgenda#Agenda_.282018-08-31_0530_UTC.29

Regards,
Duc

From zigo at debian.org  Thu Aug 30 20:49:26 2018
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 30 Aug 2018 22:49:26 +0200
Subject: [openstack-dev] [all] Bringing the community together (combine the lists!)
In-Reply-To: <5B883E1B.2070101@windriver.com>
References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com>
Message-ID: <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org>

On 08/30/2018 08:57 PM, Chris Friesen wrote:
> On 08/30/2018 11:03 AM, Jeremy Stanley wrote:
>
>> The proposal is simple: create a new openstack-discuss mailing list
>> to cover all the above sorts of discussion and stop using the other
>> four.
> > Do we want to merge usage and development onto one list? I really don't want this. I'm happy with things being sorted in multiple lists, even though I'm subscribed to multiples. Thomas From jimmy at openstack.org Thu Aug 30 20:54:42 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 30 Aug 2018 15:54:42 -0500 Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core In-Reply-To: References: Message-ID: <5B885992.90403@openstack.org> Thanks for all you do and have done for the OpenStack Community, Andy :) Andy McCrae wrote: > Now that Rocky is all but ready it seems like a good time! Since > changing roles I've not been able to keep up enough focus on reviews > and other obligations - so I think it's time to step aside as a core > reviewer. > > I want to say thanks to everybody in the community, I'm really proud > to see the work we've done and how the OSA team has grown. I've > learned a tonne from all of you - it's definitely been a great experience. > > Thanks, > Andy > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amy at demarco.com Thu Aug 30 21:00:25 2018 From: amy at demarco.com (Amy) Date: Thu, 30 Aug 2018 14:00:25 -0700 Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core In-Reply-To: <7558af44-e1aa-3fe9-4cf8-d9588f9d64a5@gmail.com> References: <7558af44-e1aa-3fe9-4cf8-d9588f9d64a5@gmail.com> Message-ID: Andy, We’ll miss you! Thanks so much for all your hard work and leadership. Don’t be a stranger! Amy (spotz) Sent from my iPhone > On Aug 30, 2018, at 11:05 AM, Ian Y. Choi wrote: > > Hello Andy, > > Thanks a lot for your work on OpenStack-Ansible team. 
> > It was very happy to collaborate with you as different teams (me: I18n team) during Ocata and Pike release cycles, > and I think I18n team now has better insight on OpenStack-Ansible thanks to the help from you and so many kind contributors. > > > With many thanks, > > /Ian > > Andy McCrae wrote on 8/31/2018 2:40 AM: >> Now that Rocky is all but ready it seems like a good time! Since changing roles I've not been able to keep up enough focus on reviews and other obligations - so I think it's time to step aside as a core reviewer. >> >> I want to say thanks to everybody in the community, I'm really proud to see the work we've done and how the OSA team has grown. I've learned a tonne from all of you - it's definitely been a great experience. >> >> Thanks, >> Andy >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Thu Aug 30 21:12:57 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 21:12:57 +0000 Subject: [openstack-dev] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <20180830211257.oa6hxd4pningzqf4@yuggoth.org> On 2018-08-31 01:13:58 +0800 (+0800), Rico Lin wrote: [...] > What needs to be done for this is full topic categories support > under `options` page so people get to filter emails properly. [...] 
Unfortunately, topic filtering is one of the MM2 features the Mailman community decided nobody used (or at least not enough to warrant preserving it in MM3). I do think we need to be consistent about tagging subjects to make client-side filtering more effective for people who want that, but if we _do_ want to be able to upgrade we shouldn't continue to rely on server-side filtering support in Mailman unless we can somehow work with them to help in reimplementing it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Aug 30 21:25:37 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 21:25:37 +0000 Subject: [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <5B883E1B.2070101@windriver.com> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> Message-ID: <20180830212536.yzirmxzxiqhciyby@yuggoth.org> On 2018-08-30 12:57:31 -0600 (-0600), Chris Friesen wrote: [...] > Do we want to merge usage and development onto one list? That > could be a busy list for someone who's just asking a simple usage > question. A counterargument though... projecting the number of unique posts to all four lists combined for this year (both based on trending for the past several years and also simply scaling the count of messages this year so far based on how many days are left) comes out roughly equal to the number of posts which were made to the general openstack mailing list in 2012. > Alternately, if we are going to merge everything then why not just > use the "openstack" mailing list since it already exists and there > are references to it on the web. This was an option we discussed in the "One Community" forum session as well. 
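The back-of-the-envelope projection described above (scale the year-to-date message count by the fraction of the year already elapsed) can be sketched as follows. The message count used here is an invented placeholder, not a real statistic from the OpenStack archives:

```python
# Linear full-year projection of mailing-list traffic, in the spirit of the
# estimate quoted above. The inputs are made-up placeholders, NOT real
# numbers from the list archives.
from datetime import date


def project_year_total(count_so_far: int, as_of: date) -> int:
    """Scale the year-to-date message count to a full-year estimate."""
    start_of_year = date(as_of.year, 1, 1)
    days_elapsed = (as_of - start_of_year).days + 1  # include the current day
    days_in_year = (date(as_of.year + 1, 1, 1) - start_of_year).days
    return round(count_so_far * days_in_year / days_elapsed)


# Hypothetical: 8000 messages across the four lists by the end of August.
estimate = project_year_total(8000, date(2018, 8, 30))
print(estimate)  # -> 12066 with these made-up inputs
```

With the actual per-list archive counts substituted in, this is the kind of calculation that produces the comparison against the 2012 general-list volume.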
There seemed to be a slight preference for making a new -discuss list and retiring the old general one. I see either as a potential solution here.

> (Or do you want to force people to move to something new to make them
> recognize that something has changed?)

That was one of the arguments made. Also I believe we have a *lot* of "black hole" subscribers who aren't actually following that list but whose addresses aren't bouncing new posts we send them for any of a number of possible reasons.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From fungi at yuggoth.org  Thu Aug 30 21:33:41 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 30 Aug 2018 21:33:41 +0000
Subject: [openstack-dev] [Openstack-operators] [all] Bringing the community together (combine the lists!)
In-Reply-To: <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org>
References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org>
Message-ID: <20180830213341.yuxyen2elx2c3is4@yuggoth.org>

On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote:
[...]
> I really don't want this. I'm happy with things being sorted in
> multiple lists, even though I'm subscribed to multiples.

I understand where you're coming from, and I used to feel similarly. I was accustomed to communities where developers had one mailing list, users had another, and whenever a user asked a question on the developer mailing list they were told to go away and bother the user mailing list instead (not even a good, old-fashioned "RTFM" for their trouble). You're probably intimately familiar with at least one of these communities. ;) As the years went by, it's become apparent to me that this is actually an antisocial behavior pattern, and actively harmful to the user base.
I believe OpenStack actually wants users to see the development work which is underway, come to understand it, and become part of that process. Requiring them to have their conversations elsewhere sends the opposite message. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jimmy at openstack.org Thu Aug 30 21:45:17 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 30 Aug 2018 16:45:17 -0500 Subject: [openstack-dev] [Openstack-operators] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830213341.yuxyen2elx2c3is4@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> Message-ID: <5B88656D.1020209@openstack.org> Jeremy Stanley wrote: > On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote: > [...] >> I really don't want this. I'm happy with things being sorted in >> multiple lists, even though I'm subscribed to multiples. IMO this is easily solved by tagging. If emails are properly tagged (which they typically are), most email clients will properly sort on rules and you can just auto-delete if you're 100% not interested in a particular topic. > SNIP > As the years went by, it's become apparent to me that this is > actually an antisocial behavior pattern, and actively harmful to the > user base. I believe OpenStack actually wants users to see the > development work which is underway, come to understand it, and > become part of that process. Requiring them to have their > conversations elsewhere sends the opposite message. I really and truly believe that it has become a blocker for our community. 
Conversations sent to multiple lists inherently splinter and we end up with different groups coming up with different solutions for a single problem: literally the opposite of the desired result of sending things to multiple lists. I believe bringing these groups together, with tags, will solve a lot of immediate problems. It will also have an added bonus of allowing people "catching up" on the community to look to a single place for a thread instead of 1-5 separate lists. It's better in both the short and long term.

Cheers,
Jimmy

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mrhillsman at gmail.com  Thu Aug 30 23:08:56 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Thu, 30 Aug 2018 18:08:56 -0500
Subject: [openstack-dev] [Openstack-operators] [all] Bringing the community together (combine the lists!)
In-Reply-To: <5B88656D.1020209@openstack.org>
References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> <5B88656D.1020209@openstack.org>
Message-ID:

I think the more we can reduce the ML sprawl the better. I also recall us discussing having some documentation or way of notifying net new signups of how to interact with the ML successfully. An example was having some general guidelines around tagging. Also as a maintainer for at least one of the mailing lists over the past 6+ months I have to inquire about how that will happen going forward which again could be part of this documentation/initial message. Also there are many times I miss messages that for one reason or another do not hit the proper mailing list.
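The tag-based client-side sorting suggested in this thread can be sketched as a minimal subject-line router. The tags and folder names below are hypothetical personal rules, not an official OpenStack convention:

```python
# Minimal sketch of client-side filtering on bracketed subject tags, the
# mechanism most mail clients' rules implement. The RULES mapping is a
# hypothetical example of one person's routing preferences.
import re

TAG_RE = re.compile(r"\[([^\]]+)\]")

# Hypothetical routing rules: the first tag with a matching rule wins.
RULES = {
    "nova": "lists/nova",
    "tc": "lists/governance",
    "openstack-dev": "lists/dev-misc",
}


def extract_tags(subject: str) -> list[str]:
    """Return all bracketed tags found in a subject line, lowercased."""
    return [tag.lower() for tag in TAG_RE.findall(subject)]


def route(subject: str, default: str = "inbox") -> str:
    """Pick a destination folder from the first recognized subject tag."""
    for tag in extract_tags(subject):
        if tag in RULES:
            return RULES[tag]
    return default


print(route("[openstack-dev] [nova] Scheduler question"))  # -> lists/dev-misc
```

A real setup would express the same rules in a client's filter dialog or a server-side Sieve script, but the matching logic is just this: split out the tags, compare against a personal list, file accordingly.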
I mean we could dive into the minutia or start up the mountain of why keeping things the way they are is worse than making this change and vice versa but I am willing to bet there are more advantages than disadvantages.

On Thu, Aug 30, 2018 at 4:45 PM Jimmy McArthur wrote:

> Jeremy Stanley wrote:
> > On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote:
> [...]
> > I really don't want this. I'm happy with things being sorted in
> multiple lists, even though I'm subscribed to multiples.
>
> IMO this is easily solved by tagging. If emails are properly tagged
> (which they typically are), most email clients will properly sort on rules
> and you can just auto-delete if you're 100% not interested in a particular
> topic.
>

Yes, there are definitely ways to go about discarding unwanted mail automagically or not seeing it at all. And to be honest I think if we are relying on so many separate MLs to do that for us it is better community wide for the responsibility for that to be on individuals. It becomes very tiring and inefficient time wise to have to go through the various issues of the way things are now; cross-posting is a great example that is steadily getting worse.

> SNIP
> As the years went by, it's become apparent to me that this is
> actually an antisocial behavior pattern, and actively harmful to the
> user base. I believe OpenStack actually wants users to see the
> development work which is underway, come to understand it, and
> become part of that process. Requiring them to have their
> conversations elsewhere sends the opposite message.
>
> I really and truly believe that it has become a blocker for our
> community. Conversations sent to multiple lists inherently splinter and we
> end up with different groups coming up with different solutions for a
> single problem. Literally the opposite desired result of sending things to
> multiple lists. I believe bringing these groups together, with tags, will
> solve a lot of immediate problems.
It will also have an added bonus of > allowing people "catching up" on the community to look to a single place > for a thread i/o 1-5 separate lists. It's better in both the short and > long term. > +1 > > Cheers, > Jimmy > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Aug 30 23:24:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 19:24:23 -0400 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! Message-ID: <1535671314-sup-5525@lrrr.local> Below is the list of project teams that have not yet started migrating their zuul configuration. If you're ready to go, please respond to this email to let us know so we can start proposing patches. 
Doug

| adjutant          |   3 repos |
| barbican          |   5 repos |
| Chef OpenStack    |  19 repos |
| cinder            |   6 repos |
| cloudkitty        |   5 repos |
| I18n              |   2 repos |
| Infrastructure    | 158 repos |
| loci              |   1 repos |
| nova              |   6 repos |
| OpenStack Charms  |  80 repos |
| Packaging-rpm     |   4 repos |
| Puppet OpenStack  |  47 repos |
| Quality Assurance |  22 repos |
| Telemetry         |   8 repos |
| trove             |   5 repos |

From s at cassiba.com  Thu Aug 30 23:50:30 2018
From: s at cassiba.com (Samuel Cassiba)
Date: Thu, 30 Aug 2018 16:50:30 -0700
Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon!
In-Reply-To: <1535671314-sup-5525@lrrr.local>
References: <1535671314-sup-5525@lrrr.local>
Message-ID:

On Thu, Aug 30, 2018 at 4:24 PM, Doug Hellmann wrote:
> Below is the list of project teams that have not yet started migrating
> their zuul configuration. If you're ready to go, please respond to this
> email to let us know so we can start proposing patches.
>
> Doug
>
> | adjutant          |   3 repos |
> | barbican          |   5 repos |
> | Chef OpenStack    |  19 repos |
> | cinder            |   6 repos |
> | cloudkitty        |   5 repos |
> | I18n              |   2 repos |
> | Infrastructure    | 158 repos |
> | loci              |   1 repos |
> | nova              |   6 repos |
> | OpenStack Charms  |  80 repos |
> | Packaging-rpm     |   4 repos |
> | Puppet OpenStack  |  47 repos |
> | Quality Assurance |  22 repos |
> | Telemetry         |   8 repos |
> | trove             |   5 repos |

On behalf of Chef OpenStack, that one is good to go.

Best,
Samuel (scas)

From adriant at catalyst.net.nz  Fri Aug 31 00:02:08 2018
From: adriant at catalyst.net.nz (Adrian Turjak)
Date: Fri, 31 Aug 2018 12:02:08 +1200
Subject: [openstack-dev] [keystone] [barbican] Keystone's use of Barbican ?
In-Reply-To:
References: <67f51eb3-d278-0e43-0d2a-bd3d3f7639ae@redhat.com> <173ad63d-e69c-735b-c286-c8a98a024aad@catalyst.net.nz>
Message-ID: <9afb81e7-2727-cb49-417a-46b6e0ae3110@catalyst.net.nz>

Oh I was literally just thinking about the 'credential' type key value items we store in the Keystone DB. Rather than storing them in the Keystone DB and worrying about encryption (and encryption keys) in Keystone around what is otherwise a plaintext secret, just offload that to a service built specifically for handling those (which Keystone isn't).

My only real worry then is whether tying MFA credential values to an external service is a great idea, as now Keystone and Barbican have to be alive for auth to occur (plus auth could be marginally slower). Although by using an external service security could potentially be enhanced, and deployers don't need to worry about credential encryption key rotation (and re-encryption of credentials) in Keystone.

As for fernet keys in Barbican... that does sound like a fairly terrifying chicken and egg problem. Although Castellan with a Vault plugin sounds doable (not tied back to Keystone's own auth), and could actually be useful for multi-host keystone deployments since Vault now handles your key replication/distribution provided Keystone rotates keys into it.

On 31/08/18 1:50 AM, Lance Bragstad wrote:
> This topic has surfaced intermittently ever since keystone implemented
> fernet tokens in Kilo. An initial idea was written down shortly
> afterwards [0], then we targeted it to Ocata [1], and removed it from the
> backlog around the Pike timeframe [2]. The commit message of [2]
> includes meeting links. The discussion usually tripped attempting to
> abstract enough of the details about rotation and setup of keys to
> work in all cases.
>
> [0] https://review.openstack.org/#/c/311268/
> [1] https://review.openstack.org/#/c/363065/
> [2] https://review.openstack.org/#/c/439194/
>
> On Thu, Aug 30, 2018 at 5:02 AM Juan Antonio Osorio Robles wrote:
>
> FWIW, instead of barbican, castellan could be used as a key manager.
>
> On 08/30/2018 12:23 PM, Adrian Turjak wrote:
>>
>> On 30/08/18 6:29 AM, Lance Bragstad wrote:
>>>
>>> Is that what is being described here?
>>> https://docs.openstack.org/keystone/pike/admin/identity-credential-encryption.html
>>>
>>> This is a separate mechanism for storing secrets, not
>>> necessarily passwords (although I agree the term credentials
>>> automatically makes people assume passwords). This is used if
>>> consuming keystone's native MFA implementation. For example,
>>> storing a shared secret between the user and keystone that is
>>> provided as an additional authentication method along with a
>>> username and password combination.
>> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Fri Aug 31 00:03:35 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 31 Aug 2018 10:03:35 +1000 Subject: [openstack-dev] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830211257.oa6hxd4pningzqf4@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180830211257.oa6hxd4pningzqf4@yuggoth.org> Message-ID: <20180831000334.GR26778@thor.bakeyournoodle.com> On Thu, Aug 30, 2018 at 09:12:57PM +0000, Jeremy Stanley wrote: > On 2018-08-31 01:13:58 +0800 (+0800), Rico Lin wrote: > [...] > > What needs to be done for this is full topic categories support > > under `options` page so people get to filter emails properly. > [...] > > Unfortunately, topic filtering is one of the MM2 features the > Mailman community decided nobody used (or at least not enough to > warrant preserving it in MM3). 
> I do think we need to be consistent
> about tagging subjects to make client-side filtering more effective
> for people who want that, but if we _do_ want to be able to upgrade
> we shouldn't continue to rely on server-side filtering support in
> Mailman unless we can somehow work with them to help in
> reimplementing it.

The suggestion is to implement it as a 3rd party plugin or work with the mm community to implement: https://wiki.mailman.psf.io/DEV/Dynamic%20Sublists

So if we decide we really want that in mm3 we have options.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL:

From tony at bakeyournoodle.com  Fri Aug 31 00:10:34 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Fri, 31 Aug 2018 10:10:34 +1000
Subject: [openstack-dev] [all][tc] Nominations now open!
Message-ID: <20180831001034.GS26778@thor.bakeyournoodle.com>

Nominations for the Technical Committee positions (6 positions) are now open and will remain open until Sep 06, 2018 23:45 UTC.

All nominations must be submitted as a text file to the openstack/election repository as explained on the election website[1]. Please note that the name of the file should match an email address in the foundation member profile of the candidate. Also, for TC candidates, election officials refer to the community member profiles at [2], so please take this opportunity to ensure that your profile contains current information.

Candidates for the Technical Committee Positions: Any Foundation individual member can propose their candidacy for an available, directly-elected TC seat.

The election will be held from Sep 18, 2018 23:45 UTC through to Sep 27, 2018 23:45 UTC.
The electorate are the Foundation individual members that are also committers for one of the official teams[3] over the Aug 11, 2017 00:00 UTC - Aug 30, 2018 00:00 UTC timeframe (Queens to Rocky), as well as the extra-ATCs who are acknowledged by the TC[4].

Please see the website[5] for additional details about this election. Please find below the timeline:

    TC nomination starts @ Aug 30, 2018 23:45 UTC
    TC nomination ends @ Sep 06, 2018 23:45 UTC
    TC campaigning starts @ Sep 06, 2018 23:45 UTC
    TC campaigning ends @ Sep 18, 2018 23:45 UTC
    TC elections starts @ Sep 18, 2018 23:45 UTC
    TC elections ends @ Sep 27, 2018 23:45 UTC

If you have any questions, please ask them on the mailing list or contact the election officials[6].

Thank you,

[1] http://governance.openstack.org/election/#how-to-submit-your-candidacy
[2] http://www.openstack.org/community/members/
[3] https://governance.openstack.org/tc/reference/projects/
[4] https://releases.openstack.org/rocky/schedule.html#p-extra-atcs
[5] https://governance.openstack.org/election/
[6] http://governance.openstack.org/election/#election-officials

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL:

From fungi at yuggoth.org  Fri Aug 31 00:21:22 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 31 Aug 2018 00:21:22 +0000
Subject: [openstack-dev] [Openstack-sigs] [Openstack-operators] [all] Bringing the community together (combine the lists!)
In-Reply-To:
References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> <5B88656D.1020209@openstack.org>
Message-ID: <20180831002121.ch76mvqeskplqew2@yuggoth.org>

On 2018-08-30 18:08:56 -0500 (-0500), Melvin Hillsman wrote:
[...]
> I also recall us discussing having some documentation or way of > notifying net new signups of how to interact with the ML > successfully. An example was having some general guidelines around > tagging. Also as a maintainer for at least one of the mailing > lists over the past 6+ months I have to inquire about how that > will happen going forward which again could be part of this > documentation/initial message. [...] Mailman supports customizable welcome messages for new subscribers, so the *technical* implementation there is easy. I do think (and failed to highlight it explicitly earlier I'm afraid) that this proposal comes with an expectation that we provide recommended guidelines for mailing list use/etiquette appropriate to our community. It could be contained entirely within the welcome message, or merely linked to a published document (and whether that's best suited for the Infra Manual or New Contributor Guide or somewhere else entirely is certainly up for debate), or even potentially both. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From adriant at catalyst.net.nz Fri Aug 31 01:16:41 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Fri, 31 Aug 2018 13:16:41 +1200 Subject: [openstack-dev] [keystone] [barbican] Keystone's use of Barbican ? 
In-Reply-To: <9afb81e7-2727-cb49-417a-46b6e0ae3110@catalyst.net.nz> References: <67f51eb3-d278-0e43-0d2a-bd3d3f7639ae@redhat.com> <173ad63d-e69c-735b-c286-c8a98a024aad@catalyst.net.nz> <9afb81e7-2727-cb49-417a-46b6e0ae3110@catalyst.net.nz> Message-ID: <1ab9d425-2b2d-1ea6-4dd3-de111ba33e51@catalyst.net.nz> Actually, now that I think about it, another problem is that (at least in our case) Keystone is really a cluster-wide service present across regions, so if it were to use Barbican (or Vault for that matter) then the secret store service would also need to be cluster-wide and span all regions. Our current plan for our deployment of Barbican is per region. Is that the norm? Because if so, then it kind of means using it for Keystone becomes less useful. On 31/08/18 12:02 PM, Adrian Turjak wrote: > > Oh I was literally just thinking about the 'credential' type key value > items we store in the Keystone DB. Rather than storing them in the > Keystone db and worrying about encryption (and encryption keys) in > Keystone around what is otherwise a plaintext secret, just offload > that to a service specific for handling those (which Keystone isn't). > > My only real worry then is whether tying MFA credential values to an > external service is a great idea, as now Keystone and Barbican have to > be alive for auth to occur (plus auth could be marginally slower). > Although by using an external service security could potentially be > enhanced, and deployers don't need to worry about credential encryption > key rotation (and re-encryption of credentials) in Keystone. > > As for fernet keys in Barbican... that does sound like a fairly > terrifying chicken-and-egg problem. Although Castellan with a Vault > plugin sounds doable (not tied back to Keystone's own auth), and could > actually be useful for multi-host keystone deployments since Vault now > handles your key replication/distribution provided Keystone rotates > keys into it.
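The rotation-and-re-encryption step discussed above can be sketched with the `cryptography` package's MultiFernet, the same fernet primitive Keystone builds on. This is only a minimal illustration, not Keystone's actual rotation code; the credential value and the two-key setup are hypothetical:

```python
from cryptography.fernet import Fernet, MultiFernet

# Hypothetical two-key setup: new_key has just been promoted to primary,
# old_key is still accepted for decryption (a much-simplified version of
# Keystone's primary/secondary fernet key layout).
old_key = Fernet(Fernet.generate_key())
new_key = Fernet(Fernet.generate_key())

# A credential encrypted while old_key was still the primary key.
blob = old_key.encrypt(b"mfa-shared-secret")

# MultiFernet encrypts with the first key and decrypts with any listed key.
mf = MultiFernet([new_key, old_key])
assert mf.decrypt(blob) == b"mfa-shared-secret"

# rotate() re-encrypts under the newest key: the "re-encryption of
# credentials" that rotation otherwise forces deployers to manage.
rotated = mf.rotate(blob)
assert new_key.decrypt(rotated) == b"mfa-shared-secret"
```

Once old_key is dropped from the list, anything not yet rotated stops decrypting, which is why key rotation and credential re-encryption have to move in lockstep.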
> > On 31/08/18 1:50 AM, Lance Bragstad wrote: >> This topic has surfaced intermittently ever since keystone >> implemented fernet tokens in Kilo. An initial idea was written down >> shortly afterwords [0], then we targeted it to Ocata [1], and removed >> from the backlog around the Pike timeframe [2]. The commit message of >> [2] includes meeting links. The discussion usually tripped attempting >> to abstract enough of the details about rotation and setup of keys to >> work in all cases. >> >> [0] https://review.openstack.org/#/c/311268/ >> [1] https://review.openstack.org/#/c/363065/ >> [2] https://review.openstack.org/#/c/439194/ >> >> On Thu, Aug 30, 2018 at 5:02 AM Juan Antonio Osorio Robles >> > wrote: >> >> FWIW, instead of barbican, castellan could be used as a key manager. >> >> >> On 08/30/2018 12:23 PM, Adrian Turjak wrote: >>> >>> >>> On 30/08/18 6:29 AM, Lance Bragstad wrote: >>>> >>>> Is that what is being described here ?  >>>> https://docs.openstack.org/keystone/pike/admin/identity-credential-encryption.html >>>> >>>> >>>> This is a separate mechanism for storing secrets, not >>>> necessarily passwords (although I agree the term credentials >>>> automatically makes people assume passwords). This is used if >>>> consuming keystone's native MFA implementation. For example, >>>> storing a shared secret between the user and keystone that is >>>> provided as a additional authentication method along with a >>>> username and password combination. >>>>   >>> >>> Is there any interest or plans to potentially allow Keystone's >>> credential store to use Barbican as a storage provider? >>> Encryption already is better than nothing, but if you already >>> have (or will be deploying) a proper secret store with a >>> hardware backend (or at least hardware stored encryption keys) >>> then it might make sense to throw that in Barbican. >>> >>> Or is this also too much of a chicken/egg problem? 
How safe is >>> it to rely on Barbican availability for MFA secrets and auth? >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiang.edison at gmail.com Fri Aug 31 01:27:54 2018 From: xiang.edison at gmail.com (Edison Xiang) Date: Fri, 31 Aug 2018 09:27:54 +0800 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: <1535651089-sup-4093@lrrr.local> References: <1535651089-sup-4093@lrrr.local> Message-ID: Hey Doug, Thanks for your reply. Very good question. It does not conflict with the current API Documents work that has already been done. We can use some tools to generate Open API schema from the existing machine-readable API-def in every project, like nova [1]. We can still use the existing tools to generate the API Documents website.
[1] https://github.com/openstack/nova/tree/master/api-ref/source Best Regards, Edison Xiang On Fri, Aug 31, 2018 at 1:46 AM Doug Hellmann wrote: > Excerpts from Edison Xiang's message of 2018-08-30 14:08:12 +0800: > > Hey dims, > > > > Thanks your reply. Your suggestion is very important. > > > > > what would be the impact to projects? > > > what steps they would have to take? > > > > We can launch a project to publish OpenStack Projects APIs Schema for > users > > and developers. > > But now OpenStack Projects have no APIs Schema definition. > > Open API will not impact OpenStack Projects features they have, > > but we need some volunteers to define every project APIs Schema by Open > API > > 3.0. > > > > > Do we have a sample/mock API where we can show that the Action and > > Microversions can be declared to reflect reality and it can actually work > > with the generated code? > > Yeah, you can copy this yaml [1] into editor [2] to generate server or > > client codes or try it out. > > We can do more demos later. > > > > [1] > > > https://github.com/edisonxiang/OpenAPI-Specification/blob/master/examples/v3.0/petstore.yaml > > [2] https://editor.swagger.io > > > > Best Regards, > > Edison Xiang > > How does this proposal relate to the work that has already been > done to build the API guide > https://developer.openstack.org/api-guide/quick-start/ documentation? > > Doug > > > > > On Wed, Aug 29, 2018 at 6:31 PM Davanum Srinivas > wrote: > > > > > Edison, > > > > > > This is definitely a step in the right direction if we can pull it off. > > > > > > Given the previous experiences and the current situation of how and > where > > > we store the information currently and how we generate the website for > the > > > API(s), can you please outline > > > - what would be the impact to projects? > > > - what steps they would have to take? > > > > > > Also, the whole point of having these definitions is that the generated > > > code works. 
Do we have a sample/mock API where we can show that the > Action > > > and Microversions can be declared to reflect reality and it can > actually > > > work with the generated code? > > > > > > Thanks, > > > Dims > > > > > > On Wed, Aug 29, 2018 at 2:37 AM Edison Xiang > > > wrote: > > > > > >> Hi team, > > >> > > >> As we know, Open API 3.0 was released in July 2017, about one > year > > >> ago. > > >> Open API 3.0 supports some new features like anyof, oneof and allof > over > > >> Open API 2.0 (Swagger 2.0). > > >> Now OpenStack projects do not support Open API. > > >> Also I found some old emails in the Mail List about supporting Open > API > > >> 2.0 in OpenStack. > > >> > > >> Some limitations are mentioned in the Mail List for OpenStack API: > > >> 1. The POST */action APIs. > > >> These APIs exist in lots of projects like nova, cinder. > > >> These APIs have the same URI but the responses will be different > when > > >> the request is different. > > >> 2. Micro versions. > > >> These are controlled via headers, which are sometimes used to > > >> describe behavioral changes in an API, not just request/response > schema > > >> changes. > > >> > > >> About the first limitation, we can find the solution in the Open API > 3.0. > > >> The example [2] shows that we can define different request/response in > > >> the same URI by the anyof feature in Open API 3.0. > > >> > > >> About the micro versions problem, I think it is not a limitation > related > > >> to a particular API standard. > > >> We can list all micro versions API schema files in one directory like > > >> nova/V2, > > >> or we can list the schema changes between micro versions as the tempest > > >> project did [3]. > > >> > > >> Based on Open API 3.0, it can bring lots of benefits for the OpenStack > > >> Community and does not impact the current features the Community has.
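The POST */action limitation described in the quoted message, one URI accepting several mutually exclusive request bodies, is exactly what OpenAPI 3.0's oneOf expresses. A stdlib-only Python sketch of that matching rule follows; the action names and required fields are illustrative stand-ins rather than real nova schemas, and match_action is a hypothetical helper:

```python
# A oneOf-style request-body check for the POST */action pattern, where a
# single URI accepts several mutually exclusive request shapes. The action
# names and required fields below are illustrative, not nova's definitions.
ACTION_SCHEMAS = {
    "reboot": {"type"},       # e.g. {"reboot": {"type": "SOFT"}}
    "resize": {"flavorRef"},  # e.g. {"resize": {"flavorRef": "2"}}
}

def match_action(body):
    """Return the single matching action name, mimicking oneOf semantics:
    exactly one candidate schema must match, otherwise the body is invalid."""
    matches = [
        name for name, required in ACTION_SCHEMAS.items()
        if isinstance(body.get(name), dict) and required <= body[name].keys()
    ]
    return matches[0] if len(matches) == 1 else None

# Valid body matching exactly one schema; invalid bodies match none.
assert match_action({"reboot": {"type": "SOFT"}}) == "reboot"
assert match_action({"bogus": {}}) is None
```

An OpenAPI 3.0 document would put the equivalent schemas under the request body as a oneOf list, letting validators and code generators make the same exactly-one-match decision.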
> > >> For example, we can automatically generate API documents, different > > >> language Clients(SDK) maybe for different micro versions, > > >> and generate cloud tool adapters for OpenStack, like ansible module, > > >> terraform providers and so on. > > >> Also we can make an API UI to provide an online and visible API > search, > > >> API Calling for every OpenStack API. > > >> 3rd party developers can also do some self-defined development. > > >> > > >> [1] https://github.com/OAI/OpenAPI-Specification > > >> [2] > > >> > https://github.com/edisonxiang/OpenAPI-Specification/blob/master/examples/v3.0/petstore.yaml#L94-L109 > > >> [3] > > >> > https://github.com/openstack/tempest/tree/master/tempest/lib/api_schema/response/compute > > >> > > >> Best Regards, > > >> Edison Xiang > > >> > > >> > __________________________________________________________________________ > > >> OpenStack Development Mailing List (not for usage questions) > > >> Unsubscribe: > > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >> > > > > > > > > > -- > > > Davanum Srinivas :: https://twitter.com/dims > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From prometheanfire at gentoo.org Fri Aug 31 01:52:46 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 30 Aug 2018 20:52:46 -0500 Subject: [openstack-dev] Bumping eventlet to 0.24.1 In-Reply-To: <20180823145013.vzt46kgd7d7lkmkj@gentoo.org> References: <20180823145013.vzt46kgd7d7lkmkj@gentoo.org> Message-ID: <20180831015246.z4zvjp3lkb2yegis@gentoo.org> On 18-08-23 09:50:13, Matthew Thode wrote: > This is your warning, if you have concerns please comment in > https://review.openstack.org/589382 . cross tests pass, so that's a > good sign... atm this is only for stein. > Consider yourself on notice: https://review.openstack.org/589382 is planned to be merged on Monday. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From prometheanfire at gentoo.org Fri Aug 31 03:10:04 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 30 Aug 2018 22:10:04 -0500 Subject: [openstack-dev] [kolla][tripleo][oslo][all] Bumping eventlet to 0.24.1 In-Reply-To: <20180831015246.z4zvjp3lkb2yegis@gentoo.org> References: <20180823145013.vzt46kgd7d7lkmkj@gentoo.org> <20180831015246.z4zvjp3lkb2yegis@gentoo.org> Message-ID: <20180831031004.nou6m2y3dfcoxadk@gentoo.org> On 18-08-30 20:52:46, Matthew Thode wrote: > On 18-08-23 09:50:13, Matthew Thode wrote: > > This is your warning, if you have concerns please comment in > > https://review.openstack.org/589382 . cross tests pass, so that's a > > good sign... atm this is only for stein. > > > > Consider yourself on notice, https://review.openstack.org/589382 is > planned to be merged on monday. > A bit more of a follow-up, since that was so dry. There are some projects that have not branched (mainly cycle-trailing and plugins). There has historically been some breakage with each eventlet update, and unfortunately this one is not expected to be much different.
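Whether a candidate eventlet release clears a requirements specifier like the ones in the attached survey can be checked mechanically. A quick sketch with the `packaging` library; the specifier below mirrors the global-requirements exclusions discussed in this thread, so treat it as illustrative:

```python
from packaging.specifiers import SpecifierSet

# Global-requirements-style exclusions for eventlet (illustrative copy of
# the blacklisted releases mentioned in this thread).
spec = SpecifierSet("!=0.18.3,!=0.20.1,!=0.21.0,!=0.23.0")

# 0.23.0 is explicitly excluded, while the proposed 0.24.1 is allowed.
assert "0.23.0" not in spec
assert "0.24.1" in spec
```

The same membership check is what pip performs when resolving each project's requirements.txt line against the available eventlet releases.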
Currently there are known issues with oslo.service but they look solvable. A list of all projects using eventlet is attached. The full list of non-branched projects will be at the bottom of this message, but the projects that I think should be more careful are the following. kolla kolla-ansible heat-agents heat-dashboard tripleo-ipsec the rest of the repos seem to be plugins, which I'm personally less concerned about, but should still be branched (preferably sooner rather than later). ansible-role-container-registry ansible-role-redhat-subscription ansible-role-tripleo-modify-image barbican-tempest-plugin blazar-tempest-plugin cinder-tempest-plugin cloudkitty-tempest-plugin congress-tempest-plugin designate-tempest-plugin devstack-plugin-amqp1 devstack-plugin-kafka ec2api-tempest-plugin heat-agents heat-dashboard heat-tempest-plugin ironic-tempest-plugin keystone-tempest-plugin kolla-ansible kolla kuryr-tempest-plugin magnum-tempest-plugin manila-tempest-plugin mistral-tempest-plugin monasca-kibana-plugin monasca-tempest-plugin murano-tempest-plugin networking-generic-switch-tempest-plugin neutron-tempest-plugin octavia-tempest-plugin oswin-tempest-plugin patrole release-test sahara-tests senlin-tempest-plugin solum-tempest-plugin telemetry-tempest-plugin tempest-tripleo-ui tempest tripleo-ipsec trove-tempest-plugin vitrage-tempest-plugin watcher-tempest-plugin zaqar-tempest-plugin zun-tempest-plugin -- Matthew Thode (prometheanfire) -------------- next part -------------- +----------------------------------------+--------------------------------------------------------------------------------------+------+--------------------------------------------------------------------------------+ | Repository | Filename | Line | Text | +----------------------------------------+--------------------------------------------------------------------------------------+------+--------------------------------------------------------------------------------+ | airship-drydock | 
requirements-lock.txt | 12 | eventlet==0.23.0 | | airship-promenade | requirements-frozen.txt | 17 | eventlet==0.23.0 | | apmec | requirements.txt | 11 | eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT | | astara | requirements.txt | 5 | eventlet!=0.18.3,>=0.18.2 # MIT | | astara-appliance | requirements.txt | 7 | eventlet!=0.18.3,>=0.18.2 # MIT | | astara-horizon | test-requirements.txt | 9 | eventlet!=0.18.3,>=0.18.2 # MIT | | barbican | requirements.txt | 8 | eventlet>=0.18.2,!=0.18.3,!=0.20.1 # MIT | | bareon | requirements.txt | 5 | eventlet!=0.18.3,>=0.18.2 # MIT | | bilean | requirements.txt | 7 | eventlet>=0.17.4 | | blazar | requirements.txt | 8 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | ceilometer-zvm | requirements.txt | 1 | eventlet>=0.17.4 | | ci-cd-pipeline-app-murano | murano-apps/LBaaS-interface/package/Resources/scripts/lbaas_api-0.1/requirements.txt | 6 | eventlet>=0.17.4 | | cinder | requirements.txt | 10 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | cloudkitty | requirements.txt | 6 | eventlet!=0.18.3,>=0.18.2 # MIT | | cloudkitty | rtd-requirements.txt | 5 | eventlet>=0.17.4 | | cloudpulse | requirements.txt | 8 | eventlet!=0.18.3,>=0.18.2 # MIT | | compute-hyperv | requirements.txt | 17 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | congress | requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | cyborg | requirements.txt | 9 | eventlet>=0.12.0,!=0.18.3,!=0.20.1,!=0.21.0 # MIT | | daisycloud-core | code/daisy/requirements.txt | 10 | eventlet>=0.16.1,!=0.17.0 | | daisycloud-core | code/daisy-discoverd/requirements.txt | 1 | eventlet>=0.15.1,<0.16.0 | | designate | requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | ec2-api | requirements.txt | 7 | eventlet!=0.18.3,!=0.20.1,!=0.21.0 # MIT | | faafo | requirements.txt | 6 | eventlet>=0.17.4 | | fuel-agent | requirements.txt | 2 | eventlet>=0.17.4 | | futurist | test-requirements.txt | 7 | # Used for making sure the eventlet executors work. 
| | futurist | test-requirements.txt | 8 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | gce-api | requirements.txt | 5 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | glance | requirements.txt | 10 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | glance_store | requirements.txt | 11 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | glare | requirements.txt | 9 | eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT | | hacking | test-requirements.txt | 15 | # since eventlet is such a common universal import, add it to the hacking test | | hacking | test-requirements.txt | 19 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | heat | requirements.txt | 9 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | heat-cfnclient | requirements.txt | 3 | eventlet>=0.15.2 | | iotronic | requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | iotronic | test-requirements.txt | 7 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | ironic | requirements.txt | 8 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | ironic-inspector | requirements.txt | 8 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | ironic-lib | test-requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | ironic-python-agent | requirements.txt | 5 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | karbor | requirements.txt | 9 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | kiloeyes | requirements.txt | 20 | eventlet!=0.18.3,>=0.18.2 | | kingbird | requirements.txt | 11 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | kuryr-kubernetes | requirements.txt | 10 | eventlet!=0.18.3,!=0.20.1,!=0.21.0,>=0.18.2 # MIT | | magnum | requirements.txt | 18 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | manila | requirements.txt | 10 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | meteos | requirements.txt | 10 | eventlet!=0.18.3,>=0.18.2 # MIT | | mistral | requirements.txt | 10 | eventlet!=0.20.1,>=0.20.0 # MIT | | mistral | setup.cfg | 104 | eventlet = futurist:GreenThreadPoolExecutor | | mistral-extra | 
examples/v2/calculator/requirements.txt | 6 | eventlet!=0.18.3,>=0.18.2 # MIT | | mixmatch | requirements.txt | 7 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | mogan | requirements.txt | 8 | eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT | | monasca-agent | requirements.txt | 27 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | monasca-api | requirements.txt | 22 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | monasca-events-api | requirements.txt | 17 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | monasca-log-api | requirements.txt | 17 | eventlet!=0.18.3,!=0.20.1,!=0.21.0,>=0.18.2 # MIT | | murano | requirements.txt | 9 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | murano-agent | requirements.txt | 6 | eventlet>=0.20.0,!=0.20.1,!=0.21.0 # MIT | | networking-ale-omniswitch | requirements.txt | 7 | eventlet>=0.17.4 | | networking-avaya | requirements.txt | 6 | eventlet!=0.18.3,>=0.18.2 | | networking-brocade | requirements.txt | 6 | eventlet!=0.18.3,>=0.18.2 # MIT | | networking-calico | requirements.txt | 7 | eventlet!=0.18.3,!=0.20.1,!=0.21.0,!=0.23.0,>=0.18.2 # MIT | | networking-cisco | test-requirements.txt | 29 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | networking-hyperv | requirements.txt | 7 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | networking-mlnx | requirements.txt | 8 | eventlet!=0.18.3,>=0.18.2 # MIT | | networking-nec | requirements.txt | 9 | eventlet!=0.18.3,>=0.18.2 # MIT | | networking-powervm | requirements.txt | 8 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | networking-sfc | requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | neutron | requirements.txt | 10 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | neutron-dynamic-routing | requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | neutron-fwaas | requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | neutron-lbaas | requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | neutron-tempest-plugin | requirements.txt | 19 | 
eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | nova | requirements.txt | 8 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | nova-zvm-virt-driver | test-requirements.txt | 19 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | os-brick | requirements.txt | 7 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | os-collect-config | requirements.txt | 7 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | os-net-config | requirements.txt | 7 | eventlet!=0.18.3,>=0.18.2 # MIT | | os-win | requirements.txt | 8 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | os-xenapi | requirements.txt | 8 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | oslo.concurrency | test-requirements.txt | 17 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | oslo.db | test-requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | oslo.messaging | setup.cfg | 54 | eventlet = futurist:GreenThreadPoolExecutor | | oslo.messaging | test-requirements.txt | 32 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | oslo.privsep | requirements.txt | 11 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | oslo.reports | test-requirements.txt | 15 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | oslo.rootwrap | test-requirements.txt | 21 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | oslo.service | requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | oslo.vmware | requirements.txt | 19 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | packetary | requirements.txt | 8 | eventlet!=0.18.3,>=0.18.2 # MIT | | picasso | examples/python-swiftfunctionsmiddleware/requirements.txt | 8 | eventlet>=0.17.4 # MIT | | radar | requirements.txt | 19 | eventlet>=0.13.0 | | ranger-agent | requirements.txt | 17 | eventlet!=0.18.3,>=0.18.2 | | requirements | global-requirements.txt | 57 | # NOTE: New versions of eventlet should not be accepted lightly | | requirements | global-requirements.txt | 59 | eventlet!=0.18.3,!=0.20.1,!=0.21.0,!=0.23.0 # MIT | | requirements | openstack_requirements/tests/files/gr-base.txt | 14 | eventlet>=0.12.0 | | 
requirements | openstack_requirements/tests/files/project-with-bad-requirement.txt | 9 | eventlet>=0.9.12 | | requirements | openstack_requirements/tests/files/project-with-oslo-tar.txt | 9 | eventlet>=0.9.17 | | requirements | openstack_requirements/tests/files/project.txt | 9 | eventlet>=0.9.12 | | requirements | openstack_requirements/tests/files/upper-constraints.txt | 127 | eventlet===0.19.0 | | requirements | openstack_requirements/tests/test_update.py | 56 | eventlet>=0.9.12 -> eventlet>=0.12.0 | | requirements | openstack_requirements/tests/test_update.py | 167 | eventlet>=0.9.12 -> eventlet>=0.12.0 | | requirements | openstack_requirements/tests/test_update.py | 199 | eventlet>=0.9.12 -> eventlet>=0.12.0 | | sahara | requirements.txt | 11 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | searchlight | requirements.txt | 12 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | senlin | requirements.txt | 8 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | solum | requirements.txt | 4 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | storlets | requirements.txt | 8 | eventlet>=0.17.4 # MIT | | storyboard | requirements.txt | 23 | eventlet>=0.13.0 | | stx-config | sysinv/sysinv/sysinv/requirements.txt | 6 | eventlet==0.20.0 | | swauth | requirements.txt | 5 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | swift | requirements.txt | 6 | eventlet>=0.17.4,!=0.23.0 # MIT | | swift-bench | requirements.txt | 2 | eventlet>=0.17.4 # MIT | | swiftonfile | requirements.txt | 6 | eventlet>=0.16.1,!=0.17.0 | | swiftonhpss | requirements.txt | 6 | eventlet>=0.16.1,!=0.17.0 | | tacker | requirements.txt | 11 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | taskflow | doc/requirements.txt | 19 | eventlet!=0.18.3,!=0.20.1,!=0.21.0,>=0.18.2 # MIT | | taskflow | setup.cfg | 68 | eventlet = | | taskflow | setup.cfg | 69 | eventlet!=0.18.3,!=0.20.1,!=0.21.0,>=0.18.2 # MIT | | taskflow | test-requirements.txt | 16 | eventlet!=0.18.3,!=0.20.1,!=0.21.0,>=0.18.2 # MIT | | terracotta | requirements.txt 
| 7 | eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT | | tricircle | requirements.txt | 11 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | trio2o | requirements.txt | 12 | eventlet!=0.18.3,>=0.18.2 # MIT | | trove | requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | upstream-institute-virtual-environment | elements/upstream-training/static/tmp/requirements.txt | 47 | eventlet==0.20.0 | | valence | requirements.txt | 7 | eventlet>=0.18.2,!=0.18.3,!=0.20.1,<0.21.0 # MIT | | vitrage | requirements.txt | 45 | eventlet!=0.20.1,>=0.20.0 # MIT | | vmware-nsx | requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | vmware-nsxlib | requirements.txt | 7 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | | zun | requirements.txt | 6 | eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT | +----------------------------------------+--------------------------------------------------------------------------------------+------+--------------------------------------------------------------------------------+ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From johnsomor at gmail.com Fri Aug 31 03:24:32 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 30 Aug 2018 20:24:32 -0700 Subject: [openstack-dev] [octavia] Proposing Carlos Goncalves (cgoncalves) as an Octavia core reviewer Message-ID: Hello Octavia community, I would like to propose Carlos Goncalves as a core reviewer on the Octavia project. Carlos has provided numerous enhancements to the Octavia project, including setting up the grenade gate for Octavia upgrade testing. Over the last few releases he has also been providing quality reviews, in line with the other core reviewers [1]. I feel that Carlos would make an excellent addition to the Octavia core reviewer team. 
Existing Octavia core reviewers, please reply to this email with your support or concerns with adding Carlos to the core team. Michael [1] http://stackalytics.com/report/contribution/octavia-group/90 From hudayou at hotmail.com Fri Aug 31 03:29:00 2018 From: hudayou at hotmail.com (Jacky Hu) Date: Fri, 31 Aug 2018 03:29:00 +0000 Subject: [openstack-dev] [octavia] Proposing Carlos Goncalves (cgoncalves) as an Octavia core reviewer In-Reply-To: References: Message-ID: +1 Definitely a good contributor for the octavia community. Sent from my iPhone > On Aug 31, 2018, at 11:24 AM, Michael Johnson wrote: > > Hello Octavia community, > > I would like to propose Carlos Goncalves as a core reviewer on the > Octavia project. > > Carlos has provided numerous enhancements to the Octavia project, > including setting up the grenade gate for Octavia upgrade testing. > > Over the last few releases he has also been providing quality reviews, > in line with the other core reviewers [1]. I feel that Carlos would > make an excellent addition to the Octavia core reviewer team. > > Existing Octavia core reviewers, please reply to this email with your > support or concerns with adding Carlos to the core team. > > Michael > > [1] http://stackalytics.com/report/contribution/octavia-group/90 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From adriant at catalyst.net.nz Fri Aug 31 05:51:15 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Fri, 31 Aug 2018 17:51:15 +1200 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon!
In-Reply-To: <1535671314-sup-5525@lrrr.local> References: <1535671314-sup-5525@lrrr.local> Message-ID: <47097489-0f7f-3b2e-31fc-8c2944c0a2da@catalyst.net.nz> Adjutant should be good to go. I don't believe there are any blockers (unless I've missed some). On 31/08/18 11:24 AM, Doug Hellmann wrote: > Below is the list of project teams that have not yet started migrating > their zuul configuration. If you're ready to go, please respond to this > email to let us know so we can start proposing patches.
> >Doug > >| adjutant | 3 repos | >| barbican | 5 repos | >| Chef OpenStack | 19 repos | >| cinder | 6 repos | >| cloudkitty | 5 repos | >| I18n | 2 repos | >| Infrastructure | 158 repos | >| loci | 1 repos | >| nova | 6 repos | >| OpenStack Charms | 80 repos | >| Packaging-rpm | 4 repos | >| Puppet OpenStack | 47 repos | >| Quality Assurance | 22 repos | >| Telemetry | 8 repos | >| trove | 5 repos | > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mehdi Abaakouk mail: sileht at sileht.net irc: sileht From mchandras at suse.de Fri Aug 31 07:40:57 2018 From: mchandras at suse.de (Markos Chandras) Date: Fri, 31 Aug 2018 08:40:57 +0100 Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core In-Reply-To: References: Message-ID: <4ffc72f5-f91e-3885-41bc-92920f361b0f@suse.de> On 30/08/18 18:40, Andy McCrae wrote: > Now that Rocky is all but ready it seems like a good time! Since > changing roles I've not been able to keep up enough focus on reviews and > other obligations - so I think it's time to step aside as a core reviewer. > > I want to say thanks to everybody in the community, I'm really proud to > see the work we've done and how the OSA team has grown. I've learned a > tonne from all of you - it's definitely been a great experience. > > Thanks, > Andy > > Hello Andy, It is sad to see you go. Thank you very much for everything you've done for OpenStack-Ansible and for trusting me as a core reviewer when I was still relatively new to the project. Best of luck on your new role :) -- markos SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton HRB 21284 (AG Nürnberg) Maxfeldstr.
5, D-90409, Nürnberg From jlibosva at redhat.com Fri Aug 31 08:24:35 2018 From: jlibosva at redhat.com (Jakub Libosvar) Date: Fri, 31 Aug 2018 10:24:35 +0200 Subject: [openstack-dev] [Neutron] Stepping down from Neutron core team Message-ID: Hi all, as you might have already heard, I'm no longer involved in Neutron development due to some changes. Therefore I'm officially stepping down from the core team because I can't provide the same quality of reviews as I tried to do before. I'd like to thank you all for the opportunity I was given in the Neutron team, thank you for all I have learned over the years professionally, technically and personally. Tomorrow it's gonna be exactly 5 years since I started hacking Neutron and I must say I really enjoyed working with all Neutrinos here and I had the privilege to meet most of you in person and that has an extreme value for me. Keep on being a great community! Thank you again! Kuba From eumel at arcor.de Fri Aug 31 08:26:23 2018 From: eumel at arcor.de (Frank Kloeker) Date: Fri, 31 Aug 2018 10:26:23 +0200 Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core In-Reply-To: <7558af44-e1aa-3fe9-4cf8-d9588f9d64a5@gmail.com> References: <7558af44-e1aa-3fe9-4cf8-d9588f9d64a5@gmail.com> Message-ID: <5bc993edb71907a42bfaaaa13b4362ba@arcor.de> Hey Andy, I can only underline what Ian says. It was a pleasure to work with you, many thanks for your kind support all the time. Keep your ears stiff, as we say in Germany. And good luck :) kind regards Frank On 2018-08-30 20:05, Ian Y. Choi wrote: > Hello Andy, > > Thanks a lot for your work on the OpenStack-Ansible team. > > I was very happy to collaborate with you across different teams (me: I18n > team) during the Ocata and Pike release cycles, > and I think the I18n team now has better insight on OpenStack-Ansible > thanks to the help from you and so many kind contributors.
> > > With many thanks, > > /Ian > > Andy McCrae wrote on 8/31/2018 2:40 AM: >> Now that Rocky is all but ready it seems like a good time! Since >> changing roles I've not been able to keep up enough focus on reviews >> and other obligations - so I think it's time to step aside as a core >> reviewer. >> >> I want to say thanks to everybody in the community, I'm really proud >> to see the work we've done and how the OSA team has grown. I've >> learned a tonne from all of you - it's definitely been a great >> experience. >> >> Thanks, >> Andy >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From balazs.gibizer at ericsson.com Fri Aug 31 08:29:25 2018 From: balazs.gibizer at ericsson.com (Balázs Gibizer) Date: Fri, 31 Aug 2018 10:29:25 +0200 Subject: [openstack-dev] [neutron][nova] Small bandwidth demo on the PTG In-Reply-To: <4bb21c51-0092-70f3-a535-8fa59adae7ae@gmail.com> References: <1535619300.3600.5@smtp.office365.com> <4bb21c51-0092-70f3-a535-8fa59adae7ae@gmail.com> Message-ID: <1535704165.17206.0@smtp.office365.com> On Thu, Aug 30, 2018 at 8:13 PM, melanie witt wrote: > On Thu, 30 Aug 2018 12:43:06 -0500, Miguel Lavalle wrote: >> Gibi, Bence, >> >> In fact, I added the demo explicitly to the Neutron PTG agenda from >> 1:30 to 2, to give it visibility > > I'm interested in seeing the demo too. Will the demo be shown at the > Neutron room or the Nova room?
Historically, lunch has ended at 1:30, > so this will be during the same time as the Neutron/Nova cross > project time. Should we just co-locate together for the demo and the > session? I expect anyone watching the demo will want to participate > in the Neutron/Nova session as well. Either room is fine by me. > I assume that the nova - neutron cross project session will be in the nova room, so I propose to have the demo there as well to avoid unnecessarily moving people around. For us it is totally OK to start the demo at 1:30. Cheers, gibi > -melanie > >> On Thu, Aug 30, 2018 at 3:55 AM, Balázs Gibizer >> > >> wrote: >> >> Hi, >> >> Based on the Nova PTG planning etherpad [1] there is a need to >> talk >> about the current state of the bandwidth work [2][3]. Bence >> (rubasov) has already planned to show a small demo to Neutron >> folks >> about the current state of the implementation. So Bence and I are >> wondering about bringing that demo close to the nova - neutron >> cross >> project session. That session is currently planned to happen >> Thursday after lunch. So we are thinking about showing the demo >> right >> before that session starts. It would start 30 minutes before the >> nova - neutron cross project session. >> >> Are Nova folks also interested in seeing such a demo? >> >> If you are interested in seeing the demo please drop us a line or >> ping us in IRC so we know who we should wait for.
>> >> Cheers, >> gibi >> >> [1] https://etherpad.openstack.org/p/nova-ptg-stein >> >> [2] >> >> https://specs.openstack.org/openstack/neutron-specs/specs/rocky/minimum-bandwidth-allocation-placement-api.html >> >> >> [3] >> >> https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/bandwidth-resource-provider.html >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sfinucan at redhat.com Fri Aug 31 08:35:55 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 31 Aug 2018 09:35:55 +0100 Subject: [openstack-dev] [Openstack-operators] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180831000334.GR26778@thor.bakeyournoodle.com> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180830211257.oa6hxd4pningzqf4@yuggoth.org> <20180831000334.GR26778@thor.bakeyournoodle.com> Message-ID: On Fri, 2018-08-31 at 10:03 +1000, Tony Breeds wrote: > On Thu, Aug 30, 2018 at 09:12:57PM +0000, Jeremy Stanley wrote: > > On 2018-08-31 01:13:58 +0800 (+0800), Rico Lin wrote: > > [...] 
> > > What needs to be done for this is full topic categories support > > > under `options` page so people get to filter emails properly. > > > > [...] > > > > Unfortunately, topic filtering is one of the MM2 features the > > Mailman community decided nobody used (or at least not enough to > > warrant preserving it in MM3). I do think we need to be consistent > > about tagging subjects to make client-side filtering more effective > > for people who want that, but if we _do_ want to be able to upgrade > > we shouldn't continue to rely on server-side filtering support in > > Mailman unless we can somehow work with them to help in > > reimplementing it. > > The suggestion is to implement it as a 3rd party plugin or work with the > mm community to implement: > https://wiki.mailman.psf.io/DEV/Dynamic%20Sublists > > So if we decide we really want that in mm3 we have options. > > Yours Tony. I've tinkered with mailman 3 before so I could probably take a shot at this over the next few week(end)s; however, I've no idea how this feature is supposed to work. Any chance an admin of the current list could send me a couple of screenshots of the feature in mailman 2 along with a brief description of the feature? Alternatively, maybe we could upload them to the wiki page Tony linked above or, better yet, to the technical details page for same: https://wiki.mailman.psf.io/DEV/Brief%20Technical%20Details Cheers, Stephen From dalvarez at redhat.com Fri Aug 31 08:38:18 2018 From: dalvarez at redhat.com (Daniel Alvarez Sanchez) Date: Fri, 31 Aug 2018 10:38:18 +0200 Subject: [openstack-dev] [Neutron] Stepping down from Neutron core team In-Reply-To: References: Message-ID: Thanks a lot Kuba for all your contributions! You've been a great mentor to me since I joined OpenStack and I'm so happy that I got to work with you. Great engineer and even better person! All the best, my friend!
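Returning to the list-consolidation thread above: if server-side topic filtering does go away with the Mailman 3 upgrade, the consistent subject tags Jeremy mentions are enough for client-side sorting. A minimal sketch using only Python's stdlib email machinery (the tag list and sample subjects are invented for illustration, not a prescription for any particular mail client):

```python
from email.message import EmailMessage

# Project tags we want to keep, lowercased, as they appear in list subjects.
WANTED_TAGS = ("[nova]", "[neutron]", "[tc]")

def matches(subject: str, tags=WANTED_TAGS) -> bool:
    """Return True if the subject carries at least one wanted project tag."""
    lowered = subject.lower()
    return any(tag in lowered for tag in tags)

# Build a couple of messages the way a client-side filter would see them.
msg = EmailMessage()
msg["Subject"] = "[openstack-dev] [Nova] Stein PTG planning"
print(matches(str(msg["Subject"])))    # True

other = EmailMessage()
other["Subject"] = "[openstack-dev] [octavia] core reviewer nomination"
print(matches(str(other["Subject"])))  # False
```

The same predicate can drive procmail, sieve, or a script walking an mbox; the point is only that disciplined subject tagging makes a merged openstack-discuss list filterable without any server-side support.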
On Fri, Aug 31, 2018 at 10:25 AM Jakub Libosvar wrote: > Hi all, > > as you might have already heard, I'm no longer involved in Neutron > development due to some changes. Therefore I'm officially stepping down > from the core team because I can't provide the same quality of reviews as I > tried to do before. > > I'd like to thank you all for the opportunity I was given in the Neutron > team, thank you for all I have learned over the years professionally, > technically and personally. Tomorrow it's gonna be exactly 5 years since > I started hacking Neutron and I must say I really enjoyed working with > all Neutrinos here and I had the privilege to meet most of you in person and > that has an extreme value for me. Keep on being a great community! > > Thank you again! > Kuba > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri Aug 31 08:41:14 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 31 Aug 2018 10:41:14 +0200 Subject: [openstack-dev] [Openstack-operators] [ironic][tripleo][edge] Discussing ironic federation and distributed deployments In-Reply-To: References: Message-ID: <61f07d29-185b-7f9a-b0a8-311272c4fd4d@redhat.com> On 08/30/2018 07:29 PM, Emilien Macchi wrote: > > > On Thu, Aug 30, 2018 at 1:21 PM Julia Kreger > wrote: > > Greetings everyone, > > It looks like the most agreeable time on the doodle[1] seems to be > Tuesday September 4th at 13:00 UTC. Are there any objections to using > this time? > > If not, I'll go ahead and create an etherpad, and setup a bluejeans > call for that time to enable high bandwidth discussion. > > > TripleO sessions start on Wednesday, so +1 from us (unless I missed something).
This is about a call a week before the PTG, not the PTG itself. You're still very welcome to join! Dmitry > -- > Emilien Macchi > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jaosorior at redhat.com Fri Aug 31 08:45:07 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Fri, 31 Aug 2018 11:45:07 +0300 Subject: [openstack-dev] [tripleo] PTG topics and agenda In-Reply-To: References: <0c407d93-8809-8c1c-4d1b-11a9e797cb90@redhat.com> Message-ID: <0b5d0243-42ea-7359-c5d1-25b748897770@redhat.com> Thanks, merged the topics. On 08/30/2018 07:10 PM, Giulio Fidente wrote: > On 8/28/18 2:50 PM, Juan Antonio Osorio Robles wrote: >> Hello folks! >> >> >> With the PTG being quite soon, I just wanted to remind folks to add your >> topics on the etherpad: https://etherpad.openstack.org/p/tripleo-ptg-stein > thanks Juan, > > I think the Edge (line 53) and Split Control Plane (line 74) sessions > can probably be merged into a single one. > > I'd be fine with James driving it, I think it'd be fine to discuss the > "control plane updates" issue [1] in that same session. > > 1. > http://lists.openstack.org/pipermail/openstack-dev/2018-August/133247.html > From skaplons at redhat.com Fri Aug 31 08:49:42 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 31 Aug 2018 10:49:42 +0200 Subject: [openstack-dev] [Neutron] Stepping down from Neutron core team In-Reply-To: References: Message-ID: <131487DD-0E85-40C1-BEF9-265FAC2DDF58@redhat.com> It’s sad news. Thanks Kuba for all the help you gave me when I was a newcomer in the Neutron community. Good luck in your next projects :)
> On 31.08.2018, at 10:24, Jakub Libosvar wrote: > > Hi all, > > as you might have already heard, I'm no longer involved in Neutron > development due to some changes. Therefore I'm officially stepping down > from the core team because I can't provide the same quality of reviews as I > tried to do before. > > I'd like to thank you all for the opportunity I was given in the Neutron > team, thank you for all I have learned over the years professionally, > technically and personally. Tomorrow it's gonna be exactly 5 years since > I started hacking Neutron and I must say I really enjoyed working with > all Neutrinos here and I had the privilege to meet most of you in person and > that has an extreme value for me. Keep on being a great community! > > Thank you again! > Kuba > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From tobias.urdin at binero.se Fri Aug 31 08:49:56 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Fri, 31 Aug 2018 10:49:56 +0200 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: <1535671314-sup-5525@lrrr.local> References: <1535671314-sup-5525@lrrr.local> Message-ID: <14d4593e-57db-7f93-961c-585b9e5204f9@binero.se> Hello Doug, I've proposed moving all job config from project-config to the repos [1]. I don't know what to do with the periodic job here [2]; should that be left in project-config or moved?
Best regards Tobias [1] https://review.openstack.org/#/q/topic:move-zuul-config [2] https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10891 On 08/31/2018 01:34 AM, Doug Hellmann wrote: > Below is the list of project teams that have not yet started migrating > their zuul configuration. If you're ready to go, please respond to this > email to let us know so we can start proposing patches. > > Doug > > | adjutant | 3 repos | > | barbican | 5 repos | > | Chef OpenStack | 19 repos | > | cinder | 6 repos | > | cloudkitty | 5 repos | > | I18n | 2 repos | > | Infrastructure | 158 repos | > | loci | 1 repos | > | nova | 6 repos | > | OpenStack Charms | 80 repos | > | Packaging-rpm | 4 repos | > | Puppet OpenStack | 47 repos | > | Quality Assurance | 22 repos | > | Telemetry | 8 repos | > | trove | 5 repos | > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thierry at openstack.org Fri Aug 31 08:54:26 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 31 Aug 2018 10:54:26 +0200 Subject: [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <95b9f8c7-0ea1-306d-453f-13ae1713e334@openstack.org> Jeremy Stanley wrote: > [...] > The proposal is simple: create a new openstack-discuss mailing list > to cover all the above sorts of discussion and stop using the other > four. [...] > > Also, in case you were wondering, no the irony of cross-posting this > message to four mailing lists is not lost on me. 
;) As someone who just had to process a dozen ML moderation requests about non-member posting to lists due to replies to a cross-posted topic, I wholeheartedly support the list merging :) -- Thierry Carrez (ttx) From christophe.sauthier at objectif-libre.com Fri Aug 31 09:20:33 2018 From: christophe.sauthier at objectif-libre.com (Christophe Sauthier) Date: Fri, 31 Aug 2018 11:20:33 +0200 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: <1535671314-sup-5525@lrrr.local> References: <1535671314-sup-5525@lrrr.local> Message-ID: <5a0ea1129dc9ecaf64f52668255ea4b6@objectif-libre.com> Hello Doug On 2018-08-31 01:24, Doug Hellmann wrote: > Below is the list of project teams that have not yet started migrating > their zuul configuration. If you're ready to go, please respond to > this > email to let us know so we can start proposing patches. > > Doug > > | adjutant | 3 repos | > | barbican | 5 repos | > | Chef OpenStack | 19 repos | > | cinder | 6 repos | > | cloudkitty | 5 repos | > | I18n | 2 repos | > | Infrastructure | 158 repos | > | loci | 1 repos | > | nova | 6 repos | > | OpenStack Charms | 80 repos | > | Packaging-rpm | 4 repos | > | Puppet OpenStack | 47 repos | > | Quality Assurance | 22 repos | > | Telemetry | 8 repos | > | trove | 5 repos | We are ready to start on the cloudkitty team!
Christophe ---- Christophe Sauthier CEO Objectif Libre : Au service de votre Cloud +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com https://www.objectif-libre.com | @objectiflibre Recevez la Pause Cloud Et DevOps : https://olib.re/abo-pause From Jesse.Pretorius at rackspace.co.uk Fri Aug 31 10:06:04 2018 From: Jesse.Pretorius at rackspace.co.uk (Jesse Pretorius) Date: Fri, 31 Aug 2018 10:06:04 +0000 Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core In-Reply-To: References: Message-ID: <174DB545-C631-476B-A9C0-4ECCD1939C2F@rackspace.co.uk> > From: Andy McCrae > Now that Rocky is all but ready it seems like a good time! Since changing roles I've not been able to keep up enough focus on reviews and other obligations - so I think it's time to step aside as a core reviewer. > I want to say thanks to everybody in the community, I'm really proud to see the work we've done and how the OSA team has grown. I've learned a tonne from all of you - it's definitely been a great experience. Right from the start of the project, back in Icehouse, your can-do attitude was inspiring. This carried through to your leadership of the project as PTL later. You are already missed. Thank you for everything you put into it and we wish you the best for your next endeavours. ________________________________ Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. Your cooperation is appreciated. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jistr at redhat.com Fri Aug 31 10:07:33 2018 From: jistr at redhat.com (Jiří Stránský) Date: Fri, 31 Aug 2018 12:07:33 +0200 Subject: [openstack-dev] [tripleo] quickstart for humans In-Reply-To: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> Message-ID: <4cd2fafa-f644-1c1f-56e4-010d1360cf04@redhat.com> On 30.8.2018 16:28, Honza Pokorny wrote: > Hello! > > Over the last few months, it seems that tripleo-quickstart has evolved > into a CI tool. It's primarily used by computers, and not humans. > tripleo-quickstart is a helpful set of ansible playbooks, and a > collection of feature sets. However, it's become less useful for > setting up development environments by humans. For example, devmode.sh > was recently deprecated without a user-friendly replacement. Moreover, > during some informal irc conversations in #oooq, some developers even > mentioned the plan to merge tripleo-quickstart and tripleo-ci.
> > These thoughts are coming from the context of tripleo-ui development. I > need an environment in order to develop, but I don't necessarily always > care about how it's installed. I want something that works for most > scenarios. > > What do you think? Does this make sense? Does something like this > already exist? I've been tinkering in this area for a long time, previously with inlunch [1], and now quicklunch [2] (which is a wrapper around quickstart), and i've been involved in various conversations about this over the past years, so i feel like i may have some insight to share on all this in general. * A single config for everyone is not achievable, IMO. Someone wants HA, others want Ceph, Sahara, OpenDaylight, etc. There's no overlap here to be found i think, while respecting that the resulting deployment needs to be of reasonable size. * "for humans" definition differs significantly based on who you ask. E.g. my intention with [2] was to readily expose *more* knobs and tweaks and be more transparent with the underlying workings of Ansible, because i felt like quickstart.sh hides too much from me. In my opinion [2] is sufficiently "for humans", yet it does pretty much the opposite of what you're looking for. * It's hard to strike a good balance between for-CI and for-humans (and the various definitions of for-humans ;) ), but it's worth to keep doing that as high in the software stack as possible, because there is great value in running CI and all dev envs with the same (underlying) tool. Over the years i've observed that Quickstart is trying hard to consolidate various requirements, but it's simply difficult to please all stakeholders, as often the requirements are somewhat contradictory. (This point is not in conflict with anything discussed so far, but i just think it's worth mentioning, so that we don't display Quickstart in a way it doesn't deserve.) 
These points are to illustrate my previous experience, that when we go above a certain layer of "this is a generic all-purpose configurable tool" (= Quickstart), it seems to yield better results to focus on building the next layer/wrapper for "humans like me" rather than "humans in general". So with regards to the specific goal stemming from tripleo-ui dev requirements as you mentioned above, IMO it makes sense to team up with UI folks and others who have similar expectations about what a TripleO dev env means, and make some wrapper around Quickstart like you suggested. Since you want to reduce rather than extend the number of knobs, it could even be just a script perhaps. My 2 cents, i hope it helps a bit. Jirka [1] https://github.com/jistr/inlunch [2] https://gitlab.com/jistr/quicklunch > > Thanks for listening! > > Honza > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From james.page at canonical.com Fri Aug 31 10:22:42 2018 From: james.page at canonical.com (James Page) Date: Fri, 31 Aug 2018 11:22:42 +0100 Subject: [openstack-dev] [upgrade][sig] Upgrade SIG/Stein PTG etherpad Message-ID: Hi Folks We have a half day planned on Monday afternoon in Denver for the customary discussion around OpenStack upgrades. I've started a pad here: https://etherpad.openstack.org/p/upgrade-sig-ptg-stein Please feel free to add ideas and indicate if you will be participating in the discussion. Cheers James -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From honjo.rikimaru at po.ntt-tx.co.jp Fri Aug 31 10:27:20 2018 From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo) Date: Fri, 31 Aug 2018 19:27:20 +0900 Subject: [openstack-dev] [nova-lxd]Feature support matrix of nova-lxd Message-ID: <084af1cf-7d31-6b5e-1bef-4fe1cc87d2ea@po.ntt-tx.co.jp> Hello, I'm planning to write a feature support matrix[1] of nova-lxd and add it to nova-lxd repository. A similar document exists as todo.txt[2], but this is old. Can I write it? If someone is writing the same document now, I'll stop writing. [1] It will be like this: https://docs.openstack.org/nova/latest/user/support-matrix.html [2] https://github.com/openstack/nova-lxd/blob/master/specs/todo.txt Best regards, -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp From cjeanner at redhat.com Fri Aug 31 10:35:04 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Fri, 31 Aug 2018 12:35:04 +0200 Subject: [openstack-dev] [tripleo] quickstart for humans In-Reply-To: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> Message-ID: <557387e5-8ef6-90c4-efcd-96f5119ec48a@redhat.com> On 08/30/2018 04:28 PM, Honza Pokorny wrote: > Hello! > > Over the last few months, it seems that tripleo-quickstart has evolved > into a CI tool. It's primarily used by computers, and not humans. > tripleo-quickstart is a helpful set of ansible playbooks, and a > collection of feature sets. However, it's become less useful for > setting up development environments by humans. For example, devmode.sh > was recently deprecated without a user-friendly replacement. Moreover, > during some informal irc conversations in #oooq, some developers even > mentioned the plan to merge tripleo-quickstart and tripleo-ci. 
> > I think it would be beneficial to create a set of defaults for > tripleo-quickstart that can be used to spin up new environments; a set > of defaults for humans. This can either be a well-maintained script in > tripleo-quickstart itself, or a brand new project, e.g. > tripleo-quickstart-humans. The number of settings, knobs, and flags > should be kept to a minimum. > > This would accomplish two goals: > > 1. It would bring uniformity to the team. Each environment is > installed the same way. When something goes wrong, we can > eliminate differences in setup when debugging. This should save a > lot of time. > > 2. Quicker and more reliable environment setup. If the set of defaults > is used by many people, it should container fewer bugs because more > people using something should translate into more bug reports, and > more bug fixes. > > These thoughts are coming from the context of tripleo-ui development. I > need an environment in order to develop, but I don't necessarily always > care about how it's installed. I want something that works for most > scenarios. > > What do you think? Does this make sense? Does something like this > already exist? Hello, As an exercise in order to learn a bit more ansible and refresh my deploy knowledge, I've create that simple thing: https://github.com/cjeanner/tripleo-lab It's "more or less generic", but it was probably never deployed outside my home infra - its aim is to provide a quick'n'dropable libvirt env, allowing some tweaking in a convenient way. That's not at quickstart level - but in order to boostrap an undercloud or a more complete env, it's more than enough. The other reason I made this was the feeling quickstart is a beast, not really easy to master - apparently I'm not the only one """fearing"""" it. I probably didn't dig deep enough. And I wanted to get my own thing, with some proxy/local mirror support in order to alleviate network traffic on my home line (it's fast, but still... it's faster on the LAN ;) ). 
Cheers, C. > > Thanks for listening! > > Honza > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From aj at suse.com Fri Aug 31 10:36:48 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 31 Aug 2018 12:36:48 +0200 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: <14d4593e-57db-7f93-961c-585b9e5204f9@binero.se> References: <1535671314-sup-5525@lrrr.local> <14d4593e-57db-7f93-961c-585b9e5204f9@binero.se> Message-ID: <4c886ac1-58e8-8169-f4a4-66b88d0fc8c1@suse.com> On 2018-08-31 10:49, Tobias Urdin wrote: > Hello Doug, > > I've proposed moving all job config from project-config to the repos [1]. > I don't know what to do with the periodic job here [2] should that be > left in project-config or moved? > Tobias, I'm sorry to see you doing this - but please abandon! You've done only part of the work and finishing it will complicate it. Doug has scripts for these and can easily run that. What Doug asked for was for an official "go ahead" from the puppet team, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From james.page at canonical.com Fri Aug 31 11:03:54 2018 From: james.page at canonical.com (James Page) Date: Fri, 31 Aug 2018 12:03:54 +0100 Subject: [openstack-dev] [nova-lxd]Feature support matrix of nova-lxd In-Reply-To: <084af1cf-7d31-6b5e-1bef-4fe1cc87d2ea@po.ntt-tx.co.jp> References: <084af1cf-7d31-6b5e-1bef-4fe1cc87d2ea@po.ntt-tx.co.jp> Message-ID: Hi Rikimaru On Fri, 31 Aug 2018 at 11:28 Rikimaru Honjo wrote: > Hello, > > I'm planning to write a feature support matrix[1] of nova-lxd and > add it to nova-lxd repository. > A similar document exists as todo.txt[2], but this is old. > > Can I write it? > Yes please! > If someone is writing the same document now, I'll stop writing. > They are not - please go ahead - this would be a valuable contribution for users evaluating this driver. Regards James -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Aug 31 11:04:08 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 31 Aug 2018 20:04:08 +0900 Subject: [openstack-dev] [tempest][qa][congress] trouble setting tempest feature flag In-Reply-To: References: Message-ID: <1658fa7921d.b0e101d446903.3018642758063447690@ghanshyammann.com> ---- On Wed, 29 Aug 2018 08:20:37 +0900 Eric K wrote ---- > Ha. Turned out to be a simple mistake in hyphens vs underscores. Thanks for the update and good to know it is resolved now. Sorry I could not check this further due to PTO. -gmann > On Tue, Aug 28, 2018 at 3:06 PM Eric K wrote: > > > > Any thoughts on what could be going wrong that the tempest tests still > > see the default conf values rather than those set here? Thanks lots! > > > > Here is the devstack log line showing the flags being set: > > http://logs.openstack.org/64/594564/4/check/congress-devstack-api-mysql/ce34264/logs/devstacklog.txt.gz#_2018-08-28_21_23_15_934 > > > > On Wed, Aug 22, 2018 at 9:12 AM Eric K wrote: > > > > > > Hi all, > > > > > > I have added feature flags for the congress tempest plugin [1] and set > > > them in the devstack plugin [2], but the flags seem to be ignored. The > > > tests are skipped [3] according to the default False flag rather than > > > run according to the True flag set in devstack plugin. Any hints on > > > what may be wrong? Thanks so much! > > > > > > [1] https://review.openstack.org/#/c/594747/3 > > > [2] https://review.openstack.org/#/c/594793/1/devstack/plugin.sh > > > [3] http://logs.openstack.org/64/594564/3/check/congress-devstack-api-mysql/b2cd46f/logs/testr_results.html.gz > > > (the bottom two skipped tests were expected to run) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From shardy at redhat.com Fri Aug 31 11:11:42 2018 From: shardy at redhat.com (Steven Hardy) Date: Fri, 31 Aug 2018 12:11:42 +0100 Subject: [openstack-dev] [tripleo] quickstart for humans In-Reply-To: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> Message-ID: On Thu, Aug 30, 2018 at 3:28 PM, Honza Pokorny wrote: > Hello! > > Over the last few months, it seems that tripleo-quickstart has evolved > into a CI tool. It's primarily used by computers, and not humans. > tripleo-quickstart is a helpful set of ansible playbooks, and a > collection of feature sets. However, it's become less useful for > setting up development environments by humans.
For example, devmode.sh > was recently deprecated without a user-friendly replacement. Moreover, > during some informal irc conversations in #oooq, some developers even > mentioned the plan to merge tripleo-quickstart and tripleo-ci. I was recently directed to the reproducer-quickstart.sh script that's written in the logs directory for all oooq CI jobs - does that help as a replacement for the previous devmode interface? Not that familiar with it myself but it seems to target many of the use-cases you mention e.g uniform reproducer for issues, potentially quicker way to replicate CI results? Steve From tobias.urdin at binero.se Fri Aug 31 11:31:24 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Fri, 31 Aug 2018 13:31:24 +0200 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: <4c886ac1-58e8-8169-f4a4-66b88d0fc8c1@suse.com> References: <1535671314-sup-5525@lrrr.local> <14d4593e-57db-7f93-961c-585b9e5204f9@binero.se> <4c886ac1-58e8-8169-f4a4-66b88d0fc8c1@suse.com> Message-ID: <664b12c4-4a98-b786-7353-d716271478cf@binero.se> Oh, that's bad. I will abandon. It's a go for Puppet then. Best regards On 08/31/2018 12:36 PM, Andreas Jaeger wrote: > On 2018-08-31 10:49, Tobias Urdin wrote: >> Hello Doug, >> >> I've proposed moving all job config from project-config to the repos [1]. >> I don't know what to do with the periodic job here [2] should that be >> left in project-config or moved? >> > Tobias, I'm sorry to see you doing this - but please abandon! You've > done only part of the work and finishing it will complicate it. Doug has > scripts for these and can easily run that. 
What Doug asked for was for > an official "go ahead" from the puppet team, > > Andreas From jim at jimrollenhagen.com Fri Aug 31 11:44:44 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 31 Aug 2018 07:44:44 -0400 Subject: [openstack-dev] [kolla][tripleo][oslo][all] Bumping eventlet to 0.24.1 In-Reply-To: <20180831031004.nou6m2y3dfcoxadk@gentoo.org> References: <20180823145013.vzt46kgd7d7lkmkj@gentoo.org> <20180831015246.z4zvjp3lkb2yegis@gentoo.org> <20180831031004.nou6m2y3dfcoxadk@gentoo.org> Message-ID: On Thu, Aug 30, 2018 at 11:10 PM, Matthew Thode wrote: > On 18-08-30 20:52:46, Matthew Thode wrote: > > On 18-08-23 09:50:13, Matthew Thode wrote: > > > This is your warning, if you have concerns please comment in > > > https://review.openstack.org/589382 . cross tests pass, so that's a > > > good sign... atm this is only for stein. > > > > > > > Consider yourself on notice, https://review.openstack.org/589382 is > > planned to be merged on monday. > > > > A bit more of follow up since that was so dry. There are some projects > that have not branched (mainly cycle-trailing and plugins). > > There has historically been some breakage with each eventlet update, > this one is not expected to be much different unfortunately. Currently > there are known issues with oslo.service but they look solvable. A list > of all projects using eventlet is attached. > > The full list of non-branched projects will be at the bottom of this > message, but the projects that I think should be more careful are the > following. > > kolla kolla-ansible heat-agents heat-dashboard tripleo-ipsec > > the rest of the repos seem to be plugins, which I'm personally less > concerned about, but should still be branched (preferably sooner rather > than later). 
> Tempest plugins, like tempest, are not meant to be branched: http://lists.openstack.org/pipermail/openstack-dev/2018-August/133211.html // jim > > ansible-role-container-registry > ansible-role-redhat-subscription > ansible-role-tripleo-modify-image > barbican-tempest-plugin > blazar-tempest-plugin > cinder-tempest-plugin > cloudkitty-tempest-plugin > congress-tempest-plugin > designate-tempest-plugin > devstack-plugin-amqp1 > devstack-plugin-kafka > ec2api-tempest-plugin > heat-agents > heat-dashboard > heat-tempest-plugin > ironic-tempest-plugin > keystone-tempest-plugin > kolla-ansible > kolla > kuryr-tempest-plugin > magnum-tempest-plugin > manila-tempest-plugin > mistral-tempest-plugin > monasca-kibana-plugin > monasca-tempest-plugin > murano-tempest-plugin > networking-generic-switch-tempest-plugin > neutron-tempest-plugin > octavia-tempest-plugin > oswin-tempest-plugin > patrole > release-test > sahara-tests > senlin-tempest-plugin > solum-tempest-plugin > telemetry-tempest-plugin > tempest-tripleo-ui > tempest > tripleo-ipsec > trove-tempest-plugin > vitrage-tempest-plugin > watcher-tempest-plugin > zaqar-tempest-plugin > zun-tempest-plugin > > -- > Matthew Thode (prometheanfire) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From opentastic at gmail.com Fri Aug 31 11:48:59 2018 From: opentastic at gmail.com (Edward Hope-Morley) Date: Fri, 31 Aug 2018 12:48:59 +0100 Subject: [openstack-dev] [charms] Deployment guide stable/rocky cut In-Reply-To: References: Message-ID: <024de94f-a194-8811-c5fa-8cfdaf367a16@gmail.com> Hi Frode, I think it would be a good idea to add a link to the charm deployment guide at the following page: https://docs.openstack.org/rocky/deploy/ - Ed On 17/08/18 08:47, Frode Nordahl wrote: > Hello OpenStack charmers, > > I am writing to inform you that  a `stable/rocky` branch has been cut > for the `openstack/charm-deployment-guide` repository. > > Should there be any further updates to the guide before the release > the changes will need to be landed in `master` and then back-ported to > `stable/rocky`. > > -- > Frode Nordahl > Software Engineer > Canonical Ltd. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Fri Aug 31 11:50:52 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 31 Aug 2018 07:50:52 -0400 Subject: [openstack-dev] [Openstack-operators] [ironic][tripleo][edge] Discussing ironic federation and distributed deployments In-Reply-To: <61f07d29-185b-7f9a-b0a8-311272c4fd4d@redhat.com> References: <61f07d29-185b-7f9a-b0a8-311272c4fd4d@redhat.com> Message-ID: On Fri, Aug 31, 2018 at 4:42 AM Dmitry Tantsur wrote: > This is about a call a week before the PTG, not the PTG itself. You're > still > very welcome to join! > It's good too! Our TripleO IRC meeting is at 14 UTC. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Fri Aug 31 11:53:49 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 31 Aug 2018 07:53:49 -0400 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: References: <1535671314-sup-5525@lrrr.local> Message-ID: <1535716394-sup-8932@lrrr.local> Excerpts from Samuel Cassiba's message of 2018-08-30 16:50:30 -0700: > On Thu, Aug 30, 2018 at 4:24 PM, Doug Hellmann wrote: > > Below is the list of project teams that have not yet started migrating > > their zuul configuration. If you're ready to go, please respond to this > > email to let us know so we can start proposing patches. > > > > Doug > >
> > | adjutant | 3 repos |
> > | barbican | 5 repos |
> > | Chef OpenStack | 19 repos |
> > | cinder | 6 repos |
> > | cloudkitty | 5 repos |
> > | I18n | 2 repos |
> > | Infrastructure | 158 repos |
> > | loci | 1 repos |
> > | nova | 6 repos |
> > | OpenStack Charms | 80 repos |
> > | Packaging-rpm | 4 repos |
> > | Puppet OpenStack | 47 repos |
> > | Quality Assurance | 22 repos |
> > | Telemetry | 8 repos |
> > | trove | 5 repos |
> >
On behalf of Chef OpenStack, that one is good to go.
> > Best, > Samuel (scas) > It looks like most of the settings for the Chef repos are already in-tree, so there are just these 2 patches to consider:

+-----------------------------------------------------+--------------------------------+-------------------------------------+--------+
| Subject | Repo | URL | Branch |
+-----------------------------------------------------+--------------------------------+-------------------------------------+--------+
| remove job settings for Chef OpenStack repositories | openstack-infra/project-config | https://review.openstack.org/598614 | master |
| import zuul job settings from project-config | openstack/openstack-chef-specs | https://review.openstack.org/598613 | master |
+-----------------------------------------------------+--------------------------------+-------------------------------------+--------+

From doug at doughellmann.com Fri Aug 31 11:59:02 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 31 Aug 2018 07:59:02 -0400 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: <47097489-0f7f-3b2e-31fc-8c2944c0a2da@catalyst.net.nz> References: <1535671314-sup-5525@lrrr.local> <47097489-0f7f-3b2e-31fc-8c2944c0a2da@catalyst.net.nz> Message-ID: <1535716693-sup-556@lrrr.local> Excerpts from Adrian Turjak's message of 2018-08-31 17:51:15 +1200: > Adjutant should be good to go. I don't believe there are any > blockers (unless I've missed some).
I've proposed the needed patches:

+-----------------------------------------------+--------------------------------+-------------------------------------+---------------+
| Subject | Repo | URL | Branch |
+-----------------------------------------------+--------------------------------+-------------------------------------+---------------+
| remove job settings for adjutant repositories | openstack-infra/project-config | https://review.openstack.org/598620 | master |
| import zuul job settings from project-config | openstack/adjutant | https://review.openstack.org/598615 | master |
| add python 3.6 unit test job | openstack/adjutant | https://review.openstack.org/598616 | master |
| import zuul job settings from project-config | openstack/adjutant | https://review.openstack.org/598617 | stable/pike |
| import zuul job settings from project-config | openstack/adjutant | https://review.openstack.org/598618 | stable/queens |
| import zuul job settings from project-config | openstack/adjutant | https://review.openstack.org/598619 | stable/rocky |
+-----------------------------------------------+--------------------------------+-------------------------------------+---------------+

From nmagnezi at redhat.com Fri Aug 31 12:01:22 2018 From: nmagnezi at redhat.com (Nir Magnezi) Date: Fri, 31 Aug 2018 15:01:22 +0300 Subject: [openstack-dev] [octavia] Proposing Carlos Goncalves (cgoncalves) as an Octavia core reviewer In-Reply-To: References: Message-ID: Carlos made a significant impact with quality reviews and code contributions. I think he would make a great addition to the core team. +1 From me. /Nir On Fri, Aug 31, 2018 at 6:29 AM Jacky Hu wrote: > +1 > Definitely a good contributor for the octavia community. > > Sent from my iPhone > > > On Aug 31, 2018, at 11:24 AM, Michael Johnson wrote: > > > > Hello Octavia community, > > > > I would like to propose Carlos Goncalves as a core reviewer on the > > Octavia project.
> > > > Carlos has provided numerous enhancements to the Octavia project, > > including setting up the grenade gate for Octavia upgrade testing. > > > > Over the last few releases he has also been providing quality reviews, > > in line with the other core reviewers [1]. I feel that Carlos would > > make an excellent addition to the Octavia core reviewer team. > > > > Existing Octavia core reviewers, please reply to this email with your > > support or concerns with adding Carlos to the core team. > > > > Michael > > > > [1] http://stackalytics.com/report/contribution/octavia-group/90 > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Fri Aug 31 12:02:23 2018 From: zigo at debian.org (Thomas Goirand) Date: Fri, 31 Aug 2018 14:02:23 +0200 Subject: [openstack-dev] [Openstack-operators] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830213341.yuxyen2elx2c3is4@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> Message-ID: On 08/30/2018 11:33 PM, Jeremy Stanley wrote: > On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote: > [...] >> I really don't want this. I'm happy with things being sorted in >> multiple lists, even though I'm subscribed to multiples.
> > I understand where you're coming from. I'm coming from the time when OpenStack had a list on launchpad where everything was mixed. We did the split because it was really annoying to have everything mixed. > I was accustomed to communities where developers had one mailing > list, users had another, and whenever a user asked a question on the > developer mailing list they were told to go away and bother the user > mailing list instead (not even a good, old-fashioned "RTFM" for > their trouble). I don't think that's what we are doing. Usually, when someone makes the mistake, we do reply to him/her, at the same time pointing to the correct list. > You're probably intimately familiar with at least > one of these communities. ;) I know what you have in mind! Indeed, in that list, it happens that some people are a bit harsh to users. Hopefully, the folks in OpenStack devel aren't like this. > As the years went by, it's become apparent to me that this is > actually an antisocial behavior pattern In the OpenStack lists, every day, some developers take the time to answer users. So I don't see what there is to fix. > I believe OpenStack actually wants users to see the > development work which is underway, come to understand it, and > become part of that process. Users are very much welcome in our -dev list. I don't think there's a problem here. > Requiring them to have their > conversations elsewhere sends the opposite message. In many places and on many occasions, we've sent the correct message. On 08/30/2018 11:45 PM, Jimmy McArthur wrote: > IMO this is easily solved by tagging. If emails are properly tagged > (which they typically are), most email clients will properly sort on > rules and you can just auto-delete if you're 100% not interested in a > particular topic. This typically works for folks used to sending tags. It doesn't for newcomers, which is what you see with newbies coming to ask questions.
Cheers, Thomas Goirand (zigo) From doug at doughellmann.com Fri Aug 31 12:14:24 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 31 Aug 2018 08:14:24 -0400 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: <20180831060900.3erg5vd6ghzs7xmr@sileht.net> References: <1535671314-sup-5525@lrrr.local> <20180831060900.3erg5vd6ghzs7xmr@sileht.net> Message-ID: <1535717644-sup-5917@lrrr.local> Excerpts from Mehdi Abaakouk's message of 2018-08-31 08:09:00 +0200: > Telemetry is ready

Here you go:

+------------------------------------------------+------------------------------------+-------------------------------------+---------------+
| Subject | Repo | URL | Branch |
+------------------------------------------------+------------------------------------+-------------------------------------+---------------+
| remove job settings for Telemetry repositories | openstack-infra/project-config | https://review.openstack.org/598671 | master |
| import zuul job settings from project-config | openstack/aodh | https://review.openstack.org/598628 | master |
| switch documentation job to new PTI | openstack/aodh | https://review.openstack.org/598629 | master |
| add python 3.6 unit test job | openstack/aodh | https://review.openstack.org/598630 | master |
| import zuul job settings from project-config | openstack/aodh | https://review.openstack.org/598648 | stable/ocata |
| import zuul job settings from project-config | openstack/aodh | https://review.openstack.org/598653 | stable/pike |
| import zuul job settings from project-config | openstack/aodh | https://review.openstack.org/598659 | stable/queens |
| import zuul job settings from project-config | openstack/aodh | https://review.openstack.org/598665 | stable/rocky |
| import zuul job settings from project-config | openstack/ceilometer | https://review.openstack.org/598631 | master |
| switch documentation job to new PTI | openstack/ceilometer | https://review.openstack.org/598632 | master |
| add python 3.6 unit test job | openstack/ceilometer | https://review.openstack.org/598633 | master |
| import zuul job settings from project-config | openstack/ceilometer | https://review.openstack.org/598649 | stable/ocata |
| import zuul job settings from project-config | openstack/ceilometer | https://review.openstack.org/598654 | stable/pike |
| import zuul job settings from project-config | openstack/ceilometer | https://review.openstack.org/598660 | stable/queens |
| import zuul job settings from project-config | openstack/ceilometer | https://review.openstack.org/598666 | stable/rocky |
| import zuul job settings from project-config | openstack/ceilometermiddleware | https://review.openstack.org/598634 | master |
| switch documentation job to new PTI | openstack/ceilometermiddleware | https://review.openstack.org/598635 | master |
| add python 3.6 unit test job | openstack/ceilometermiddleware | https://review.openstack.org/598636 | master |
| import zuul job settings from project-config | openstack/ceilometermiddleware | https://review.openstack.org/598650 | stable/ocata |
| import zuul job settings from project-config | openstack/ceilometermiddleware | https://review.openstack.org/598655 | stable/pike |
| import zuul job settings from project-config | openstack/ceilometermiddleware | https://review.openstack.org/598661 | stable/queens |
| import zuul job settings from project-config | openstack/ceilometermiddleware | https://review.openstack.org/598667 | stable/rocky |
| convert py35 jobs to py3 | openstack/panko | https://review.openstack.org/575831 | master |
| import zuul job settings from project-config | openstack/panko | https://review.openstack.org/598637 | master |
| switch documentation job to new PTI | openstack/panko | https://review.openstack.org/598638 | master |
| add python 3.6 unit test job | openstack/panko | https://review.openstack.org/598639 | master |
| import zuul job settings from project-config | openstack/panko | https://review.openstack.org/598651 | stable/ocata |
| import zuul job settings from project-config | openstack/panko | https://review.openstack.org/598656 | stable/pike |
| import zuul job settings from project-config | openstack/panko | https://review.openstack.org/598662 | stable/queens |
| import zuul job settings from project-config | openstack/panko | https://review.openstack.org/598668 | stable/rocky |
| import zuul job settings from project-config | openstack/python-aodhclient | https://review.openstack.org/598640 | master |
| switch documentation job to new PTI | openstack/python-aodhclient | https://review.openstack.org/598641 | master |
| add python 3.6 unit test job | openstack/python-aodhclient | https://review.openstack.org/598642 | master |
| import zuul job settings from project-config | openstack/python-aodhclient | https://review.openstack.org/598652 | stable/ocata |
| import zuul job settings from project-config | openstack/python-aodhclient | https://review.openstack.org/598657 | stable/pike |
| import zuul job settings from project-config | openstack/python-aodhclient | https://review.openstack.org/598663 | stable/queens |
| import zuul job settings from project-config | openstack/python-aodhclient | https://review.openstack.org/598669 | stable/rocky |
| import zuul job settings from project-config | openstack/python-pankoclient | https://review.openstack.org/598643 | master |
| switch documentation job to new PTI | openstack/python-pankoclient | https://review.openstack.org/598644 | master |
| add python 3.6 unit test job | openstack/python-pankoclient | https://review.openstack.org/598645 | master |
| import zuul job settings from project-config | openstack/python-pankoclient | https://review.openstack.org/598658 | stable/pike |
| import zuul job settings from project-config | openstack/python-pankoclient | https://review.openstack.org/598664 | stable/queens |
| import zuul job settings from project-config | openstack/python-pankoclient | https://review.openstack.org/598670 | stable/rocky |
| import zuul job settings from project-config | openstack/telemetry-specs | https://review.openstack.org/598646 | master |
| import zuul job settings from project-config | openstack/telemetry-tempest-plugin | https://review.openstack.org/598647 | master |
+------------------------------------------------+------------------------------------+-------------------------------------+---------------+

From dougal at redhat.com Fri Aug 31 12:17:00 2018 From: dougal at redhat.com (Dougal Matthews) Date: Fri, 31 Aug 2018 13:17:00 +0100 Subject: [openstack-dev] [tripleo] quickstart for humans In-Reply-To: References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> Message-ID: On 31 August 2018 at 12:11, Steven Hardy wrote: > On Thu, Aug 30, 2018 at 3:28 PM, Honza Pokorny wrote: > > Hello! > > > > Over the last few months, it seems that tripleo-quickstart has evolved > > into a CI tool. It's primarily used by computers, and not humans. > > tripleo-quickstart is a helpful set of ansible playbooks, and a > > collection of feature sets. However, it's become less useful for > > setting up development environments by humans. For example, devmode.sh > > was recently deprecated without a user-friendly replacement. Moreover, > > during some informal irc conversations in #oooq, some developers even > > mentioned the plan to merge tripleo-quickstart and tripleo-ci. > > I was recently directed to the reproducer-quickstart.sh script that's > written in the logs directory for all oooq CI jobs - does that help as > a replacement for the previous devmode interface? > > Not that familiar with it myself but it seems to target many of the > use-cases you mention e.g uniform reproducer for issues, potentially > quicker way to replicate CI results? > It is very good for that.
However, the problem I have with reproducer scripts is that they are tied to the CI output. If I am working on a patch, the only way I know to get a reproducer is to submit the patch and then wait for CI to finish and then run the script again myself. It would be very useful if there was a tool where I could run a specific CI job, with a gerrit patch included (or even a local change would be more amazing!). Perhaps even a reproducer script generator would do the job. > > Steve > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From honza at redhat.com Fri Aug 31 12:39:14 2018 From: honza at redhat.com (Honza Pokorny) Date: Fri, 31 Aug 2018 14:39:14 +0200 Subject: [openstack-dev] [tripleo] quickstart for humans In-Reply-To: References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> Message-ID: <20180831123914.ro2tccwqynugrofa@localhost.localdomain> On 2018-08-31 13:17, Dougal Matthews wrote: > On 31 August 2018 at 12:11, Steven Hardy wrote: > > > On Thu, Aug 30, 2018 at 3:28 PM, Honza Pokorny wrote: > > > Hello! > > > > > > Over the last few months, it seems that tripleo-quickstart has evolved > > > into a CI tool. It's primarily used by computers, and not humans. > > > tripleo-quickstart is a helpful set of ansible playbooks, and a > > > collection of feature sets. However, it's become less useful for > > > setting up development environments by humans. For example, devmode.sh > > > was recently deprecated without a user-friendly replacement. Moreover, > > > during some informal irc conversations in #oooq, some developers even > > > mentioned the plan to merge tripleo-quickstart and tripleo-ci.
> > > > I was recently directed to the reproducer-quickstart.sh script that's > > written in the logs directory for all oooq CI jobs - does that help as > > a replacement for the previous devmode interface? > > > > Not that familiar with it myself but it seems to target many of the > > use-cases you mention e.g uniform reproducer for issues, potentially > > quicker way to replicate CI results? > > > > It is very good for that. However, the problem I have with reproducer > scripts is that they are tied to the CI output. If I am working on a patch, > the only way I know to get a reproducer is to submit the patch and then > wait for CI to finish and then run the script again myself. > > It would be very useful if there was a tool where I could run a specific > CI job, with a gerrit patch included (or even a local change would be more > amazing!). Perhaps even a reproducer script generator would do the job. > Yes, the reproducer seems to work quite well. It's just not very user-friendly. The issue is that you need to find the right CI job, go to the logs, download the file, etc., instead of simply cloning a repo, installing the tool, and running it when needed. As such, we already have a couple of patches to introduce a reproducer script generator. One by me, and one by Gabriele.
https://review.openstack.org/#/c/586843/ https://review.openstack.org/#/c/548005/ > > > > > > > Steve > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Fri Aug 31 13:04:48 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 31 Aug 2018 09:04:48 -0400 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: <5a0ea1129dc9ecaf64f52668255ea4b6@objectif-libre.com> References: <1535671314-sup-5525@lrrr.local> <5a0ea1129dc9ecaf64f52668255ea4b6@objectif-libre.com> Message-ID: <1535720627-sup-1016@lrrr.local> Excerpts from Christophe Sauthier's message of 2018-08-31 11:20:33 +0200: > We are ready to start on the cloudkitty team!
>
> Christophe

Here are the patches:

+-------------------------------------------------+-------------------------------------+-------------------------------------+---------------+
| Subject                                         | Repo                                | URL                                 | Branch        |
+-------------------------------------------------+-------------------------------------+-------------------------------------+---------------+
| remove job settings for cloudkitty repositories | openstack-infra/project-config      | https://review.openstack.org/598929 | master        |
| import zuul job settings from project-config    | openstack/cloudkitty                | https://review.openstack.org/598884 | master        |
| switch documentation job to new PTI             | openstack/cloudkitty                | https://review.openstack.org/598885 | master        |
| add python 3.6 unit test job                    | openstack/cloudkitty                | https://review.openstack.org/598886 | master        |
| import zuul job settings from project-config    | openstack/cloudkitty                | https://review.openstack.org/598900 | stable/ocata  |
| import zuul job settings from project-config    | openstack/cloudkitty                | https://review.openstack.org/598906 | stable/pike   |
| import zuul job settings from project-config    | openstack/cloudkitty                | https://review.openstack.org/598912 | stable/queens |
| import zuul job settings from project-config    | openstack/cloudkitty                | https://review.openstack.org/598918 | stable/rocky  |
| import zuul job settings from project-config    | openstack/cloudkitty-dashboard      | https://review.openstack.org/598888 | master        |
| switch documentation job to new PTI             | openstack/cloudkitty-dashboard      | https://review.openstack.org/598889 | master        |
| add python 3.6 unit test job                    | openstack/cloudkitty-dashboard      | https://review.openstack.org/598890 | master        |
| import zuul job settings from project-config    | openstack/cloudkitty-dashboard      | https://review.openstack.org/598902 | stable/ocata  |
| import zuul job settings from project-config    | openstack/cloudkitty-dashboard      | https://review.openstack.org/598908 | stable/pike   |
| import zuul job settings from project-config    | openstack/cloudkitty-dashboard      | https://review.openstack.org/598914 | stable/queens |
| import zuul job settings from project-config    | openstack/cloudkitty-dashboard      | https://review.openstack.org/598920 | stable/rocky  |
| import zuul job settings from project-config    | openstack/cloudkitty-specs          | https://review.openstack.org/598893 | master        |
| import zuul job settings from project-config    | openstack/cloudkitty-tempest-plugin | https://review.openstack.org/598895 | master        |
| import zuul job settings from project-config    | openstack/python-cloudkittyclient   | https://review.openstack.org/598897 | master        |
| add python 3.6 unit test job                    | openstack/python-cloudkittyclient   | https://review.openstack.org/598898 | master        |
| import zuul job settings from project-config    | openstack/python-cloudkittyclient   | https://review.openstack.org/598904 | stable/ocata  |
| import zuul job settings from project-config    | openstack/python-cloudkittyclient   | https://review.openstack.org/598910 | stable/pike   |
| import zuul job settings from project-config    | openstack/python-cloudkittyclient   | https://review.openstack.org/598917 | stable/queens |
| import zuul job settings from project-config    | openstack/python-cloudkittyclient   | https://review.openstack.org/598923 | stable/rocky  |
+-------------------------------------------------+-------------------------------------+-------------------------------------+---------------+

From doug at doughellmann.com  Fri Aug 31 13:06:47 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 31 Aug 2018 09:06:47 -0400
Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon!
In-Reply-To: <664b12c4-4a98-b786-7353-d716271478cf@binero.se> References: <1535671314-sup-5525@lrrr.local> <14d4593e-57db-7f93-961c-585b9e5204f9@binero.se> <4c886ac1-58e8-8169-f4a4-66b88d0fc8c1@suse.com> <664b12c4-4a98-b786-7353-d716271478cf@binero.se> Message-ID: <1535720770-sup-9200@lrrr.local> Excerpts from Tobias Urdin's message of 2018-08-31 13:31:24 +0200: > Oh, that's bad. I will abandon. > It's a go for Puppet then. > > Best regards There are quite a few patches for all of the Puppet modules. +-------------------------------------------------------+----------------------------------------+-------------------------------------+---------------+ | Subject | Repo | URL | Branch | +-------------------------------------------------------+----------------------------------------+-------------------------------------+---------------+ | remove job settings for Puppet OpenStack repositories | openstack-infra/project-config | https://review.openstack.org/598709 | master | | import zuul job settings from project-config | openstack/puppet-aodh | https://review.openstack.org/598679 | master | | switch documentation job to new PTI | openstack/puppet-aodh | https://review.openstack.org/598680 | master | | import zuul job settings from project-config | openstack/puppet-aodh | https://review.openstack.org/598767 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-aodh | https://review.openstack.org/598801 | stable/pike | | import zuul job settings from project-config | openstack/puppet-aodh | https://review.openstack.org/598836 | stable/queens | | import zuul job settings from project-config | openstack/puppet-aodh | https://review.openstack.org/598876 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-barbican | https://review.openstack.org/598681 | master | | switch documentation job to new PTI | openstack/puppet-barbican | https://review.openstack.org/598682 | master | | import zuul job settings from 
project-config | openstack/puppet-barbican | https://review.openstack.org/598768 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-barbican | https://review.openstack.org/598802 | stable/pike | | import zuul job settings from project-config | openstack/puppet-barbican | https://review.openstack.org/598837 | stable/queens | | import zuul job settings from project-config | openstack/puppet-barbican | https://review.openstack.org/598877 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-ceilometer | https://review.openstack.org/598683 | master | | switch documentation job to new PTI | openstack/puppet-ceilometer | https://review.openstack.org/598684 | master | | import zuul job settings from project-config | openstack/puppet-ceilometer | https://review.openstack.org/598769 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-ceilometer | https://review.openstack.org/598803 | stable/pike | | import zuul job settings from project-config | openstack/puppet-ceilometer | https://review.openstack.org/598838 | stable/queens | | import zuul job settings from project-config | openstack/puppet-ceilometer | https://review.openstack.org/598878 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-ceph | https://review.openstack.org/598685 | master | | switch documentation job to new PTI | openstack/puppet-ceph | https://review.openstack.org/598686 | master | | import zuul job settings from project-config | openstack/puppet-cinder | https://review.openstack.org/598687 | master | | switch documentation job to new PTI | openstack/puppet-cinder | https://review.openstack.org/598688 | master | | import zuul job settings from project-config | openstack/puppet-cinder | https://review.openstack.org/598770 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-cinder | https://review.openstack.org/598804 | stable/pike | | import zuul job 
settings from project-config | openstack/puppet-cinder | https://review.openstack.org/598839 | stable/queens | | import zuul job settings from project-config | openstack/puppet-cinder | https://review.openstack.org/598879 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-cloudkitty | https://review.openstack.org/598689 | master | | switch documentation job to new PTI | openstack/puppet-cloudkitty | https://review.openstack.org/598690 | master | | import zuul job settings from project-config | openstack/puppet-cloudkitty | https://review.openstack.org/598840 | stable/queens | | import zuul job settings from project-config | openstack/puppet-cloudkitty | https://review.openstack.org/598880 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-congress | https://review.openstack.org/598691 | master | | switch documentation job to new PTI | openstack/puppet-congress | https://review.openstack.org/598692 | master | | import zuul job settings from project-config | openstack/puppet-congress | https://review.openstack.org/598771 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-congress | https://review.openstack.org/598805 | stable/pike | | import zuul job settings from project-config | openstack/puppet-congress | https://review.openstack.org/598841 | stable/queens | | import zuul job settings from project-config | openstack/puppet-congress | https://review.openstack.org/598881 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-designate | https://review.openstack.org/598693 | master | | switch documentation job to new PTI | openstack/puppet-designate | https://review.openstack.org/598694 | master | | import zuul job settings from project-config | openstack/puppet-designate | https://review.openstack.org/598772 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-designate | https://review.openstack.org/598806 | 
stable/pike | | import zuul job settings from project-config | openstack/puppet-designate | https://review.openstack.org/598842 | stable/queens | | import zuul job settings from project-config | openstack/puppet-designate | https://review.openstack.org/598882 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-ec2api | https://review.openstack.org/598695 | master | | switch documentation job to new PTI | openstack/puppet-ec2api | https://review.openstack.org/598696 | master | | import zuul job settings from project-config | openstack/puppet-ec2api | https://review.openstack.org/598773 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-ec2api | https://review.openstack.org/598807 | stable/pike | | import zuul job settings from project-config | openstack/puppet-ec2api | https://review.openstack.org/598843 | stable/queens | | import zuul job settings from project-config | openstack/puppet-ec2api | https://review.openstack.org/598883 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-freezer | https://review.openstack.org/598697 | master | | switch documentation job to new PTI | openstack/puppet-freezer | https://review.openstack.org/598698 | master | | import zuul job settings from project-config | openstack/puppet-freezer | https://review.openstack.org/598844 | stable/queens | | import zuul job settings from project-config | openstack/puppet-freezer | https://review.openstack.org/598887 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-glance | https://review.openstack.org/598699 | master | | switch documentation job to new PTI | openstack/puppet-glance | https://review.openstack.org/598700 | master | | import zuul job settings from project-config | openstack/puppet-glance | https://review.openstack.org/598774 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-glance | https://review.openstack.org/598808 | 
stable/pike | | import zuul job settings from project-config | openstack/puppet-glance | https://review.openstack.org/598845 | stable/queens | | import zuul job settings from project-config | openstack/puppet-glance | https://review.openstack.org/598891 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-glare | https://review.openstack.org/598701 | master | | switch documentation job to new PTI | openstack/puppet-glare | https://review.openstack.org/598702 | master | | import zuul job settings from project-config | openstack/puppet-glare | https://review.openstack.org/598846 | stable/queens | | import zuul job settings from project-config | openstack/puppet-glare | https://review.openstack.org/598892 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-gnocchi | https://review.openstack.org/598703 | master | | switch documentation job to new PTI | openstack/puppet-gnocchi | https://review.openstack.org/598704 | master | | import zuul job settings from project-config | openstack/puppet-gnocchi | https://review.openstack.org/598775 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-gnocchi | https://review.openstack.org/598809 | stable/pike | | import zuul job settings from project-config | openstack/puppet-gnocchi | https://review.openstack.org/598847 | stable/queens | | import zuul job settings from project-config | openstack/puppet-gnocchi | https://review.openstack.org/598894 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-heat | https://review.openstack.org/598705 | master | | switch documentation job to new PTI | openstack/puppet-heat | https://review.openstack.org/598706 | master | | import zuul job settings from project-config | openstack/puppet-heat | https://review.openstack.org/598776 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-heat | https://review.openstack.org/598810 | stable/pike | | 
import zuul job settings from project-config | openstack/puppet-heat | https://review.openstack.org/598848 | stable/queens | | import zuul job settings from project-config | openstack/puppet-heat | https://review.openstack.org/598896 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-horizon | https://review.openstack.org/598707 | master | | switch documentation job to new PTI | openstack/puppet-horizon | https://review.openstack.org/598708 | master | | import zuul job settings from project-config | openstack/puppet-horizon | https://review.openstack.org/598777 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-horizon | https://review.openstack.org/598811 | stable/pike | | import zuul job settings from project-config | openstack/puppet-horizon | https://review.openstack.org/598849 | stable/queens | | import zuul job settings from project-config | openstack/puppet-horizon | https://review.openstack.org/598899 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-ironic | https://review.openstack.org/598710 | master | | switch documentation job to new PTI | openstack/puppet-ironic | https://review.openstack.org/598711 | master | | import zuul job settings from project-config | openstack/puppet-ironic | https://review.openstack.org/598778 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-ironic | https://review.openstack.org/598812 | stable/pike | | import zuul job settings from project-config | openstack/puppet-ironic | https://review.openstack.org/598850 | stable/queens | | import zuul job settings from project-config | openstack/puppet-ironic | https://review.openstack.org/598901 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-keystone | https://review.openstack.org/598712 | master | | switch documentation job to new PTI | openstack/puppet-keystone | https://review.openstack.org/598713 | master | | import 
zuul job settings from project-config | openstack/puppet-keystone | https://review.openstack.org/598779 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-keystone | https://review.openstack.org/598813 | stable/pike | | import zuul job settings from project-config | openstack/puppet-keystone | https://review.openstack.org/598851 | stable/queens | | import zuul job settings from project-config | openstack/puppet-keystone | https://review.openstack.org/598903 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-magnum | https://review.openstack.org/598714 | master | | switch documentation job to new PTI | openstack/puppet-magnum | https://review.openstack.org/598715 | master | | import zuul job settings from project-config | openstack/puppet-magnum | https://review.openstack.org/598780 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-magnum | https://review.openstack.org/598814 | stable/pike | | import zuul job settings from project-config | openstack/puppet-magnum | https://review.openstack.org/598852 | stable/queens | | import zuul job settings from project-config | openstack/puppet-magnum | https://review.openstack.org/598905 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-manila | https://review.openstack.org/598716 | master | | switch documentation job to new PTI | openstack/puppet-manila | https://review.openstack.org/598717 | master | | import zuul job settings from project-config | openstack/puppet-manila | https://review.openstack.org/598781 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-manila | https://review.openstack.org/598815 | stable/pike | | import zuul job settings from project-config | openstack/puppet-manila | https://review.openstack.org/598853 | stable/queens | | import zuul job settings from project-config | openstack/puppet-manila | https://review.openstack.org/598907 | 
stable/rocky | | import zuul job settings from project-config | openstack/puppet-mistral | https://review.openstack.org/598718 | master | | switch documentation job to new PTI | openstack/puppet-mistral | https://review.openstack.org/598719 | master | | import zuul job settings from project-config | openstack/puppet-mistral | https://review.openstack.org/598782 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-mistral | https://review.openstack.org/598816 | stable/pike | | import zuul job settings from project-config | openstack/puppet-mistral | https://review.openstack.org/598854 | stable/queens | | import zuul job settings from project-config | openstack/puppet-mistral | https://review.openstack.org/598909 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-murano | https://review.openstack.org/598720 | master | | switch documentation job to new PTI | openstack/puppet-murano | https://review.openstack.org/598721 | master | | import zuul job settings from project-config | openstack/puppet-murano | https://review.openstack.org/598783 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-murano | https://review.openstack.org/598817 | stable/pike | | import zuul job settings from project-config | openstack/puppet-murano | https://review.openstack.org/598855 | stable/queens | | import zuul job settings from project-config | openstack/puppet-murano | https://review.openstack.org/598911 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-neutron | https://review.openstack.org/598722 | master | | switch documentation job to new PTI | openstack/puppet-neutron | https://review.openstack.org/598723 | master | | import zuul job settings from project-config | openstack/puppet-neutron | https://review.openstack.org/598784 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-neutron | https://review.openstack.org/598818 | 
stable/pike | | import zuul job settings from project-config | openstack/puppet-neutron | https://review.openstack.org/598856 | stable/queens | | import zuul job settings from project-config | openstack/puppet-neutron | https://review.openstack.org/598913 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-nova | https://review.openstack.org/598724 | master | | switch documentation job to new PTI | openstack/puppet-nova | https://review.openstack.org/598725 | master | | import zuul job settings from project-config | openstack/puppet-nova | https://review.openstack.org/598785 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-nova | https://review.openstack.org/598819 | stable/pike | | import zuul job settings from project-config | openstack/puppet-nova | https://review.openstack.org/598857 | stable/queens | | import zuul job settings from project-config | openstack/puppet-nova | https://review.openstack.org/598915 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-octavia | https://review.openstack.org/598726 | master | | switch documentation job to new PTI | openstack/puppet-octavia | https://review.openstack.org/598727 | master | | import zuul job settings from project-config | openstack/puppet-octavia | https://review.openstack.org/598786 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-octavia | https://review.openstack.org/598820 | stable/pike | | import zuul job settings from project-config | openstack/puppet-octavia | https://review.openstack.org/598858 | stable/queens | | import zuul job settings from project-config | openstack/puppet-octavia | https://review.openstack.org/598916 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-openstack-guide | https://review.openstack.org/598730 | master | | switch documentation job to new PTI | openstack/puppet-openstack-guide | 
https://review.openstack.org/598731 | master | | import zuul job settings from project-config | openstack/puppet-openstack-specs | https://review.openstack.org/598736 | master | | import zuul job settings from project-config | openstack/puppet-openstack_extras | https://review.openstack.org/598728 | master | | switch documentation job to new PTI | openstack/puppet-openstack_extras | https://review.openstack.org/598729 | master | | import zuul job settings from project-config | openstack/puppet-openstack_extras | https://review.openstack.org/598787 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-openstack_extras | https://review.openstack.org/598821 | stable/pike | | import zuul job settings from project-config | openstack/puppet-openstack_extras | https://review.openstack.org/598859 | stable/queens | | import zuul job settings from project-config | openstack/puppet-openstack_extras | https://review.openstack.org/598919 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-openstack_spec_helper | https://review.openstack.org/598734 | master | | switch documentation job to new PTI | openstack/puppet-openstack_spec_helper | https://review.openstack.org/598735 | master | | import zuul job settings from project-config | openstack/puppet-openstack_spec_helper | https://review.openstack.org/598789 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-openstack_spec_helper | https://review.openstack.org/598823 | stable/pike | | import zuul job settings from project-config | openstack/puppet-openstack_spec_helper | https://review.openstack.org/598861 | stable/queens | | import zuul job settings from project-config | openstack/puppet-openstack_spec_helper | https://review.openstack.org/598922 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-openstacklib | https://review.openstack.org/598732 | master | | switch documentation job to new PTI | 
openstack/puppet-openstacklib | https://review.openstack.org/598733 | master | | import zuul job settings from project-config | openstack/puppet-openstacklib | https://review.openstack.org/598788 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-openstacklib | https://review.openstack.org/598822 | stable/pike | | import zuul job settings from project-config | openstack/puppet-openstacklib | https://review.openstack.org/598860 | stable/queens | | import zuul job settings from project-config | openstack/puppet-openstacklib | https://review.openstack.org/598921 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-oslo | https://review.openstack.org/598737 | master | | switch documentation job to new PTI | openstack/puppet-oslo | https://review.openstack.org/598738 | master | | import zuul job settings from project-config | openstack/puppet-oslo | https://review.openstack.org/598790 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-oslo | https://review.openstack.org/598824 | stable/pike | | import zuul job settings from project-config | openstack/puppet-oslo | https://review.openstack.org/598862 | stable/queens | | import zuul job settings from project-config | openstack/puppet-oslo | https://review.openstack.org/598924 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-ovn | https://review.openstack.org/598739 | master | | switch documentation job to new PTI | openstack/puppet-ovn | https://review.openstack.org/598740 | master | | import zuul job settings from project-config | openstack/puppet-ovn | https://review.openstack.org/598791 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-ovn | https://review.openstack.org/598825 | stable/pike | | import zuul job settings from project-config | openstack/puppet-ovn | https://review.openstack.org/598863 | stable/queens | | import zuul job settings from 
project-config | openstack/puppet-ovn | https://review.openstack.org/598925 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-panko | https://review.openstack.org/598741 | master | | switch documentation job to new PTI | openstack/puppet-panko | https://review.openstack.org/598742 | master | | import zuul job settings from project-config | openstack/puppet-panko | https://review.openstack.org/598792 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-panko | https://review.openstack.org/598826 | stable/pike | | import zuul job settings from project-config | openstack/puppet-panko | https://review.openstack.org/598864 | stable/queens | | import zuul job settings from project-config | openstack/puppet-panko | https://review.openstack.org/598926 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-qdr | https://review.openstack.org/598743 | master | | switch documentation job to new PTI | openstack/puppet-qdr | https://review.openstack.org/598744 | master | | import zuul job settings from project-config | openstack/puppet-qdr | https://review.openstack.org/598865 | stable/queens | | import zuul job settings from project-config | openstack/puppet-qdr | https://review.openstack.org/598927 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-rally | https://review.openstack.org/598745 | master | | switch documentation job to new PTI | openstack/puppet-rally | https://review.openstack.org/598746 | master | | import zuul job settings from project-config | openstack/puppet-rally | https://review.openstack.org/598866 | stable/queens | | import zuul job settings from project-config | openstack/puppet-rally | https://review.openstack.org/598928 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-sahara | https://review.openstack.org/598747 | master | | switch documentation job to new PTI | openstack/puppet-sahara | 
https://review.openstack.org/598748 | master | | import zuul job settings from project-config | openstack/puppet-sahara | https://review.openstack.org/598793 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-sahara | https://review.openstack.org/598827 | stable/pike | | import zuul job settings from project-config | openstack/puppet-sahara | https://review.openstack.org/598867 | stable/queens | | import zuul job settings from project-config | openstack/puppet-sahara | https://review.openstack.org/598930 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-senlin | https://review.openstack.org/598749 | master | | switch documentation job to new PTI | openstack/puppet-senlin | https://review.openstack.org/598750 | master | | import zuul job settings from project-config | openstack/puppet-swift | https://review.openstack.org/598751 | master | | switch documentation job to new PTI | openstack/puppet-swift | https://review.openstack.org/598752 | master | | import zuul job settings from project-config | openstack/puppet-swift | https://review.openstack.org/598794 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-swift | https://review.openstack.org/598828 | stable/pike | | import zuul job settings from project-config | openstack/puppet-swift | https://review.openstack.org/598868 | stable/queens | | import zuul job settings from project-config | openstack/puppet-swift | https://review.openstack.org/598931 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-tacker | https://review.openstack.org/598753 | master | | switch documentation job to new PTI | openstack/puppet-tacker | https://review.openstack.org/598754 | master | | import zuul job settings from project-config | openstack/puppet-tacker | https://review.openstack.org/598795 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-tacker | 
https://review.openstack.org/598829 | stable/pike | | import zuul job settings from project-config | openstack/puppet-tacker | https://review.openstack.org/598869 | stable/queens | | import zuul job settings from project-config | openstack/puppet-tacker | https://review.openstack.org/598932 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-tempest | https://review.openstack.org/598755 | master | | switch documentation job to new PTI | openstack/puppet-tempest | https://review.openstack.org/598756 | master | | import zuul job settings from project-config | openstack/puppet-tempest | https://review.openstack.org/598796 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-tempest | https://review.openstack.org/598830 | stable/pike | | import zuul job settings from project-config | openstack/puppet-tempest | https://review.openstack.org/598870 | stable/queens | | import zuul job settings from project-config | openstack/puppet-tempest | https://review.openstack.org/598933 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-trove | https://review.openstack.org/598757 | master | | switch documentation job to new PTI | openstack/puppet-trove | https://review.openstack.org/598758 | master | | import zuul job settings from project-config | openstack/puppet-trove | https://review.openstack.org/598797 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-trove | https://review.openstack.org/598831 | stable/pike | | import zuul job settings from project-config | openstack/puppet-trove | https://review.openstack.org/598871 | stable/queens | | import zuul job settings from project-config | openstack/puppet-trove | https://review.openstack.org/598934 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-vitrage | https://review.openstack.org/598759 | master | | switch documentation job to new PTI | openstack/puppet-vitrage | 
https://review.openstack.org/598760 | master | | import zuul job settings from project-config | openstack/puppet-vitrage | https://review.openstack.org/598832 | stable/pike | | import zuul job settings from project-config | openstack/puppet-vitrage | https://review.openstack.org/598872 | stable/queens | | import zuul job settings from project-config | openstack/puppet-vitrage | https://review.openstack.org/598935 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-vswitch | https://review.openstack.org/598761 | master | | switch documentation job to new PTI | openstack/puppet-vswitch | https://review.openstack.org/598762 | master | | import zuul job settings from project-config | openstack/puppet-vswitch | https://review.openstack.org/598798 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-vswitch | https://review.openstack.org/598833 | stable/pike | | import zuul job settings from project-config | openstack/puppet-vswitch | https://review.openstack.org/598873 | stable/queens | | import zuul job settings from project-config | openstack/puppet-vswitch | https://review.openstack.org/598936 | stable/rocky | | import zuul job settings from project-config | openstack/puppet-watcher | https://review.openstack.org/598763 | master | | switch documentation job to new PTI | openstack/puppet-watcher | https://review.openstack.org/598764 | master | | import zuul job settings from project-config | openstack/puppet-watcher | https://review.openstack.org/598799 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-watcher | https://review.openstack.org/598834 | stable/pike | | import zuul job settings from project-config | openstack/puppet-watcher | https://review.openstack.org/598874 | stable/queens | | import zuul job settings from project-config | openstack/puppet-watcher | https://review.openstack.org/598937 | stable/rocky | | import zuul job settings from project-config | 
openstack/puppet-zaqar | https://review.openstack.org/598765 | master | | switch documentation job to new PTI | openstack/puppet-zaqar | https://review.openstack.org/598766 | master | | import zuul job settings from project-config | openstack/puppet-zaqar | https://review.openstack.org/598800 | stable/ocata | | import zuul job settings from project-config | openstack/puppet-zaqar | https://review.openstack.org/598835 | stable/pike | | import zuul job settings from project-config | openstack/puppet-zaqar | https://review.openstack.org/598875 | stable/queens | | import zuul job settings from project-config | openstack/puppet-zaqar | https://review.openstack.org/598938 | stable/rocky | +-------------------------------------------------------+----------------------------------------+-------------------------------------+---------------+ From jaosorior at redhat.com Fri Aug 31 13:08:24 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Fri, 31 Aug 2018 16:08:24 +0300 Subject: [openstack-dev] [Tripleo] fluentd logging status In-Reply-To: References: Message-ID: Logging is a topic that I think should get more love on the TripleO side. On 08/24/2018 12:17 PM, Juan Badia Payno wrote: > Recently, I did a little test regarding fluentd logging on the gates > master[1], queens[2], pike [3]. I don't like the status of it, I'm > still working on them, but basically there are quite a lot of > misconfigured logs and some services that they are not configured at all. > > I think we need to put some effort on the logging. The purpose of this > email is to point out that we need to do a little effort on the task. > > First of all, I think we need to enable fluentd on all the scenarios, > as it is on the tests [1][2][3] commented on the beginning of the > email. Once everything is ok and some automatic test regarding logging > is done they can be disabled. Wes, do you have an opinion about this? I think it would be a good idea to avoid these types of regressions. 
> > I'd love not to create a new bug for every misconfigured/unconfigured > service, but if requested to grab more attention on it, I will open it. One bug to fix all this is fine, but we do need a public place to track all the work that needs to be done. Lets reference that place on the bug. Could be Trello or an etherpad, or whatever you want, it's up to you. > > The plan I have in mind is something like: >  * Make an initial picture of what the fluentd/log status is (from > pike upwards). >  * Fix all misconfigured services. (designate,...) >  * Add the non-configured services. (manila,...) >  * Add an automated check to find a possible > unconfigured/misconfigured problem. > > Any comments, doubts or questions are welcome > > Cheers, > Juan > > [1] https://review.openstack.org/594836 > [2] https://review.openstack.org/594838 > [3] https://review.openstack.org/594840 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From luka.peschke at objectif-libre.com Fri Aug 31 13:16:03 2018 From: luka.peschke at objectif-libre.com (Luka Peschke) Date: Fri, 31 Aug 2018 15:16:03 +0200 Subject: [openstack-dev] =?utf-8?b?W2dvYWxzXVtweXRob24zXVthZGp1dGFudF1b?= =?utf-8?b?YmFyYmljYW5dW2NoZWZdW2NpbmRlcl1bY2xvdWRraXR0eV1baTE4bl1baW5m?= =?utf-8?b?cmFdW2xvY2ldW25vdmFdW2NoYXJtc11bcnBtXVtwdXBwZXRdW3FhXVt0ZWxl?= =?utf-8?q?metry=5D=5Btrove=5D_join_the_bandwagon!?= In-Reply-To: <1535720627-sup-1016@lrrr.local> References: <1535671314-sup-5525@lrrr.local> <5a0ea1129dc9ecaf64f52668255ea4b6@objectif-libre.com> <1535720627-sup-1016@lrrr.local> Message-ID: Thank you for this! For small projects like cloudkitty it is really helpful when matters like this one are handled by persons who are external to the project. Regards, Luka Peschke Le 2018-08-31 15:04, Doug Hellmann a écrit : > Excerpts from Christophe Sauthier's message of 2018-08-31 11:20:33 > +0200: > >> We are ready to start on the cloudkitty's team ! >> >> Christophe > > Here are the patches: > > +-------------------------------------------------+-------------------------------------+-------------------------------------+---------------+ > | Subject | Repo > | URL | Branch > | > +-------------------------------------------------+-------------------------------------+-------------------------------------+---------------+ > | remove job settings for cloudkitty repositories | > openstack-infra/project-config | > https://review.openstack.org/598929 | master | > | import zuul job settings from project-config | > openstack/cloudkitty | > https://review.openstack.org/598884 | master | > | switch documentation job to new PTI | > openstack/cloudkitty | > https://review.openstack.org/598885 | master | > | add python 3.6 unit test job | > openstack/cloudkitty | > https://review.openstack.org/598886 | master | > | import zuul job settings from project-config | > openstack/cloudkitty | > https://review.openstack.org/598900 | stable/ocata | > | import zuul job 
settings from project-config | > openstack/cloudkitty | > https://review.openstack.org/598906 | stable/pike | > | import zuul job settings from project-config | > openstack/cloudkitty | > https://review.openstack.org/598912 | stable/queens | > | import zuul job settings from project-config | > openstack/cloudkitty | > https://review.openstack.org/598918 | stable/rocky | > | import zuul job settings from project-config | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598888 | master | > | switch documentation job to new PTI | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598889 | master | > | add python 3.6 unit test job | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598890 | master | > | import zuul job settings from project-config | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598902 | stable/ocata | > | import zuul job settings from project-config | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598908 | stable/pike | > | import zuul job settings from project-config | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598914 | stable/queens | > | import zuul job settings from project-config | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598920 | stable/rocky | > | import zuul job settings from project-config | > openstack/cloudkitty-specs | > https://review.openstack.org/598893 | master | > | import zuul job settings from project-config | > openstack/cloudkitty-tempest-plugin | > https://review.openstack.org/598895 | master | > | import zuul job settings from project-config | > openstack/python-cloudkittyclient | > https://review.openstack.org/598897 | master | > | add python 3.6 unit test job | > openstack/python-cloudkittyclient | > https://review.openstack.org/598898 | master | > | import zuul job settings from project-config | > openstack/python-cloudkittyclient | > https://review.openstack.org/598904 | stable/ocata | > | 
import zuul job settings from project-config | > openstack/python-cloudkittyclient | > https://review.openstack.org/598910 | stable/pike | > | import zuul job settings from project-config | > openstack/python-cloudkittyclient | > https://review.openstack.org/598917 | stable/queens | > | import zuul job settings from project-config | > openstack/python-cloudkittyclient | > https://review.openstack.org/598923 | stable/rocky | > +-------------------------------------------------+-------------------------------------+-------------------------------------+---------------+ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Fri Aug 31 13:23:22 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 31 Aug 2018 08:23:22 -0500 Subject: [openstack-dev] OpenStack Rocky is officially released Message-ID: <20180831132322.GA23563@sm-workstation> The following was sent out yesterday to the openstack-announce mailing list. Thank you to everyone involved in the Rocky development cycle. Truly a lot has happened since the snowpocalypse. On to Stein (and hopefully a less chaotic PTG)! ------ Hello OpenStack community, I'm excited to announce the final releases for the components of OpenStack Rocky, which conclude the Rocky development cycle. You will find a complete list of all components, their latest versions, and links to individual project release notes documents listed on the new release site. https://releases.openstack.org/rocky/ Congratulations to all of the teams who have contributed to this release! Our next production cycle, Stein, has already started. We will meet in Denver, Colorado, USA September 10-14 at the Project Team Gathering to plan the work for the upcoming cycle. I hope to see you there! 
Thanks, Sean McGinnis and the whole Release Management team From dirk at dmllr.de Fri Aug 31 13:26:10 2018 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Fri, 31 Aug 2018 15:26:10 +0200 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: <1535671314-sup-5525@lrrr.local> References: <1535671314-sup-5525@lrrr.local> Message-ID: On Fri, Aug 31, 2018 at 01:28, Doug Hellmann wrote: Hi Doug, > | Packaging-rpm | 4 repos | We're ready - please send the patches. Greetings, Dirk From smalleni at redhat.com Fri Aug 31 14:02:04 2018 From: smalleni at redhat.com (Sai Sindhur Malleni) Date: Fri, 31 Aug 2018 10:02:04 -0400 Subject: [openstack-dev] [Rally] Deployment check fails Message-ID: Hey all, rally deployment check fails saying bad admin credentials, but I'm able to use the admin tenant to perform openstack operations like creating VMs, etc. rally deployment check fails without giving much information.

(.rally-venv) [stack at undercloud browbeat]$ rally deployment check
--------------------------------------------------------------------------------
Platform openstack:
--------------------------------------------------------------------------------

Error while checking admin credentials:
AuthenticationFailed: Bad admin creds:
{
    "auth_url": "https://10.0.0.5:13000//v3",
    "domain_name": null,
    "endpoint_type": null,
    "https_cacert": "",
    "https_insecure": false,
    "password": "***",
    "profiler_conn_str": null,
    "profiler_hmac_key": null,
    "project_domain_name": "Default",
    "region_name": "",
    "tenant_name": "admin",
    "user_domain_name": "Default",
    "username": "admin"
}

I'm not sure what the reason is, given I can source the adminrc and run openstack commands normally. 
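When the CLI works but `rally deployment check` fails, a common culprit is a setting the shell environment carries that the stored deployment spec does not (or vice versa) — for an https endpoint, the CA certificate above all. A minimal, editorial sketch of such a sanity check over `export` lines; none of this is rally code, and the rc content below is abridged from this thread:

```python
# Illustrative only: parse `export NAME=value` lines from an
# overcloudrc-style file and flag TLS-related gaps. The checks are this
# editor's assumptions, not rally behaviour.

def parse_rc(text):
    """Collect NAME=value pairs from `export NAME=value` lines."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("export ") and "=" in line:
            name, _, value = line[len("export "):].partition("=")
            env[name] = value.strip('"')
    return env

def tls_warnings(env):
    """Things likely to bite a client that snapshots these credentials."""
    warnings = []
    auth_url = env.get("OS_AUTH_URL", "")
    if auth_url.startswith("https://") and not env.get("OS_CACERT"):
        warnings.append("https auth URL but no OS_CACERT: verification "
                        "relies on the system trust store")
    if "//" in auth_url.split("://", 1)[-1]:
        warnings.append("auth URL path has a doubled slash (e.g. '//v3')")
    return warnings

RC_ABRIDGED = """
export OS_USERNAME=admin
export OS_AUTH_URL=https://10.0.0.5:13000//v3
export OS_PROJECT_NAME=admin
"""

for w in tls_warnings(parse_rc(RC_ABRIDGED)):
    print("warning:", w)
```

Run against the rc quoted in this thread, a check like this would flag the absent OS_CACERT and the doubled slash in the auth URL — details worth ruling out first, not a confirmed diagnosis.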
Here are the contents of the rc file:

for key in $( set | awk '{FS="="} /^OS_/ {print $1}' ); do unset $key ; done
export OS_NO_CACHE=True
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export no_proxy=,10.0.0.5,192.168.24.7
export OS_USER_DOMAIN_NAME=Default
export OS_VOLUME_API_VERSION=3
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=https://10.0.0.5:13000//v3
export NOVA_VERSION=1.1
export OS_IMAGE_API_VERSION=2
export OS_PASSWORD=kYbMNEdPwGfCBUrwDH4rdxZyJ
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_NAME=admin
export OS_AUTH_TYPE=password
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"

# Add OS_CLOUDNAME to PS1
if [ -z "${CLOUDPROMPT_ENABLED:-}" ]; then
    export PS1=${PS1:-""}
    export PS1=\${OS_CLOUDNAME:+"(\$OS_CLOUDNAME)"}\ $PS1
    export CLOUDPROMPT_ENABLED=1
fi

Please let me know if there is a way to know more about what the issue is.
--
Sai Sindhur Malleni

Software Engineer
Red Hat Inc.
314 Littleton Road
Westford MA, USA

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From prometheanfire at gentoo.org Fri Aug 31 14:13:00 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 31 Aug 2018 09:13:00 -0500 Subject: [openstack-dev] [kolla][tripleo][oslo][all] Bumping eventlet to 0.24.1 In-Reply-To: References: <20180823145013.vzt46kgd7d7lkmkj@gentoo.org> <20180831015246.z4zvjp3lkb2yegis@gentoo.org> <20180831031004.nou6m2y3dfcoxadk@gentoo.org> Message-ID: <20180831141300.744p3h4ficssnskb@gentoo.org> On 18-08-31 07:44:44, Jim Rollenhagen wrote: > On Thu, Aug 30, 2018 at 11:10 PM, Matthew Thode > wrote: > > > On 18-08-30 20:52:46, Matthew Thode wrote: > > > On 18-08-23 09:50:13, Matthew Thode wrote: > > > > This is your warning, if you have concerns please comment in > > > > https://review.openstack.org/589382 . cross tests pass, so that's a > > > > good sign... atm this is only for stein. 
> > > > > > > > > > Consider yourself on notice, https://review.openstack.org/589382 is > > > planned to be merged on monday. > > > > > > > A bit more of follow up since that was so dry. There are some projects > > that have not branched (mainly cycle-trailing and plugins). > > > > There has historically been some breakage with each eventlet update, > > this one is not expected to be much different unfortunately. Currently > > there are known issues with oslo.service but they look solvable. A list > > of all projects using eventlet is attached. > > > > The full list of non-branched projects will be at the bottom of this > > message, but the projects that I think should be more careful are the > > following. > > > > kolla kolla-ansible heat-agents heat-dashboard tripleo-ipsec > > > > the rest of the repos seem to be plugins, which I'm personally less > > concerned about, but should still be branched (preferably sooner rather > > than later). > > > > Tempest plugins, like tempest, are not meant to be branched: > http://lists.openstack.org/pipermail/openstack-dev/2018-August/133211.html > Yep, it's only on the list because the command used to generate it can't handle projects that should not branch. 
> > > > > > ansible-role-container-registry > > ansible-role-redhat-subscription > > ansible-role-tripleo-modify-image > > barbican-tempest-plugin > > blazar-tempest-plugin > > cinder-tempest-plugin > > cloudkitty-tempest-plugin > > congress-tempest-plugin > > designate-tempest-plugin > > devstack-plugin-amqp1 > > devstack-plugin-kafka > > ec2api-tempest-plugin > > heat-agents > > heat-dashboard > > heat-tempest-plugin > > ironic-tempest-plugin > > keystone-tempest-plugin > > kolla-ansible > > kolla > > kuryr-tempest-plugin > > magnum-tempest-plugin > > manila-tempest-plugin > > mistral-tempest-plugin > > monasca-kibana-plugin > > monasca-tempest-plugin > > murano-tempest-plugin > > networking-generic-switch-tempest-plugin > > neutron-tempest-plugin > > octavia-tempest-plugin > > oswin-tempest-plugin > > patrole > > release-test > > sahara-tests > > senlin-tempest-plugin > > solum-tempest-plugin > > telemetry-tempest-plugin > > tempest-tripleo-ui > > tempest > > tripleo-ipsec > > trove-tempest-plugin > > vitrage-tempest-plugin > > watcher-tempest-plugin > > zaqar-tempest-plugin > > zun-tempest-plugin > > > > -- > > Matthew Thode (prometheanfire) > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From andr.kurilin at gmail.com Fri Aug 31 14:13:18 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Fri, 31 Aug 2018 17:13:18 +0300 Subject: [openstack-dev] [Rally] Deployment check fails In-Reply-To: References: Message-ID: Hi! Sorry for such a not user-friendly error message. Several days ago, we merged a fix[*] for it and I'm planning to release a new version of rally-openstack package soon. While the fix is not released, please share the result of the 2 commands: - rally env show --only-spec # replace password and etc - rally env info [*] - https://github.com/openstack/rally-openstack/commit/5821f8b8714c532778f2eef142a5fdeb3a1e6f05 пт, 31 авг. 2018 г. в 17:02, Sai Sindhur Malleni : > Hey all, > > rally deployment check fails saying, bad admin credentials but I'm able to > use the admin tenant to performance openstack operations like creating VMs > extra. rally deployment check fails without giving much information. > (.rally-venv) [stack at undercloud browbeat]$ rally deployment check > > -------------------------------------------------------------------------------- > Platform openstack: > > -------------------------------------------------------------------------------- > > Error while checking admin credentials: > AuthenticationFailed: Bad admin creds: > { > "auth_url": "https://10.0.0.5:13000//v3", > "domain_name": null, > "endpoint_type": null, > "https_cacert": "", > "https_insecure": false, > "password": "***", > "profiler_conn_str": null, > "profiler_hmac_key": null, > "project_domain_name": "Default", > "region_name": "", > "tenant_name": "admin", > "user_domain_name": "Default", > "username": "admin" > } > > I'm not sure that the reason is give I can source the adminrc and run > openstack commands normally. 
> > Here are the contents of the rc file: > for key in $( set | awk '{FS="="} /^OS_/ {print $1}' ); do unset $key ; > done > export OS_NO_CACHE=True > export COMPUTE_API_VERSION=1.1 > export OS_USERNAME=admin > export no_proxy=,10.0.0.5,192.168.24.7 > export OS_USER_DOMAIN_NAME=Default > export OS_VOLUME_API_VERSION=3 > export OS_CLOUDNAME=overcloud > export OS_AUTH_URL=https://10.0.0.5:13000//v3 > export NOVA_VERSION=1.1 > export OS_IMAGE_API_VERSION=2 > export OS_PASSWORD=kYbMNEdPwGfCBUrwDH4rdxZyJ > export OS_PROJECT_DOMAIN_NAME=Default > export OS_IDENTITY_API_VERSION=3 > export OS_PROJECT_NAME=admin > export OS_AUTH_TYPE=password > export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext > object is not available" > > # Add OS_CLOUDNAME to PS1 > if [ -z "${CLOUDPROMPT_ENABLED:-}" ]; then > export PS1=${PS1:-""} > export PS1=\${OS_CLOUDNAME:+"(\$OS_CLOUDNAME)"}\ $PS1 > export CLOUDPROMPT_ENABLED=1 > fi > > > Please let me know if there is a way to know more about what the issue is. > -- > Sai Sindhur Malleni > > Software Engineer > Red Hat Inc. > 314 Littleton Road > Westford MA, USA > > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Aug 31 14:15:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 31 Aug 2018 10:15:29 -0400 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: References: <1535671314-sup-5525@lrrr.local> Message-ID: <1535724883-sup-6344@lrrr.local> Excerpts from Dirk Müller's message of 2018-08-31 15:26:10 +0200: > Am Fr., 31. Aug. 2018 um 01:28 Uhr schrieb Doug Hellmann > : > > Hi Doug, > > > | Packaging-rpm | 4 repos | > > We're ready - please send the patches. 
> > Greetings, > Dirk > Here you go: +----------------------------------------------------+--------------------------------+-------------------------------------+--------+ | Subject | Repo | URL | Branch | +----------------------------------------------------+--------------------------------+-------------------------------------+--------+ | remove job settings for Packaging-rpm repositories | openstack-infra/project-config | https://review.openstack.org/598974 | master | | import zuul job settings from project-config | openstack/pymod2pkg | https://review.openstack.org/598967 | master | | switch documentation job to new PTI | openstack/pymod2pkg | https://review.openstack.org/598968 | master | | add python 3.6 unit test job | openstack/pymod2pkg | https://review.openstack.org/598969 | master | | import zuul job settings from project-config | openstack/renderspec | https://review.openstack.org/598970 | master | | switch documentation job to new PTI | openstack/renderspec | https://review.openstack.org/598971 | master | | add python 3.6 unit test job | openstack/renderspec | https://review.openstack.org/598972 | master | | import zuul job settings from project-config | openstack/rpm-packaging-tools | https://review.openstack.org/598973 | master | +----------------------------------------------------+--------------------------------+-------------------------------------+--------+ From doug at doughellmann.com Fri Aug 31 14:31:36 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 31 Aug 2018 10:31:36 -0400 Subject: [openstack-dev] [kolla][tripleo][oslo][all] Bumping eventlet to 0.24.1 In-Reply-To: <20180831141300.744p3h4ficssnskb@gentoo.org> References: <20180823145013.vzt46kgd7d7lkmkj@gentoo.org> <20180831015246.z4zvjp3lkb2yegis@gentoo.org> <20180831031004.nou6m2y3dfcoxadk@gentoo.org> <20180831141300.744p3h4ficssnskb@gentoo.org> Message-ID: <1535725877-sup-6780@lrrr.local> Excerpts from Matthew Thode's message of 2018-08-31 09:13:00 -0500: > On 18-08-31 
07:44:44, Jim Rollenhagen wrote: > > > > Tempest plugins, like tempest, are not meant to be branched: > > http://lists.openstack.org/pipermail/openstack-dev/2018-August/133211.html > > > > Yep, it's only on the list because the command used to generate it can't > handle projects that should not branch. https://review.openstack.org/598981 should fix that From sbauza at redhat.com Fri Aug 31 14:41:01 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Fri, 31 Aug 2018 16:41:01 +0200 Subject: [openstack-dev] [nova][placement] Freezing placement for extraction In-Reply-To: References: Message-ID: On Thu, Aug 30, 2018 at 6:34 PM, Eric Fried wrote: > Greetings. > > The captains of placement extraction have declared readiness to begin > the process of seeding the new repository (once [1] has finished > merging). As such, we are freezing development in the affected portions > of the openstack/nova repository until this process is completed. We're > relying on our active placement reviewers noticing any patches that > touch these "affected portions" and, if that reviewer is not a nova > core, bringing them to the attention of one, so we can put a -2 on it. > > Apologies for having missed the large and wide discussions about placement's future in the past weeks. I was off, so I just saw the consensus yesterday evening my time. Now that disclaimer is done, can I ask the reasoning for calling the freeze now rather than waiting for either Stein-2 or Stein-3? My main concern is that the reshaper series is still being reviewed for Nova. Some other changes using Placement (like drivers using nested Resource Providers and the like) are also not yet implemented (or even uploaded) and I'm a bit afraid of us discovering yet another cross-service problem (say, with two distinct computes running different versions) that would make the fix harder than just fixing directly. 
> Once the extraction is complete [2], any such frozen patches should be > abandoned and reproposed to the openstack/placement repository. > > Since there will be an interval during which placement code will exist > in both repositories, but before $world has cut over to using > openstack/placement, it is possible that some crucial fix will still > need to be merged into the openstack/nova side. In this case, the fix > must be proposed to *both* repositories, and the justification for its > existence in openstack/nova made clear. > > We surely can do such things for small fixes that don't impact a lot of files. What I'm a bit afraid of is any large change that would get some merge conflicts. Sure, we can find ways to fix it too, but again, why shouldn't we just wait for Stein-2 ? -Sylvain (yet again apologies for the late opinion). For more details on the technical aspects of the extraction process, > refer to this thread [3]. > > For information on the procedural/governance process we will be > following, see [4]. > > Please let us know if you have any questions or concerns, either via > this thread or in #openstack-placement. > > [1] https://review.openstack.org/#/c/597220/ > [2] meaning that we've merged the initial glut of patches necessary to > repath everything and get tests passing > [3] > http://lists.openstack.org/pipermail/openstack-dev/2018-August/133781.html > [4] https://docs.openstack.org/infra/manual/creators.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Fri Aug 31 15:42:51 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 31 Aug 2018 10:42:51 -0500 Subject: [openstack-dev] [nova][placement] Freezing placement for extraction In-Reply-To: References: Message-ID: <80678229-1947-c3d3-d3d2-ca83ac2b87db@gmail.com> On 8/31/2018 9:41 AM, Sylvain Bauza wrote: > Apologies for having missed the large and wide discussions about > placement's future in the past weeks. I was off, so I just saw the > consensus yesterday evening my time. > Now that disclaimer is done, can I ask the reasoning for calling the > freeze now rather than waiting for either Stein-2 or Stein-3? If we're going to do the extraction in Stein, which we said we'd do in Dublin, we need to start that as early as possible to iron out any deployment bugs in the switch. We can't wait until the 2nd or 3rd milestone; it would be too risky. > > My main concern is that the reshaper series is still being reviewed for > Nova. Some other changes using Placement (like drivers using nested > Resource Providers and the like) are also not yet implemented (or even > uploaded) and I'm a bit afraid of us discovering yet another > cross-service problem (say, with two distinct computes running > different versions) that would make the fix harder than just fixing > directly. The Placement-side changes for reshaper are merged. The framework code for compute is either merged or on its way. The outstanding changes for reshaper are: 1. libvirt and xenapi driver changes to use it - remember me emailing you and the xen team about this last week? We couldn't hold up the existing patches forever. 2. The offline migration stuff for FFU (I believe Dan was signed up for that). 3. Docs and whatever other polishing is needed. So there is nothing related to reshaper that should block Placement extraction happening at this point. 
Sure we could hit some very weird bug once the driver implementation happens - that's a risk we talked about last week when we removed the -2 from the Placement API change, but again, without people around to work on the driver changes, we can't just sit and hold forever because that pushes out the extraction which makes delivering a smooth upgrade for the extraction during stein riskier, so pick your poison. -- Thanks, Matt From openstack at fried.cc Fri Aug 31 15:45:14 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 31 Aug 2018 10:45:14 -0500 Subject: [openstack-dev] Nominating Chris Dent for placement-core Message-ID: The openstack/placement project [1] and its core team [2] have been established in gerrit. I hereby nominate Chris Dent for membership in the placement-core team. He has been instrumental in the design, implementation, and stewardship of the placement API since its inception and has shown clear and consistent leadership. As we are effectively bootstrapping placement-core at this time, it would seem appropriate to consider +1/-1 responses from heavy placement contributors as well as existing cores (currently nova-core). 
[1] https://review.openstack.org/#/admin/projects/openstack/placement
[2] https://review.openstack.org/#/admin/groups/1936,members

From smalleni at redhat.com  Fri Aug 31 15:47:12 2018
From: smalleni at redhat.com (Sai Sindhur Malleni)
Date: Fri, 31 Aug 2018 11:47:12 -0400
Subject: [openstack-dev] [Rally] Deployment check fails
In-Reply-To: 
References: 
Message-ID: 

Hey Andrey,

Here is the output of what you asked for

(overcloud) (.rally-venv) [stack at undercloud browbeat]$ rally env show --only-spec
{
    "existing at openstack": {
        "endpoint": null,
        "region_name": "",
        "https_insecure": false,
        "profiler_hmac_key": null,
        "admin": {
            "username": "admin",
            "project_name": "admin",
            "user_domain_name": "Default",
            "password": "kYbMNEdPwGfCBUrwDH4rdxZyJ",
            "project_domain_name": "Default"
        },
        "https_cacert": "",
        "endpoint_type": null,
        "auth_url": "https://10.0.0.5:13000//v3",
        "profiler_conn_str": null
    }
}

(overcloud) (.rally-venv) [stack at undercloud browbeat]$ rally env info
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr [-] Plugin existing at openstack.info() method is broken: AuthenticationFailed: Failed to authenticate to https://10.0.0.5:13000//v3 for user 'admin' in project 'admin': SSLError: SSL exception connecting to https://10.0.0.5:13000//v3: HTTPSConnectionPool(host='10.0.0.5', port=13000): Max retries exceeded with url: //v3 (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr Traceback (most recent call last):
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr   File "/home/stack/browbeat/.rally-venv/lib/python2.7/site-packages/rally/env/env_mgr.py", line 523, in get_info
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr     info = p.info()
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr   File "/home/stack/browbeat/.rally-venv/lib/python2.7/site-packages/rally_openstack/platforms/existing.py", line 183, in info
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr     for stype, name in osclients.Clients(active_user).services().items():
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr   File "/home/stack/browbeat/.rally-venv/lib/python2.7/site-packages/rally_openstack/osclients.py", line 860, in services
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr     available_services = self.keystone.service_catalog.get_endpoints()
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr   File "/home/stack/browbeat/.rally-venv/lib/python2.7/site-packages/rally_openstack/osclients.py", line 225, in service_catalog
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr     return self.auth_ref.service_catalog
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr   File "/home/stack/browbeat/.rally-venv/lib/python2.7/site-packages/rally_openstack/osclients.py", line 245, in auth_ref
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr     error=str(e))
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr AuthenticationFailed: Failed to authenticate to https://10.0.0.5:13000//v3 for user 'admin' in project 'admin': SSLError: SSL exception connecting to https://10.0.0.5:13000//v3: HTTPSConnectionPool(host='10.0.0.5', port=13000): Max retries exceeded with url: //v3 (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))
2018-08-31 15:46:25.993 440110 ERROR rally.env.env_mgr

Env `overcloud (87e6bdec-45c3-4194-9d7c-0451c0c030b5)'
+--------------------+------+---------------------------------------------------+
| platform           | info | error                                             |
+--------------------+------+---------------------------------------------------+
| existing at openstack | null | Plugin existing at openstack.info() method is broken |
+--------------------+------+---------------------------------------------------+

On Fri, Aug 31, 2018 at 10:13 AM Andrey Kurilin wrote:

> Hi!
>
> Sorry for such a not user-friendly error message.
> Several days ago, we merged a fix[*] for it and I'm planning to release
> a new version of the rally-openstack package soon.
> While the fix is not released, please share the result of these 2 commands:
>
> - rally env show --only-spec  # replace the password etc.
> - rally env info
>
> [*] - https://github.com/openstack/rally-openstack/commit/5821f8b8714c532778f2eef142a5fdeb3a1e6f05
>
> Fri, 31 Aug 2018 at 17:02, Sai Sindhur Malleni :
>
>> Hey all,
>>
>> rally deployment check fails saying bad admin credentials, but I'm able
>> to use the admin tenant to perform openstack operations like creating
>> VMs etc. rally deployment check fails without giving much information.
>>
>> (.rally-venv) [stack at undercloud browbeat]$ rally deployment check
>> --------------------------------------------------------------------------------
>> Platform openstack:
>> --------------------------------------------------------------------------------
>>
>> Error while checking admin credentials:
>> AuthenticationFailed: Bad admin creds:
>> {
>>     "auth_url": "https://10.0.0.5:13000//v3",
>>     "domain_name": null,
>>     "endpoint_type": null,
>>     "https_cacert": "",
>>     "https_insecure": false,
>>     "password": "***",
>>     "profiler_conn_str": null,
>>     "profiler_hmac_key": null,
>>     "project_domain_name": "Default",
>>     "region_name": "",
>>     "tenant_name": "admin",
>>     "user_domain_name": "Default",
>>     "username": "admin"
>> }
>>
>> I'm not sure what the reason is, given I can source the adminrc and run
>> openstack commands normally.
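[Editor's note] One detail worth flagging in the spec and creds shown above, separate from the certificate failure itself: the auth URL carries a doubled slash ("13000//v3"), which is why the traceback reports "Max retries exceeded with url: //v3". The TLS handshake fails before the path matters here, but the stray slash is worth normalizing. A small standard-library illustration (URL value taken verbatim from the output above):

```python
from urllib.parse import urlsplit

# Auth URL exactly as it appears in the rally spec above.
auth_url = "https://10.0.0.5:13000//v3"

# The doubled slash is carried verbatim into the request path, which is
# why the traceback reports "Max retries exceeded with url: //v3".
print(urlsplit(auth_url).path)          # //v3

# Normalized form; the handshake fails before the path matters here,
# but the cleaned URL avoids surprises elsewhere.
print(auth_url.replace("//v3", "/v3"))  # https://10.0.0.5:13000/v3
```

Since the spec also shows "https_insecure": false with an empty "https_cacert", the verify failure itself most likely means rally never receives the overcloud's CA bundle; pointing https_cacert at that CA file (or, for throwaway test runs only, setting https_insecure to true) is the usual fix.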
>>
>> Here are the contents of the rc file:
>>
>> for key in $( set | awk '{FS="="} /^OS_/ {print $1}' ); do unset $key ; done
>> export OS_NO_CACHE=True
>> export COMPUTE_API_VERSION=1.1
>> export OS_USERNAME=admin
>> export no_proxy=,10.0.0.5,192.168.24.7
>> export OS_USER_DOMAIN_NAME=Default
>> export OS_VOLUME_API_VERSION=3
>> export OS_CLOUDNAME=overcloud
>> export OS_AUTH_URL=https://10.0.0.5:13000//v3
>> export NOVA_VERSION=1.1
>> export OS_IMAGE_API_VERSION=2
>> export OS_PASSWORD=kYbMNEdPwGfCBUrwDH4rdxZyJ
>> export OS_PROJECT_DOMAIN_NAME=Default
>> export OS_IDENTITY_API_VERSION=3
>> export OS_PROJECT_NAME=admin
>> export OS_AUTH_TYPE=password
>> export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
>>
>> # Add OS_CLOUDNAME to PS1
>> if [ -z "${CLOUDPROMPT_ENABLED:-}" ]; then
>>     export PS1=${PS1:-""}
>>     export PS1=\${OS_CLOUDNAME:+"(\$OS_CLOUDNAME)"}\ $PS1
>>     export CLOUDPROMPT_ENABLED=1
>> fi
>>
>> Please let me know if there is a way to know more about what the issue is.
>> --
>> Sai Sindhur Malleni
>>
>> Software Engineer
>> Red Hat Inc.
>> 314 Littleton Road
>> Westford MA, USA
>
> --
> Best regards,
> Andrey Kurilin.

-- 
Sai Sindhur Malleni

Software Engineer
Red Hat Inc.
314 Littleton Road
Westford MA, USA
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From German.Eichberger at rackspace.com  Fri Aug 31 15:48:10 2018
From: German.Eichberger at rackspace.com (German Eichberger)
Date: Fri, 31 Aug 2018 15:48:10 +0000
Subject: [openstack-dev] [octavia] Proposing Carlos Goncalves (cgoncalves) as an Octavia core reviewer
In-Reply-To: 
References: 
Message-ID: <8016CEB8-4EF1-4E09-A315-4E9392281518@rackspace.com>

Concur. Carlos has been a great contributor!
+1

From: Nir Magnezi
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, August 31, 2018 at 5:02 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [octavia] Proposing Carlos Goncalves (cgoncalves) as an Octavia core reviewer

Carlos made a significant impact with quality reviews and code
contributions. I think he would make a great addition to the core team.

+1 From me.

/Nir

On Fri, Aug 31, 2018 at 6:29 AM Jacky Hu wrote:

+1 Definitely a good contributor for the octavia community.

Sent from my iPhone

> On Aug 31, 2018, at 11:24 AM, Michael Johnson wrote:
>
> Hello Octavia community,
>
> I would like to propose Carlos Goncalves as a core reviewer on the
> Octavia project.
>
> Carlos has provided numerous enhancements to the Octavia project,
> including setting up the grenade gate for Octavia upgrade testing.
>
> Over the last few releases he has also been providing quality reviews,
> in line with the other core reviewers [1]. I feel that Carlos would
> make an excellent addition to the Octavia core reviewer team.
>
> Existing Octavia core reviewers, please reply to this email with your
> support or concerns with adding Jacky to the core team.
>
> Michael
>
> [1] http://stackalytics.com/report/contribution/octavia-group/90
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From miguel at mlavalle.com  Fri Aug 31 15:55:04 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Fri, 31 Aug 2018 10:55:04 -0500
Subject: [openstack-dev] [octavia] Proposing Carlos Goncalves (cgoncalves) as an Octavia core reviewer
In-Reply-To: 
References: 
Message-ID: 

Well, I don't vote here but I still want to express my +1. I knew this
was going to happen sooner rather than later.

On Thu, Aug 30, 2018 at 10:24 PM, Michael Johnson wrote:

> Hello Octavia community,
>
> I would like to propose Carlos Goncalves as a core reviewer on the
> Octavia project.
>
> Carlos has provided numerous enhancements to the Octavia project,
> including setting up the grenade gate for Octavia upgrade testing.
>
> Over the last few releases he has also been providing quality reviews,
> in line with the other core reviewers [1]. I feel that Carlos would
> make an excellent addition to the Octavia core reviewer team.
>
> Existing Octavia core reviewers, please reply to this email with your
> support or concerns with adding Jacky to the core team.
>
> Michael
>
> [1] http://stackalytics.com/report/contribution/octavia-group/90
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From miguel at mlavalle.com  Fri Aug 31 15:58:26 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Fri, 31 Aug 2018 10:58:26 -0500
Subject: [openstack-dev] Nominating Chris Dent for placement-core
In-Reply-To: 
References: 
Message-ID: 

I don't get to vote here either, but I was one of the first users of the
Placement API (Neutron Routed Networks) and I always got great support
and guidance from cdent.
So +1 and well deserved On Fri, Aug 31, 2018 at 10:45 AM, Eric Fried wrote: > The openstack/placement project [1] and its core team [2] have been > established in gerrit. > > I hereby nominate Chris Dent for membership in the placement-core team. > He has been instrumental in the design, implementation, and stewardship > of the placement API since its inception and has shown clear and > consistent leadership. > > As we are effectively bootstrapping placement-core at this time, it > would seem appropriate to consider +1/-1 responses from heavy placement > contributors as well as existing cores (currently nova-core). > > [1] https://review.openstack.org/#/admin/projects/openstack/placement > [2] https://review.openstack.org/#/admin/groups/1936,members > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From samuel at cassi.ba Fri Aug 31 15:59:50 2018 From: samuel at cassi.ba (Samuel Cassiba) Date: Fri, 31 Aug 2018 08:59:50 -0700 Subject: [openstack-dev] [chef] fog-openstack 0.2.0 breakage Message-ID: Ohai! fog-openstack 0.2.0 was recently released, which had less than optimal effects on Chef OpenStack due to the client cookbook's lack of version pinning on the gem. The crucial change is that fog-openstack itself now determines Identity API versions internally, in preparation for a versionless Keystone endpoint. Chef OpenStack has carried code for Identity API determination for years, to facilitate migrating from Identity v2.0 to Identity v3. Unfortunately, those two methods became at odds with the release of fog-openstack 0.2. 
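[Editor's note] The missing guard Samuel describes above is a pessimistic version pin on the gem, i.e. one that allows 0.1.x point releases while excluding 0.2.0. The cookbook itself expresses this as a Ruby gem constraint; the sketch below (plain Python, purely illustrative) only demonstrates the semantics of a "~> 0.1.0"-style requirement:

```python
# Illustrative sketch of a pessimistic ("~> 0.1.0"-style) version
# constraint, i.e. ">= 0.1.0, < 0.2.0". The real pin lives in the Chef
# client cookbook as a Ruby gem constraint; this only shows the logic
# that keeps 0.2.0 out while allowing 0.1.x point releases.
def satisfies_pessimistic(version, pin):
    """True if version >= pin and only pin's last segment is allowed to grow."""
    v = [int(x) for x in version.split(".")]
    p = [int(x) for x in pin.split(".")]
    return v[: len(p) - 1] == p[:-1] and v >= p

for candidate in ("0.1.0", "0.1.6", "0.2.0"):
    verdict = "allowed" if satisfies_pessimistic(candidate, "0.1.0") else "excluded"
    print(candidate, verdict)
```

With the pin in place, a dependency resolver keeps installing the newest 0.1.x release and never picks up 0.2.0 until the constraint is relaxed.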
At the time of this writing, PR #421
(https://github.com/fog/fog-openstack/pull/421) has been merged, but
there is no new release on rubygems.org as of yet. That is likely to
happen Very Soon(tm).

On the home front, with the help of Roger Luethi and Christoph Albers,
we've introduced version constraints to the client cookbook to pin the
gem to 0.1.x. At present, we've merged constraints for master,
stable/queens and stable/pike.

The new release would have gone into ChefDK 3.2 had it not been brought
up sooner. Thank you to everyone who gave a heads-up!

Best,

scas

From dms at danplanet.com  Fri Aug 31 15:59:56 2018
From: dms at danplanet.com (Dan Smith)
Date: Fri, 31 Aug 2018 08:59:56 -0700
Subject: [openstack-dev] [nova][placement] Freezing placement for extraction
In-Reply-To: <80678229-1947-c3d3-d3d2-ca83ac2b87db@gmail.com> (Matt Riedemann's message of "Fri, 31 Aug 2018 10:42:51 -0500")
References: <80678229-1947-c3d3-d3d2-ca83ac2b87db@gmail.com>
Message-ID: 

> If we're going to do the extraction in Stein, which we said we'd do in
> Dublin, we need to start that as early as possible to iron out any
> deployment bugs in the switch. We can't wait until the 2nd or 3rd
> milestone, it would be too risky.

I agree that the current extraction plan is highly risky and that if
it's going to happen, we need plenty of time to clean up the mess.

I imagine what Sylvain is getting at here is that if we followed the
process of other splits like nova-volume, we'd be doing this
differently. In that case, we'd freeze late in the cycle when freezing
is appropriate anyway. We'd split out placement such that the
nova-integrated one and the separate one are equivalent, and do the
work to get it working on its own. In the next cycle new changes go to
the split placement only. Operators are able to upgrade to Stein
without deploying a new Stein service first, and can switch to the
split placement at their leisure, separate from the release upgrade
process.
To be honest, I'm not sure how we got to the point of considering it
acceptable to be splitting out a piece of nova in a single cycle such
that operators have to deploy a new thing in order to upgrade. But
alas, as has been said, this is politically more important than ...
everything else.

--Dan

From rasca at redhat.com  Fri Aug 31 16:03:38 2018
From: rasca at redhat.com (Raoul Scarazzini)
Date: Fri, 31 Aug 2018 18:03:38 +0200
Subject: [openstack-dev] [tripleo] quickstart for humans
In-Reply-To: <4cd2fafa-f644-1c1f-56e4-010d1360cf04@redhat.com>
References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> <4cd2fafa-f644-1c1f-56e4-010d1360cf04@redhat.com>
Message-ID: 

On 8/31/18 12:07 PM, Jiří Stránský wrote:
[...]
> * "for humans" definition differs significantly based on who you ask.
> E.g. my intention with [2] was to readily expose *more* knobs and tweaks
> and be more transparent with the underlying workings of Ansible, because
> i felt like quickstart.sh hides too much from me. In my opinion [2] is
> sufficiently "for humans", yet it does pretty much the opposite of what
> you're looking for.

Hey Jiri,
I think that "for humans" simply means you launch the command with just
one parameter (i.e. the virthost), and then you have something. Because
of this, I think it is just a matter of concentrating our efforts on
returning quickstart.sh to its original scope: you launch it with just
one parameter and have a working environment after a while (OK,
sometimes more than a while).
Since part of the recent discussion was about the hypothesis of
removing it, maybe we can think about making it useful again instead.
It is true that everyone's needs are different, but with a solid
starting point (the default) you can then customize for your own needs.
I'm for recycling what we have; the planet (and I) will enjoy it!
My 0,0000002 cents.
-- Raoul Scarazzini rasca at redhat.com From jaypipes at gmail.com Fri Aug 31 16:04:19 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 31 Aug 2018 12:04:19 -0400 Subject: [openstack-dev] [chef] fog-openstack 0.2.0 breakage In-Reply-To: References: Message-ID: <396e4a2e-08fb-417a-ff1a-5b0f072e377b@gmail.com> Thanks for notifying about this, Samuel. Our most modern deployment is actually currently blocked on this and I'm glad to see a resolution. Best, -jay On 08/31/2018 11:59 AM, Samuel Cassiba wrote: > Ohai! > > fog-openstack 0.2.0 was recently released, which had less than optimal > effects on Chef OpenStack due to the client cookbook's lack of version > pinning on the gem. > > The crucial change is that fog-openstack itself now determines > Identity API versions internally, in preparation for a versionless > Keystone endpoint. Chef OpenStack has carried code for Identity API > determination for years, to facilitate migrating from Identity v2.0 to > Identity v3. Unfortunately, those two methods became at odds with the > release of fog-openstack 0.2. > > At the time of this writing, PR #421 > (https://github.com/fog/fog-openstack/pull/421) has been merged, but > there is no new release on rubygems.org as of yet. That is likely to > happen Very Soon(tm). > > On the home front, with the help of Roger Luethi and Christoph Albers, > we've introduced version constraints to the client cookbook to pin the > gem to 0.1.x. At present, we've merged constraints for master, > stable/queens and stable/pike. > > The new release was primed to go into ChefDK 3.2 had it not been > brought up sooner. Thank you to everyone who gave a heads-up! 
> > Best,
> >
> > scas
> >
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From miguel at mlavalle.com  Fri Aug 31 16:11:28 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Fri, 31 Aug 2018 11:11:28 -0500
Subject: [openstack-dev] [neutron][nova] Small bandwidth demo on the PTG
In-Reply-To: <1535704165.17206.0@smtp.office365.com>
References: <1535619300.3600.5@smtp.office365.com> <4bb21c51-0092-70f3-a535-8fa59adae7ae@gmail.com> <1535704165.17206.0@smtp.office365.com>
Message-ID: 

Nova room is fine. See you'all there

On Fri, Aug 31, 2018 at 3:29 AM, Balázs Gibizer wrote:

>
> On Thu, Aug 30, 2018 at 8:13 PM, melanie witt wrote:
>
>> On Thu, 30 Aug 2018 12:43:06 -0500, Miguel Lavalle wrote:
>>
>>> Gibi, Bence,
>>>
>>> In fact, I added the demo explicitly to the Neutron PTG agenda from
>>> 1:30 to 2, to give it visibility
>>>
>>
>> I'm interested in seeing the demo too. Will the demo be shown at the
>> Neutron room or the Nova room? Historically, lunch has ended at 1:30, so
>> this will be during the same time as the Neutron/Nova cross project time.
>> Should we just co-locate together for the demo and the session? I expect
>> anyone watching the demo will want to participate in the Neutron/Nova
>> session as well. Either room is fine by me.
>>
>
> I assume that the nova - neutron cross project session will be in the nova
> room, so I propose to have the demo there as well to avoid unnecessarily
> moving people around. For us it is totally OK to start the demo at 1:30.
> Cheers,
> gibi
>
>
>> -melanie
>>
>> On Thu, Aug 30, 2018 at 3:55 AM, Balázs Gibizer <balazs.gibizer at ericsson.com> wrote:
>>
>>> Hi,
>>>
>>> Based on the Nova PTG planning etherpad [1] there is a need to talk
>>> about the current state of the bandwidth work [2][3]. Bence
>>> (rubasov) has already planned to show a small demo to Neutron folks
>>> about the current state of the implementation. So Bence and I are
>>> wondering about bringing that demo close to the nova - neutron cross
>>> project session. That session is currently planned to happen
>>> Thursday after lunch. So we are thinking about showing the demo right
>>> before that session starts. It would start 30 minutes before the
>>> nova - neutron cross project session.
>>>
>>> Are Nova folks also interested in seeing such a demo?
>>>
>>> If you are interested in seeing the demo please drop us a line or
>>> ping us in IRC so we know whom we should wait for.
>>>
>>> Cheers,
>>> gibi
>>>
>>> [1] https://etherpad.openstack.org/p/nova-ptg-stein
>>> [2] https://specs.openstack.org/openstack/neutron-specs/specs/rocky/minimum-bandwidth-allocation-placement-api.html
>>> [3] https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/bandwidth-resource-provider.html
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Fri Aug 31 16:17:26 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 31 Aug 2018 16:17:26 +0000
Subject: [openstack-dev] Mailman topic filtering (was: Bringing the community together...)
In-Reply-To: 
References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180830211257.oa6hxd4pningzqf4@yuggoth.org> <20180831000334.GR26778@thor.bakeyournoodle.com>
Message-ID: <20180831161726.wtjbzr6yvz2wgghv@yuggoth.org>

On 2018-08-31 09:35:55 +0100 (+0100), Stephen Finucane wrote:
[...]
> I've tinkered with mailman 3 before so I could probably take a shot at
> this over the next few week(end)s; however, I've no idea how this
> feature is supposed to work. Any chance an admin of the current list
> could send me a couple of screenshots of the feature in mailman 2 along
> with a brief description of the feature? Alternatively, maybe we could
> upload them to the wiki page Tony linked above or, better yet, to the
> technical details page for same:
>
> https://wiki.mailman.psf.io/DEV/Brief%20Technical%20Details

Looks like this should be
https://wiki.list.org/DEV/Brief%20Technical%20Details
instead, however reading through it doesn't really sound like the
topic filtering feature from MM2.
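[Editor's note] For readers unfamiliar with the MM2 feature under discussion: a list administrator defines named topics as regular expressions, Mailman 2 buckets each incoming message by matching those patterns against its Subject: and Keywords: headers, and subscribers then opt in or out of the named topics. A rough sketch of the matching step (simplified, not Mailman's actual code; the topic names and patterns are made up for illustration):

```python
import re

# Simplified sketch of MM2-style topic bucketing. A list admin defines
# named topics as regular expressions; each message is categorized by
# matching them against its Subject: and Keywords: headers.
TOPICS = {
    "neutron": re.compile(r"\[neutron\]", re.IGNORECASE),
    "nova": re.compile(r"\[nova\]", re.IGNORECASE),
}

def match_topics(headers):
    # Concatenate the headers MM2 consults, then collect every topic hit.
    text = " ".join(headers.get(h, "") for h in ("Subject", "Keywords"))
    return sorted(name for name, pat in TOPICS.items() if pat.search(text))

print(match_topics({"Subject": "[openstack-dev] [neutron][nova] Small bandwidth demo"}))
# ['neutron', 'nova']
```

A subscriber who selected only the "nova" topic would then receive the message above but not one tagged only "[neutron]"; real MM2 adds per-subscriber delivery filtering and an "uncategorized" fallback on top of this.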
The List Member Manual has a very brief description of the feature from the subscriber standpoint: http://www.list.org/mailman-member/node29.html The List Administration Manual unfortunately doesn't have any content for the feature, just a stubbed-out section heading: http://www.list.org/mailman-admin/node30.html Sending screenshots to the ML is a bit tough, but luckily MIT's listadmins have posted some so we don't need to: http://web.mit.edu/lists/mailman/topics.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From miguel at mlavalle.com Fri Aug 31 16:18:45 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 31 Aug 2018 11:18:45 -0500 Subject: [openstack-dev] [Neutron] Stepping down from Neutron core team In-Reply-To: <131487DD-0E85-40C1-BEF9-265FAC2DDF58@redhat.com> References: <131487DD-0E85-40C1-BEF9-265FAC2DDF58@redhat.com> Message-ID: Kuba, I made a last ditch awkward effort to convince you to stay a little longer unsuccessfully. I understand, though, that now you have other commitments. I know you will be successful in your new adventures, because you are really smart and hard working. And please remember that we will take you back with open arms at any moment in the future. Finally, I wish you good luck, because we all need a little bit of it Best regards Miguel On Fri, Aug 31, 2018 at 3:49 AM, Slawomir Kaplonski wrote: > It’s sad news. Thanks Kuba for all Your help You gave me when I was > newcomer in Neutron community. > Good luck in Your next projects :) > > > Wiadomość napisana przez Jakub Libosvar w dniu > 31.08.2018, o godz. 10:24: > > > > Hi all, > > > > as you have might already heard, I'm no longer involved in Neutron > > development due to some changes. Therefore I'm officially stepping down > > from the core team because I can't provide same quality reviews as I > > tried to do before. 
> > > > I'd like to thank you all for the opportunity I was given in the Neutron > > team, thank you for all I have learned over the years professionally, > > technically and personally. Tomorrow it's gonna be exactly 5 years since > > I started hacking Neutron and I must say I really enjoyed working with > > all Neutrinos here and I had privilege to meet most of you in person and > > that has an extreme value for me. Keep on being a great community! > > > > Thank you again! > > Kuba > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgoncalves at redhat.com Fri Aug 31 16:41:14 2018 From: cgoncalves at redhat.com (Carlos Goncalves) Date: Fri, 31 Aug 2018 18:41:14 +0200 Subject: [openstack-dev] [octavia] Proposing Carlos Goncalves (cgoncalves) as an Octavia core reviewer In-Reply-To: References: Message-ID: Ha! Gracias for the kind words, Miguel! :-) On Fri, Aug 31, 2018 at 5:55 PM, Miguel Lavalle wrote: > Well, I don't vote here but I stiil want to express my +1. I knew this was > going to happen sooner rather than later > > On Thu, Aug 30, 2018 at 10:24 PM, Michael Johnson > wrote: > >> Hello Octavia community, >> >> I would like to propose Carlos Goncalves as a core reviewer on the >> Octavia project. 
>> >> Carlos has provided numerous enhancements to the Octavia project, >> including setting up the grenade gate for Octavia upgrade testing. >> >> Over the last few releases he has also been providing quality reviews, >> in line with the other core reviewers [1]. I feel that Carlos would >> make an excellent addition to the Octavia core reviewer team. >> >> Existing Octavia core reviewers, please reply to this email with your >> support or concerns with adding Jacky to the core team. >> >> Michael >> >> [1] http://stackalytics.com/report/contribution/octavia-group/90 >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Aug 31 16:45:24 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 31 Aug 2018 16:45:24 +0000 Subject: [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> Message-ID: <20180831164524.mlksltzbzey6tdyo@yuggoth.org> On 2018-08-31 14:02:23 +0200 (+0200), Thomas Goirand wrote: [...] > I'm coming from the time when OpenStack had a list on launchpad > where everything was mixed. We did the split because it was really > annoying to have everything mixed. [...] 
These days (just running stats for this calendar year) we've been averaging 4 messages a day on the general openstack at lists.o.o ML, so if it's volume you're worried about most of it would be the current -operators and -dev ML discussions anyway (many of which are general questions from users already, because as you also pointed out we don't usually tell them to take their questions elsewhere any more). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Fri Aug 31 17:14:07 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 31 Aug 2018 12:14:07 -0500 Subject: [openstack-dev] [oslo] Bumping eventlet to 0.24.1 In-Reply-To: <20180823145013.vzt46kgd7d7lkmkj@gentoo.org> References: <20180823145013.vzt46kgd7d7lkmkj@gentoo.org> Message-ID: <7bdd48e5-a4b8-b884-c10d-89722b5ba06a@nemebean.com> Just a heads up that for oslo.service we're going to need https://review.openstack.org/#/c/598384 and https://review.openstack.org/#/c/599032/1 for eventlet 0.24.1 compatibility. There aren't any functional issues as far as I can tell, but some unit tests were broken by new behavior. On 08/23/2018 09:50 AM, Matthew Thode wrote: > This is your warning, if you have concerns please comment in > https://review.openstack.org/589382 . cross tests pass, so that's a > good sign... atm this is only for stein. 
> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From flux.adam at gmail.com Fri Aug 31 17:41:47 2018 From: flux.adam at gmail.com (Adam Harwell) Date: Sat, 1 Sep 2018 02:41:47 +0900 Subject: [openstack-dev] [octavia] Proposing Carlos Goncalves (cgoncalves) as an Octavia core reviewer In-Reply-To: References: Message-ID: +1 for sure! On Sat, Sep 1, 2018, 01:41 Carlos Goncalves wrote: > Ha! Gracias for the kind words, Miguel! :-) > > On Fri, Aug 31, 2018 at 5:55 PM, Miguel Lavalle > wrote: > >> Well, I don't vote here but I stiil want to express my +1. I knew this >> was going to happen sooner rather than later >> >> On Thu, Aug 30, 2018 at 10:24 PM, Michael Johnson >> wrote: >> >>> Hello Octavia community, >>> >>> I would like to propose Carlos Goncalves as a core reviewer on the >>> Octavia project. >>> >>> Carlos has provided numerous enhancements to the Octavia project, >>> including setting up the grenade gate for Octavia upgrade testing. >>> >>> Over the last few releases he has also been providing quality reviews, >>> in line with the other core reviewers [1]. I feel that Carlos would >>> make an excellent addition to the Octavia core reviewer team. >>> >>> Existing Octavia core reviewers, please reply to this email with your >>> support or concerns with adding Jacky to the core team. 
>>> >>> Michael >>> >>> [1] http://stackalytics.com/report/contribution/octavia-group/90 >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Aug 31 17:59:19 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 31 Aug 2018 10:59:19 -0700 Subject: [openstack-dev] [octavia] Proposing Carlos Goncalves (cgoncalves) as an Octavia core reviewer In-Reply-To: References: Message-ID: With a unanimous vote I would like to welcome Carlos to the Octavia Core team! Michael On Fri, Aug 31, 2018 at 10:42 AM Adam Harwell wrote: > > +1 for sure! > > On Sat, Sep 1, 2018, 01:41 Carlos Goncalves wrote: >> >> Ha! Gracias for the kind words, Miguel! :-) >> >> On Fri, Aug 31, 2018 at 5:55 PM, Miguel Lavalle wrote: >>> >>> Well, I don't vote here but I stiil want to express my +1. 
I knew this was going to happen sooner rather than later. >>> >>> On Thu, Aug 30, 2018 at 10:24 PM, Michael Johnson wrote: >>>> >>>> Hello Octavia community, >>>> >>>> I would like to propose Carlos Goncalves as a core reviewer on the >>>> Octavia project. >>>> >>>> Carlos has provided numerous enhancements to the Octavia project, >>>> including setting up the grenade gate for Octavia upgrade testing. >>>> >>>> Over the last few releases he has also been providing quality reviews, >>>> in line with the other core reviewers [1]. I feel that Carlos would >>>> make an excellent addition to the Octavia core reviewer team. >>>> >>>> Existing Octavia core reviewers, please reply to this email with your >>>> support or concerns with adding Carlos to the core team. >>>> >>>> Michael >>>> >>>> [1] http://stackalytics.com/report/contribution/octavia-group/90 >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cgoncalves at redhat.com Fri Aug 31 18:36:24 2018 From: cgoncalves at redhat.com (Carlos Goncalves) Date: Fri, 31 Aug 2018 20:36:24 +0200 Subject: [openstack-dev] [octavia] Proposing Carlos Goncalves (cgoncalves) as an Octavia core reviewer In-Reply-To: References: Message-ID: Thanks for the trust and opportunity, folks! On Fri, Aug 31, 2018 at 7:59 PM, Michael Johnson wrote: > With a unanimous vote I would like to welcome Carlos to the Octavia Core > team! > > Michael > > > On Fri, Aug 31, 2018 at 10:42 AM Adam Harwell wrote: > > > > +1 for sure! > > > > On Sat, Sep 1, 2018, 01:41 Carlos Goncalves > wrote: > >> > >> Ha! Gracias for the kind words, Miguel! :-) > >> > >> On Fri, Aug 31, 2018 at 5:55 PM, Miguel Lavalle > wrote: > >>> > >>> Well, I don't vote here but I still want to express my +1. I knew this > was going to happen sooner rather than later. > >>> > >>> On Thu, Aug 30, 2018 at 10:24 PM, Michael Johnson > wrote: > >>>> > >>>> Hello Octavia community, > >>>> > >>>> I would like to propose Carlos Goncalves as a core reviewer on the > >>>> Octavia project. > >>>> > >>>> Carlos has provided numerous enhancements to the Octavia project, > >>>> including setting up the grenade gate for Octavia upgrade testing. > >>>> > >>>> Over the last few releases he has also been providing quality reviews, > >>>> in line with the other core reviewers [1]. I feel that Carlos would > >>>> make an excellent addition to the Octavia core reviewer team. > >>>> > >>>> Existing Octavia core reviewers, please reply to this email with your > >>>> support or concerns with adding Carlos to the core team. 
> >>>> > >>>> Michael > >>>> > >>>> [1] http://stackalytics.com/report/contribution/octavia-group/90 > >>>> > >>>> ____________________________________________________________ > ______________ > >>>> OpenStack Development Mailing List (not for usage questions) > >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >>> > >>> > >>> ____________________________________________________________ > ______________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Fri Aug 31 18:57:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 31 Aug 2018 13:57:53 -0500 Subject: [openstack-dev] Nominating Chris Dent for placement-core In-Reply-To: References: Message-ID: <28425d36-35f7-4337-e42b-a176dc18ff1f@gmail.com> On 8/31/2018 10:45 AM, Eric Fried wrote: > The openstack/placement project [1] and its core team [2] have been > established in gerrit. > > I hereby nominate Chris Dent for membership in the placement-core team. > He has been instrumental in the design, implementation, and stewardship > of the placement API since its inception and has shown clear and > consistent leadership. > > As we are effectively bootstrapping placement-core at this time, it > would seem appropriate to consider +1/-1 responses from heavy placement > contributors as well as existing cores (currently nova-core). > > [1]https://review.openstack.org/#/admin/projects/openstack/placement > [2]https://review.openstack.org/#/admin/groups/1936,members +1 -- Thanks, Matt From jaypipes at gmail.com Fri Aug 31 19:03:50 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 31 Aug 2018 15:03:50 -0400 Subject: [openstack-dev] Nominating Chris Dent for placement-core In-Reply-To: References: Message-ID: <1225430f-a42c-fc4d-68ea-f9c4f37b45ae@gmail.com> On 08/31/2018 11:45 AM, Eric Fried wrote: > The openstack/placement project [1] and its core team [2] have been > established in gerrit. > > I hereby nominate Chris Dent for membership in the placement-core team. > He has been instrumental in the design, implementation, and stewardship > of the placement API since its inception and has shown clear and > consistent leadership. > > As we are effectively bootstrapping placement-core at this time, it > would seem appropriate to consider +1/-1 responses from heavy placement > contributors as well as existing cores (currently nova-core). 
> > [1] https://review.openstack.org/#/admin/projects/openstack/placement > [2] https://review.openstack.org/#/admin/groups/1936,members +1 From lbragstad at gmail.com Fri Aug 31 19:46:29 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 31 Aug 2018 14:46:29 -0500 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 27 August 2018 Message-ID: # Keystone Team Update - Week of 27 August 2018 ## News Welcome to Stein development! ## Release Status Well, Rocky went out the door. A friendly reminder to keep an eye out for bugs and things we should backport. ## PTG Planning The topics in the PTG etherpad have been worked into a schedule [0]. The TL;DR is that Monday is going to be mainly focused on large, cross-project initiatives (just like what we did in Dublin). Tuesday we are going to be discussing ways we can improve multi-region support (edge-related discussions) and federation. Wednesday is relatively free of topics, but we have a lot of hackathon ideas. This is a good time to iterate quickly on things we need to get done, clean things up, or share how something works with respect to keystone (e.g. Flask). Have an idea you want to propose for Wednesday's hackathon? Just add it to the schedule [0]. Thursday is going to be for keystone-specific topics. Friday we plan to cover any remaining topics and try to formalize everything into the roadmap or specifications repo *before* we leave Denver. If you have comments, questions, or concerns regarding the schedule, please let someone know and we'll get it addressed. [0] https://etherpad.openstack.org/p/keystone-stein-ptg ## Stein Roadmap Planning Harry and I are working through the Rocky roadmap [0] and preparing a new board for Stein. Most of this prep work should be done prior to the PTG so that we can finalize and make adjustments in person. If you want to be involved in this process just ask. 
Additionally, the Stein series has been created in launchpad, along with the usual blueprints [1][2]. Feel free to use accordingly for other blueprints and bugs. [0] https://trello.com/b/wmyzbFq5/keystone-rocky-roadmap [1] https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-stein [2] https://blueprints.launchpad.net/keystone/+spec/removed-as-of-stein ## Open Specs Search query: https://bit.ly/2Pi6dGj We landed a couple cleanup patches that re-propose the MFA receipts [0] and capability lists [1] specifications to Stein. Just a note to make sure we treat those as living documents by updating them regularly if details change as we work through the implementations. The JWT specification [2] also received a facelift and is much more specific than it was in the past. Please have a gander if you're interested, or just curious. If the details are still unclear, just let us know and we can get them proposed prior to PTG discussions in a couple weeks. [0] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/mfa-auth-receipt.html [1] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/capabilities-app-creds.html [2] https://review.openstack.org/#/c/541903/ ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 27 changes this week. We also got a good start on the python 3 community goal [0]. Note that there were some patches proposed for the community goal last week, but the author wasn't listed as a champion for the goal and the patches contained errors. We weren't able to reach the author and neither were the goal champions. That said, those patches have been abandoned and Doug reran the tooling to migrate our jobs. Just something to keep in mind if you're reviewing those patches. 
[0] https://governance.openstack.org/tc/goals/stein/python3-first.html ## Changes that need Attention Search query: https://bit.ly/2wv7QLK There are 61 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. We're making good progress on the Flask reviews [0], but more reviews are always welcome. [0] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bug/1776504 ## Bugs This week we opened 4 new bugs and closed 1. Bugs opened (4) - Bug #1789450 (keystone:Undecided) opened by Steven Relf https://bugs.launchpad.net/keystone/+bug/1789450 - Bug #1789849 (keystone:Undecided) opened by Jean- https://bugs.launchpad.net/keystone/+bug/1789849 - Bug #1790148 (keystone:Undecided) opened by FreudianSlip https://bugs.launchpad.net/keystone/+bug/1790148 - Bug #1789351 (keystonemiddleware:Undecided) opened by yatin https://bugs.launchpad.net/keystonemiddleware/+bug/1789351 Bugs fixed (1) - Bug #1787874 (keystone:Medium) fixed by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1787874 ## Milestone Outlook We have a lot of work to do to shape the release between now and milestone 1, which will be October 26th. Otherwise we'll be meeting in Denver in a couple weeks. https://releases.openstack.org/stein/schedule.html ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From melwittt at gmail.com Fri Aug 31 19:52:22 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 31 Aug 2018 12:52:22 -0700 Subject: [openstack-dev] Nominating Chris Dent for placement-core In-Reply-To: References: Message-ID: <55b42ee2-d5fb-6186-7bf5-3a279bab3680@gmail.com> On Fri, 31 Aug 2018 10:45:14 -0500, Eric Fried wrote: > The openstack/placement project [1] and its core team [2] have been > established in gerrit. > > I hereby nominate Chris Dent for membership in the placement-core team. > He has been instrumental in the design, implementation, and stewardship > of the placement API since its inception and has shown clear and > consistent leadership. > > As we are effectively bootstrapping placement-core at this time, it > would seem appropriate to consider +1/-1 responses from heavy placement > contributors as well as existing cores (currently nova-core). > > [1]https://review.openstack.org/#/admin/projects/openstack/placement > [2]https://review.openstack.org/#/admin/groups/1936,members +1 From jean-philippe at evrard.me Fri Aug 31 20:35:16 2018 From: jean-philippe at evrard.me (jean-philippe at evrard.me) Date: Fri, 31 Aug 2018 22:35:16 +0200 Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core In-Reply-To: Message-ID: <440a-5b89a680-5-4bff4680@181276953> On Thursday, August 30, 2018 19:40 CEST, Andy McCrae wrote: > Now that Rocky is all but ready it seems like a good time! Since changing > roles I've not been able to keep up enough focus on reviews and other > obligations - so I think it's time to step aside as a core reviewer. > > I want to say thanks to everybody in the community, I'm really proud to see > the work we've done and how the OSA team has grown. I've learned a tonne > from all of you - it's definitely been a great experience. 
Andy, You've been there for the reshaping of OSA (splitting of repos, change of testing!), and always there when we needed you. I'd like to thank you for your work. I wish you all the best for your new role, and hope our paths will cross again soon! Best regards, Jean-Philippe Evrard (evrardjp) From skaplons at redhat.com Fri Aug 31 21:46:21 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 31 Aug 2018 23:46:21 +0200 Subject: [openstack-dev] [octavia] Proposing Carlos Goncalves (cgoncalves) as an Octavia core reviewer In-Reply-To: References: Message-ID: <0C9407CE-46E3-4DE5-8447-78612D7C66AE@redhat.com> Congratulations, Carlos :) > Message written by Carlos Goncalves on 31.08.2018 at 20:36: > > Thanks for the trust and opportunity, folks! > > On Fri, Aug 31, 2018 at 7:59 PM, Michael Johnson wrote: > With a unanimous vote I would like to welcome Carlos to the Octavia Core team! > > Michael > > > On Fri, Aug 31, 2018 at 10:42 AM Adam Harwell wrote: > > > > +1 for sure! > > > > On Sat, Sep 1, 2018, 01:41 Carlos Goncalves wrote: > >> > >> Ha! Gracias for the kind words, Miguel! :-) > >> > >> On Fri, Aug 31, 2018 at 5:55 PM, Miguel Lavalle wrote: > >>> > >>> Well, I don't vote here but I still want to express my +1. I knew this was going to happen sooner rather than later. > >>> > >>> On Thu, Aug 30, 2018 at 10:24 PM, Michael Johnson wrote: > >>>> > >>>> Hello Octavia community, > >>>> > >>>> I would like to propose Carlos Goncalves as a core reviewer on the > >>>> Octavia project. > >>>> > >>>> Carlos has provided numerous enhancements to the Octavia project, > >>>> including setting up the grenade gate for Octavia upgrade testing. > >>>> > >>>> Over the last few releases he has also been providing quality reviews, > >>>> in line with the other core reviewers [1]. I feel that Carlos would > >>>> make an excellent addition to the Octavia core reviewer team. 
> >>>> > >>>> Existing Octavia core reviewers, please reply to this email with your > >>>> support or concerns with adding Carlos to the core team. > >>>> > >>>> Michael > >>>> > >>>> [1] http://stackalytics.com/report/contribution/octavia-group/90 > >>>> > >>>> __________________________________________________________________________ > >>>> OpenStack Development Mailing List (not for usage questions) > >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >>> > >>> > >>> __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat