From anlin.kong at gmail.com Mon Jan 1 10:08:30 2018
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Mon, 1 Jan 2018 23:08:30 +1300
Subject: [openstack-dev] [all] propose to upgrade python kubernetes (the k8s python client) to 4.0.0 which breaks oslo.service

I edited the topic just for attention.

However, the new kubernetes client version breaks the services that are
using oslo.service, which relies on the eventlet library. Some error logs
below:

(Pdb) n
> /vagrant/qinling/qinling/orchestrator/kubernetes/manager.py(49)__init__()
-> client = api_client.ApiClient(configuration=config)
(Pdb) n
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 325, in _handle_workers
    while thread._state == RUN or (pool._cache and thread._state != TERMINATE):
AttributeError: '_MainThread' object has no attribute '_state'

I did a google search and found this:
https://github.com/eventlet/eventlet/issues/147

multiprocessing.pool was introduced in 4.0.0 (the threading lib was used
before), so I assume this is a backward incompatible change.

Any suggestion?

Cheers,
Lingxian Kong (Larry)

On Wed, Dec 13, 2017 at 10:41 PM, Eyal Leshem wrote:

> Hi Lingxian,
>
> It should be - under the assumption that only the v1 models are used
> (and not v1_alpha or v1_beta).
> See: https://kubernetes.io/docs/reference/api-overview/
>
> thanks,
> leyal
>
> On 13 December 2017 at 11:16, Lingxian Kong wrote:
>
>> hi, leyal,
>>
>> I suppose the upgrade is backward compatible, right?
>>
>> Cheers,
>> Lingxian Kong (Larry)
>>
>> On Wed, Dec 13, 2017 at 8:51 PM, Eyal Leshem wrote:
>>
>>> Hi all,
>>>
>>> In order to use a kubernetes client that supports network policies,
>>> we plan to upgrade the python kubernetes package from 1.0.0 to 4.0.0.
>>>
>>> any objections?
>>>
>>> thanks,
>>> leyal
>>>
>>> clarification:
>>> The proposed change is for the kubernetes-python-client - which is
>>> called just "kubernetes" on PyPI
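
A minimal reproducer for the conflict described above, with the kubernetes
client taken out of the picture; this sketch assumes Python 2.7 and an
eventlet release affected by eventlet issue #147:

    # Sketch only: trigger the same AttributeError without the k8s client.
    import eventlet
    eventlet.monkey_patch()  # effectively what oslo.service-based services do

    # multiprocessing.pool starts a _handle_workers maintenance thread that
    # pokes at threading internals (thread._state); eventlet's green
    # _MainThread doesn't provide those, so that thread dies with the
    # AttributeError shown in the log above.
    from multiprocessing.pool import ThreadPool

    pool = ThreadPool(processes=2)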
From ey.leshem at gmail.com Mon Jan 1 12:56:22 2018
From: ey.leshem at gmail.com (Eyal Leshem)
Date: Mon, 1 Jan 2018 14:56:22 +0200
Subject: [openstack-dev] [all] propose to upgrade python kubernetes (the k8s python client) to 4.0.0 which breaks oslo.service

Hi,

According to https://github.com/eventlet/eventlet/issues/147 - it looks
like eventlet has an issue with "multiprocessing.pool". The ThreadPool is
used in code that is auto-generated by swagger.

A possible workaround is to monkey-patch the client library and replace
the pool with a greenpool.

If someone has a better workaround, please share it with us :)

btw, I don't think this should be treated as a compatibility issue in the
python client, as it's an eventlet issue..

Thanks,
leyal

On 1 January 2018 at 12:08, Lingxian Kong wrote:

> I edited the topic just for attention.
>
> However, the new kubernetes client version breaks the services that are
> using oslo.service, which relies on the eventlet library.
> [...]
> I assume this is a backward incompatible change.
>
> Any suggestion?
>
> Cheers,
> Lingxian Kong (Larry)
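
A rough sketch of the monkey-patch workaround leyal describes, assuming the
patch runs before the kubernetes client is imported anywhere. The stand-in
below is illustrative: it only covers the constructor and apply_async(),
which is what the swagger-generated ApiClient touches, and it returns a
GreenThread rather than a real AsyncResult, so callers needing AsyncResult
semantics would need more shimming:

    # Sketch only: swap the generated client's ThreadPool for green threads.
    import eventlet
    eventlet.monkey_patch()  # oslo.service already does this for services

    import multiprocessing.pool

    class GreenThreadPool(object):
        """Minimal, illustrative stand-in for multiprocessing.pool.ThreadPool."""

        def __init__(self, processes=None):
            self._pool = eventlet.GreenPool(processes or 1000)

        def apply_async(self, func, args=(), kwds=None):
            # Spawns a green thread instead of an OS thread.
            return self._pool.spawn(func, *args, **(kwds or {}))

    # Must happen before "from kubernetes import client" runs, so the
    # generated ApiClient binds to the green version.
    multiprocessing.pool.ThreadPool = GreenThreadPool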
From emilien at redhat.com Tue Jan 2 02:33:29 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 1 Jan 2018 18:33:29 -0800
Subject: [openstack-dev] [tripleo] CI promotion blockers

We've got promotion today, thanks Wes & Sagi for your help.

On Sun, Dec 31, 2017 at 9:06 AM, Emilien Macchi wrote:
> Here's an update on what we did the last days after merging the
> blockers mentioned in the previous email:
>
> - Ignore a failing test in tempest (workaround)
>   https://review.rdoproject.org/r/#/c/11118/ until
>   https://review.openstack.org/#/c/526647/ is merged. It allowed RDO
>   repos to be consistent again, so we could have the latest patches in
>   TripleO, tested by promotion jobs.
> - scenario001 is timing out a lot; we moved tacker/congress to
>   scenario007, and also removed MongoDB, which was running for nothing.
> - tripleo-ci-centos-7-containers-multinode was timing out a lot; we
>   removed cinder and some other services already covered by scenarios,
>   so tripleo-ci-centos-7-containers-multinode is like ovb, testing the
>   minimum set of services (which is why we created this job).
> - fixing an ipv6 issue in puppet-tripleo:
>   https://review.openstack.org/#/c/530219/
>
> All of the above is merged.
> Now the remaining blocker is to update the RDO CI layout for promotion jobs:
> see https://review.rdoproject.org/r/#/c/11119/ and
> https://review.rdoproject.org/r/#/c/11120/
> Once they merge and the jobs run, we should get a promotion.
>
> Let me know any question,
>
> On Wed, Dec 27, 2017 at 8:48 AM, Emilien Macchi wrote:
>> Just a heads-up about what we've done the last days to make progress
>> and hopefully get a promotion this week:
>>
>> - Disabling voting on scenario001, 002 and 003. They time out too much;
>>   we haven't figured out why yet but we'll look at it this week and
>>   next week. Hopefully we can re-enable voting today or so.
>> - Kolla added Sensu support and it broke our container builds.
>>   It should be fixed by https://review.openstack.org/#/c/529890/ and
>>   https://review.openstack.org/530232
>> - Keystone removed _member_ role management, so we stopped using it
>>   (only Member is enough): https://review.openstack.org/#/c/529849/
>> - Fixup MTU configuration for CI envs: https://review.openstack.org/#/c/527249
>> - Reduce memory for undercloud image convert:
>>   https://review.openstack.org/#/c/530137/
>> - Remove policy.json default rules from Heat in THT:
>>   https://review.openstack.org/#/c/530225
>>
>> That's pretty much all. Due to the lack of reviewers during the
>> Christmas time, we had to land some patches ourselves. If there is any
>> problem with one of them, please let us know. We're trying to keep CI
>> in good shape this week and it's a bit of a challenge ;-)
>> --
>> Emilien Macchi

--
Emilien Macchi

From glongwave at gmail.com Tue Jan 2 02:34:19 2018
From: glongwave at gmail.com (ChangBo Guo)
Date: Tue, 2 Jan 2018 10:34:19 +0800
Subject: [openstack-dev] [oslo][all] Final release for Oslo libraries (Jan 15 - Jan 19)

Hi ALL,

Happy New Year!

We are in the week of R-5. According to the Queens schedule [1], there are
only two weeks left before we issue the final releases for the Oslo
libraries. We plan to do that on Jan 15, so please wrap up related work in
Oslo. Oslo team, please focus on the patches which are still active.

[1] https://releases.openstack.org/queens/schedule.html

--
ChangBo Guo(gcb)
Community Director @EasyStack

From glongwave at gmail.com Tue Jan 2 03:53:02 2018
From: glongwave at gmail.com (ChangBo Guo)
Date: Tue, 2 Jan 2018 11:53:02 +0800
Subject: [openstack-dev] [oslo] Oslo team updates

In the last two cycles some people's situations changed and they can't
focus on Oslo code review anymore, so I propose some changes in the Oslo
team. We will remove the following people - thanks for their past hard
work to make Oslo well, and welcome them back if they want to join the
team again. Please +1/-1 the change.

Generalist Code Reviewers:
  Brant Knudson

Specialist API Maintainers:
  oslo-cache-core: Brant Knudson, David Stanek
  oslo-db-core: Viktor Serhieiev
  oslo-messaging-core: Dmitriy Ukhlov, Oleksii Zamiatin, Viktor Serhieiev
  oslo-policy-core: Brant Knudson, David Stanek, guang-yee
  oslo-service-core: Marian Horban

We welcome anyone to join the team or contribute to Oslo. The Oslo program
brings together generalist code reviewers and specialist API maintainers.
They share a common interest in tackling copy-and-paste technical debt
across the OpenStack project. For more information please refer to the
wiki [1].

[1] https://wiki.openstack.org/wiki/Oslo

--
ChangBo Guo(gcb)
Community Director @EasyStack

From christophe.sauthier at objectif-libre.com Tue Jan 2 11:46:45 2018
From: christophe.sauthier at objectif-libre.com (Christophe Sauthier)
Date: Tue, 02 Jan 2018 12:46:45 +0100
Subject: [openstack-dev] Proposing Luka Peschke (peschk_l) as core for cloudkitty

Hello developers mailing list folks,

I'd like to propose that we add Luka Peschke (peschk_l) as an OpenStack
cloudkitty core reviewer.

He has been a member of our community for years, contributing very
seriously to cloudkitty.
He also provided many reviews on the project, as you can see in his
activity logs: http://stackalytics.com/report/contribution/cloudkitty/60

His willingness to help whenever it is needed has been really appreciated!

Current cloudkitty cores, please respond with +1 or explain your opinion
if voting against... If there are no objections in the next 5 days I'll
add him.

All the best,

Christophe

----
Christophe Sauthier
CEO
Objectif Libre : Au service de votre Cloud
+33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com
www.objectif-libre.com | @objectiflibre | www.linkedin.com/company/objectif-libre
Recevez la Pause Cloud Et DevOps : olib.re/abo-pause

From arxcruz at redhat.com Tue Jan 2 13:23:42 2018
From: arxcruz at redhat.com (Arx Cruz)
Date: Tue, 2 Jan 2018 14:23:42 +0100
Subject: [openstack-dev] [tripleo] Tripleo CI Community meeting tomorrow

Hello

We are going to have a TripleO CI community meeting tomorrow, 01/03/2018,
at 2 pm UTC. The meeting is going to happen on BlueJeans [1] and also on
IRC in the #tripleo channel.

After that, we will hold office hours starting at 4 pm UTC in case someone
from the community has any questions related to CI.

Hope to see you there.

1 - https://bluejeans.com/7071866728

Kind regards,
Arx Cruz

From miguel at mlavalle.com Tue Jan 2 14:00:15 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Tue, 2 Jan 2018 08:00:15 -0600
Subject: [openstack-dev] Fwd: bug deputy report

---------- Forwarded message ----------
From: Takashi Yamamoto
Date: Tue, Jan 2, 2018 at 7:45 AM
Subject: bug deputy report
To: Miguel Lavalle

Hi,

I was the bug deputy but I don't think I can attend today's meeting.
(I'm not feeling well today.)

The week was quiet. I guess it's the most quiet week in a year.
There was nothing critical or urgent.

bug 1740068 lost composite primary key in firewall_group_port_associations_v2
  A fix is available; it's unclear why migration tests couldn't catch this.

bug 1740198 DHCP was enabled successfully when IP pool is exhausted
  Seems like an old issue. Miguel, can you take a look?

bug 1740450 Restarting l3 agent results in lost of centralized fip in snat ns
  The report seems valid and even has a suggested fix. Asked bhaley to
  triage.

From doug at doughellmann.com Tue Jan 2 14:31:26 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 02 Jan 2018 09:31:26 -0500
Subject: [openstack-dev] [oslo] Oslo team updates
Message-ID: <1514903428-sup-9788@lrrr.local>

Excerpts from ChangBo Guo's message of 2018-01-02 11:53:02 +0800:
> In the last two cycles some people's situations changed and they can't
> focus on Oslo code review anymore, so I propose some changes in the
> Oslo team.
> [...]
> Please +1/-1 the change.
+1 -- it's sad to see the team shrink a bit, but it's good to keep the
list accurate based on when people can contribute.

From davanum at gmail.com Tue Jan 2 14:39:45 2018
From: davanum at gmail.com (Davanum Srinivas)
Date: Tue, 2 Jan 2018 09:39:45 -0500
Subject: [openstack-dev] [oslo] Oslo team updates
In-Reply-To: <1514903428-sup-9788@lrrr.local>
References: <1514903428-sup-9788@lrrr.local>

+1 from me as well. Thanks everyone!

On Tue, Jan 2, 2018 at 9:31 AM, Doug Hellmann wrote:
> Excerpts from ChangBo Guo's message of 2018-01-02 11:53:02 +0800:
>> [...]
>
> +1 -- it's sad to see the team shrink a bit, but it's good to keep the
> list accurate based on when people can contribute.

--
Davanum Srinivas :: https://twitter.com/dims

From kgiusti at gmail.com Tue Jan 2 15:05:35 2018
From: kgiusti at gmail.com (Ken Giusti)
Date: Tue, 2 Jan 2018 10:05:35 -0500
Subject: [openstack-dev] [oslo] Oslo team updates
References: <1514903428-sup-9788@lrrr.local>

+1, and a big thank you for all your contributions

On Tue, Jan 2, 2018 at 9:39 AM, Davanum Srinivas wrote:
> +1 from me as well. Thanks everyone!
> [...]
--
Ken Giusti (kgiusti at gmail.com)

From joao-sa-silva at alticelabs.com Tue Jan 2 15:20:53 2018
From: joao-sa-silva at alticelabs.com (João Paulo Sá da Silva)
Date: Tue, 2 Jan 2018 15:20:53 +0000
Subject: [openstack-dev] [Zun] Containers in privileged mode

Hello!

Is it possible to create containers in privileged mode or to add
capabilities such as NET_ADMIN?

Kind regards,
João

From flavio at redhat.com Tue Jan 2 15:23:58 2018
From: flavio at redhat.com (Flavio Percoco)
Date: Tue, 2 Jan 2018 16:23:58 +0100
Subject: [openstack-dev] [oslo] Oslo team updates
Message-ID: <20180102152356.bk2756yoqp4fwptr@redhat.com>

On 02/01/18 11:53 +0800, ChangBo Guo wrote:
> In the last two cycles some people's situations changed and they can't
> focus on Oslo code review anymore, so I propose some changes in the
> Oslo team.
> [...]

+1

Thanks everyone

--
@flaper87
Flavio Percoco
From doug at doughellmann.com Tue Jan 2 15:35:57 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 02 Jan 2018 10:35:57 -0500
Subject: [openstack-dev] [castellan] Removing Keystoneauth Dependency in Castellan Discussion
References: <1217D006B29F184ABD6ECC2CDBF528483A0DBCDD@MOSTLS1MSGUSREC.ITServices.sbc.com> <1512424722-sup-8072@lrrr.local> <1217D006B29F184ABD6ECC2CDBF528483A0DBE1C@MOSTLS1MSGUSREC.ITServices.sbc.com> <1512492768-sup-9368@lrrr.local> <1512597561-sup-7196@lrrr.local> <76392d8e-1288-e54e-cd96-df074924e3cf@oracle.com> <1513109658-sup-1071@lrrr.local> <1513120498-sup-1896@lrrr.local>
Message-ID: <1514907188-sup-9906@lrrr.local>

Excerpts from Gage Hugo's message of 2017-12-19 11:14:18 -0600:
> On Tue, Dec 12, 2017 at 5:34 PM, Doug Hellmann wrote:
>
>> Excerpts from Dave McCowan (dmccowan)'s message of 2017-12-12 21:36:51 +0000:
>>>
>>> On 12/12/17, 3:15 PM, "Doug Hellmann" wrote:
>>>
>>>> Excerpts from Dave McCowan (dmccowan)'s message of 2017-12-12 19:56:49 +0000:
>>>>>
>>>>> On 12/12/17, 10:38 AM, "Doug Hellmann" wrote:
>>>>>
>>>>>>> On Dec 12, 2017, at 9:42 AM, Paul Bourke wrote:
>>>>>>>
>>>>>>> From my understanding it would be a cleanup operation - which, to be
>>>>>>> honest, would be very much welcomed. I recently did a little work with
>>>>>>> Castellan to integrate it with Murano and found the auth code to be very
>>>>>>> messy, and flat out broken in some cases. If it's possible to let the
>>>>>>> barbican client take care of this that sounds good to me.
>>>>>>>
>>>>>>>> Which mode is used the most in the services that consume castellan
>>>>>>>> today?
>>>>>>>
>>>>>>> Afaik Barbican is the only backend that currently exists in Castellan
>>>>>>> [0]. Looking again it seems some support has been added for vault, which
>>>>>>> is great, but I reckon Barbican would still be the primary use.
>>>>>>>
>>>>>>> I haven't been hugely active in Castellan but if the team would like
>>>>>>> some more input on this or reviews please do ping me, I'd be glad to
>>>>>>> help.
>>>>>>
>>>>>> What I mean is, in the services consuming Castellan, how do they expect
>>>>>> it to authenticate to Barbican? As the current user or as a hard-coded
>>>>>> fixed user controlled by the deployer? I would think most services would
>>>>>> need to connect as the "current" user talking to them so they can access
>>>>>> that user's secrets from Barbican. Removing the keystoneauth stuff from
>>>>>> the driver would therefore break all of those applications.
>>>>>>
>>>>>> Doug
>>>>>
>>>>> We're a mix right now. Nova and Cinder pass through the user's token to
>>>>> retrieve the user's key for encrypted volumes. Octavia uses its service
>>>>> account to retrieve certificates for load balancing TLS connections.
>>>>> Users must grant Octavia read permissions in advance.
>>>>
>>>> OK, so it sounds like we do need to continue to support both
>>>> approaches to authentication.
>>>>
>>>>> Keystone is currently the only authentication option for Barbican. I
>>>>> believe the proposal to decouple keystoneauth is advance work for adding
>>>>> new auth methods and backends as future work.
>>>>> Vault and Custodia are two such backends in progress. They don't
>>>>> support keystoneauth and likely won't, so we'll need alternatives.
>>>>
>>>> Each driver manages its own authentication, right? Why do we need to
>>>> remove the keystoneauth stuff in the barbican driver in order to enable
>>>> other drivers?
>>>
>>> I would use the word "decouple", with the intent to give the option of
>>> using Castellan without having a dependency on keystoneauth. But I don't
>>> want to speak for the original posters who used the word "remove" in
>>> case they have other ideas.
>>>
>>> Until recently Barbican was the only secret store and Keystone was the
>>> only authentication service, so we didn't have to sort through the
>>> modularity.
>>
>> I'm sorry that I missed the conversation about this in Denver. It
>> seems like everyone else understands what's being proposed in a way
>> very different than I do, so I apologize for continuing to just ask
>> the same questions. I'll try rephrasing, but it would be *very*
>> helpful if someone would summarize that discussion and lay out the
>> plan in more detail than "we want to remove the use of keystoneauth."
>> If we can't do it by email, then maybe via an Oslo spec.
>>
>> The barbican driver has 2 modes for using keystoneauth. One is to
>> use the execution context to authenticate using the same token that
>> the current user passed in to the service calling into castellan.
>> The other is to use credentials from the configuration file.
>>
>> Those options seem to be pretty well abstracted in the API, so that
>> the application using castellan can just pass the right sort of
>> context and get the right behavior from the driver, without having
>> to know what the driver is. We currently only have a barbican driver,
>> and that driver uses keystoneauth directly because that is the only
>> way to control which authentication mode is used. Other drivers
>> would presumably use some means other than keystoneauth to authenticate
>> to the backends they talk to, with the difference in behavior (acting
>> like the current user or acting like a service user) triggered by
>> the context passed in.
>>
>> If we don't use keystoneauth inside the castellan driver before
>> creating the barbican client, how will we support both modes in the
>> castellan API without exposing to the application which secret store
>> driver is being used? We can't, for example, require that an
>> application configured to use the barbican driver pass more (or
>> different) information to castellan than it would pass if castellan
>> was configured to use custodia, because that would break the
>> abstraction.
>
> I wonder if we could make keystoneauth a soft requirement instead for
> those using the Barbican driver, as a way to de-couple it? Then if one
> were to use a different backend (Vault/Custodia/etc.) it wouldn't be
> needed.
>
> Not sure how having different backends (Barbican/Vault/Custodia) will
> work out in terms of breaking abstraction.

I hope it doesn't break anything. The point of Castellan was to provide
that abstraction, right? :-)

We could work on making keystoneauth a driver-specific requirement for
castellan during Rocky. Effectively it's not going to make much
difference, because currently everything that uses castellan also relies
on talking to keystone for other reasons, so keystoneauth is still going
to need to be installed.
But we can adjust the requirements using the "extras" feature of
setuptools to clarify which drivers use each dependency.

>> Are there more extensive changes planned for the public API of
>> castellan, to use different mechanisms to get a driver handle for
>> the different modes? Given our backwards-compatibility constraints,
>> we can't change the API of the library in a breaking way without
>> also updating the consuming apps, so we would have to *add* an API
>> and deprecate the old one. I haven't seen anyone talk about a new
>> API, though.
>>
>> Are we planning to drop support for one access mode, and change the
>> way castellan works more fundamentally? This possibility raises the
>> same questions as changing the API does. Based on the compatibility
>> constraints for an Oslo library, we need to continue to support
>> both modes until we are sure they are not being used by any of
>> our applications.
>
> I'm not sure about these, maybe someone on the Castellan team could
> chime in here.

Doug
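
A sketch of the setuptools "extras" idea Doug mentions, with illustrative
dependency groupings; castellan's real split, if adopted, would live in
its setup.cfg via pbr, and none of the package lists below are confirmed:

    # Hypothetical plain-setuptools spelling of driver-specific extras.
    from setuptools import find_packages, setup

    setup(
        name='castellan',
        packages=find_packages(),
        install_requires=[
            'oslo.config',  # illustrative core dependencies
            'oslo.utils',
        ],
        extras_require={
            # installed via: pip install castellan[barbican]
            'barbican': ['python-barbicanclient', 'keystoneauth1'],
            # hypothetical extras for other secret stores
            'vault': ['requests'],
        },
    )

A deployment using the barbican driver would then install
castellan[barbican], while a Vault-only consumer could leave keystoneauth1
out entirely.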
From openstack at nemebean.com Tue Jan 2 15:53:48 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Tue, 2 Jan 2018 09:53:48 -0600
Subject: [openstack-dev] [oslo] Oslo team updates

A regretful +1.

On 01/01/2018 09:53 PM, ChangBo Guo wrote:
> In the last two cycles some people's situations changed and they can't
> focus on Oslo code review anymore, so I propose some changes in the
> Oslo team.
> [...]

From haleyb.dev at gmail.com Tue Jan 2 16:02:08 2018
From: haleyb.dev at gmail.com (Brian Haley)
Date: Tue, 2 Jan 2018 11:02:08 -0500
Subject: [openstack-dev] [neutron] performance issue between virtual networks
In-Reply-To: <1514381990.3229.13.camel@t-online.de>

On 12/27/2017 08:39 AM, Kim-Norman Sahm wrote:
> Hi,
>
> i've detected a performance issue by accessing a floating ip in a
> different openstack network (same tenant).
>
> example:
> i have one tenant with two internal networks.
> each network has its own vrouter which is connected to the extnet.
> the physical network infrastructure is 10Gbit/s.
>
>          networkA
>    VM1 ------|               extnet
>              |----|vrouter1|----|
>    VM2 ------|                  |
>                                 |---ext
>          networkB               |
>    VM3 ------|                  |
>              |----|vrouter2|----|
>    VM4 ------|
>
> VM1 -> VM2 ~8,6Gbit/s
> VM3 -> VM4 ~8,6GBit/s
> VM1 -> vrouter1 ~8.6GBit/s
> VM4 -> vrouter2 ~8,6GBit/s
> vrouter1 -> vrouter2 ~8,6Gbit/s
> VM1 -> VM4 ~2,5GBit/s
> VM1 -> vrouter2 ~2,5Gbit/s

I could only guess that vrouter1 and vrouter2 are on different nodes, so
you're losing some performance going from virtual to physical and back
(eg GSO).

Have you tried this for reference:

VM1 -> system on extnet
VM4 -> system on extnet

Also, are you sure when packets from VM1 -> VM4 leave the vrouter1
interface they are still at the higher MTU?

-Brian

> detected with iperf3
> it's an openstack newton environment with openvswitch 2.6.1
> VXLAN mtu is 8950 and 9000 for physical interfaces
>
> does anybody have an idea what could be the cause of the performance
> issue?
>
> Best regards
> Kim

From miguel at mlavalle.com Tue Jan 2 16:06:32 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Tue, 2 Jan 2018 10:06:32 -0600
Subject: [openstack-dev] Tempest plugin for Neutron Stadium projects

Hi Neutron community,

During the last Neutron drivers meeting, we discussed whether all the
Neutron Stadium projects should have their Tempest code in
https://github.com/openstack/neutron-tempest-plugin/. It was decided to
allow Stadium projects to get their tests into the consolidated plugin,
but it will not be a requirement. The assumption is that many projects
might be stretched in resources and we don't want to create more work for
them.

Best regards

Miguel

From jpichon at redhat.com Tue Jan 2 16:08:46 2018
From: jpichon at redhat.com (Julie Pichon)
Date: Tue, 2 Jan 2018 16:08:46 +0000
Subject: [openstack-dev] [tripleo] CI promotion blockers

Hi!

On 27 December 2017 at 16:48, Emilien Macchi wrote:
> - Keystone removed _member_ role management, so we stopped using it
>   (only Member is enough): https://review.openstack.org/#/c/529849/

There have been so many issues with the default member role and Horizon
over the years that this one got my attention. I can see that
puppet-horizon still expects '_member_' for role management [1].
However, trying to understand the Keystone patch linked to in the
commit, it looks like there's total freedom in which role name to use,
so we can't just change the default in puppet-horizon to use 'Member',
as other consumers may expect and settle on '_member_' in their
environment. (Right?)

In this case, the proper way to fix this for TripleO deployments may
be to make the change in instack-undercloud (I presume in [2]) so that
the default role is explicitly set to 'Member' for us? Does that sound
like the correct approach to get to a working Horizon?
Julie

[1] https://github.com/openstack/puppet-horizon/blob/master/manifests/init.pp#L458
[2] https://github.com/openstack/instack-undercloud/blob/master/elements/puppet-stack-config/puppet-stack-config.yaml.template#L622

From aschultz at redhat.com Tue Jan 2 16:30:36 2018
From: aschultz at redhat.com (Alex Schultz)
Date: Tue, 2 Jan 2018 09:30:36 -0700
Subject: [openstack-dev] [tripleo] CI promotion blockers

On Tue, Jan 2, 2018 at 9:08 AM, Julie Pichon wrote:
> [...]
> In this case, the proper way to fix this for TripleO deployments may
> be to make the change in instack-undercloud (I presume in [2]) so that
> the default role is explicitly set to 'Member' for us? Does that sound
> like the correct approach to get to a working Horizon?

We probably should at least change _member_ to Member in puppet-horizon.
That fixes both projects for the default case.

Thanks,
-Alex

From jpichon at redhat.com Tue Jan 2 16:38:49 2018
From: jpichon at redhat.com (Julie Pichon)
Date: Tue, 2 Jan 2018 16:38:49 +0000
Subject: [openstack-dev] [tripleo] CI promotion blockers

On 2 January 2018 at 16:30, Alex Schultz wrote:
> We probably should at least change _member_ to Member in
> puppet-horizon. That fixes both projects for the default case.

Oh, I thought there was no longer a default and that TripleO was creating
the 'Member' role by itself? Fixing it directly in puppet-horizon sounds
ideal in general, if changing the default value isn't expected to cause
other issues.

Thanks,

Julie

From mark at stackhpc.com Tue Jan 2 17:34:23 2018
From: mark at stackhpc.com (Mark Goddard)
Date: Tue, 2 Jan 2018 17:34:23 +0000
Subject: [openstack-dev] [kayobe][kolla] Kayobe IRC channel

Hi,

I've registered the #openstack-kayobe channel for developer and operator
discussion of kayobe [1]. Jump on if you're using kayobe or interested in
doing so.

Mark

[1] https://github.com/stackhpc/kayobe

From mariusc at redhat.com Tue Jan 2 17:56:30 2018
From: mariusc at redhat.com (Marius Cornea)
Date: Tue, 2 Jan 2018 18:56:30 +0100
Subject: [openstack-dev] [tripleo] tripleo-upgrade pike branch

Hi everyone and Happy New Year!

As the migration of the tripleo-upgrade repo to the openstack namespace
is now complete, I think it's time to create a Pike branch to capture the
current state, so we can use it for Pike testing and keep the master
branch for Queens changes. The update/upgrade steps change between
versions, and the aim of branching the repo is to keep the update/upgrade
steps clean per branch and avoid using conditionals based on release.
Also, tripleo-upgrade should be compatible with the different tools used
for deployment (tripleo-quickstart, infrared, manual deployments), which
use different vars for the version release, so in case of using
conditionals we would need extra steps to normalize these variables.

I wanted to bring this topic up for discussion to see if branching is the
proper thing to do here.

Thanks,
Marius

From hongbin034 at gmail.com Tue Jan 2 18:38:14 2018
From: hongbin034 at gmail.com (Hongbin Lu)
Date: Tue, 2 Jan 2018 13:38:14 -0500
Subject: [openstack-dev] [Zun] Containers in privileged mode

Hi Joao,

Right now, it is impossible to create containers with escalated
privileges, such as setting privileged mode or adding additional caps.
This is intentional, for security reasons. Basically, what Zun currently
provides is "serverless" containers, which means Zun is not using VMs to
isolate containers (people who want isolation as strong as VMs can choose
a secure container runtime such as Clear Containers). Therefore, it is
insecure to give users control of any kind of privilege escalation.
However, if you want this feature, I would love to learn more about the
use cases.

Best regards,
Hongbin

On Tue, Jan 2, 2018 at 10:20 AM, João Paulo Sá da Silva wrote:

> Hello!
>
> Is it possible to create containers in privileged mode or to add
> capabilities such as NET_ADMIN?
>
> Kind regards,
> João
From joao-sa-silva at alticelabs.com Tue Jan 2 19:06:34 2018
From: joao-sa-silva at alticelabs.com (João Paulo Sá da Silva)
Date: Tue, 2 Jan 2018 19:06:34 +0000
Subject: [openstack-dev] [Zun] Containers in privileged mode

Thanks for your answer, Hongbin, it is very appreciated.

The use case is to run Virtualized Network Functions in containers
instead of virtual machines. The rationale for using containers instead
of VMs is better VNF density on resource-constrained hosts. The goal is
to have several VNFs (DHCP, FW, etc.) running on a severely
resource-constrained OpenStack compute node. But without the NET_ADMIN
cap I can't even start dnsmasq.

Is it possible to use Clear Containers with zun/openstack?

From checking gerrit it seems that this point was already addressed and
dropped? Regarding the security concerns I disagree: if users choose to
allow such a situation, they should be allowed to. It is the user's
responsibility to recognize the dangers and act accordingly.

In Neutron you can go as far as fully disabling port security; this was
implemented, again, with VNFs in mind.

Kind regards,
João

> Hi Joao,
>
> Right now, it is impossible to create containers with escalated
> privileges, such as setting privileged mode or adding additional caps.
> [...]
> However, if you want this feature, I would love to learn more about the
> use cases.
>
> Best regards,
> Hongbin

From mriedemos at gmail.com Tue Jan 2 19:18:29 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Tue, 2 Jan 2018 13:18:29 -0600
Subject: [openstack-dev] [nova] seeking your kind support and assistance regarding securing hypervisor and virtual machines against security attacks or threats

On 12/24/2017 8:45 AM, Darshan Tank wrote:
> How to secure hypervisor and virtual machine against security threats in
> OpenStack? What are the methods, techniques or algorithms the OpenStack
> community/developers follow to make OpenStack a secure cloud environment?

Have you started by reading the Security Guide [1]?

[1] https://docs.openstack.org/security-guide/

--
Thanks,
Matt

From hongbin034 at gmail.com Tue Jan 2 21:43:27 2018
From: hongbin034 at gmail.com (Hongbin Lu)
Date: Tue, 2 Jan 2018 16:43:27 -0500
Subject: [openstack-dev] [Zun] Containers in privileged mode

Please find my reply inline.
Best regards,
Hongbin

On Tue, Jan 2, 2018 at 2:06 PM, João Paulo Sá da Silva wrote:

> Thanks for your answer, Hongbin, it is very appreciated.
>
> The use case is to run Virtualized Network Functions in containers
> instead of virtual machines. [...] But without the NET_ADMIN cap I
> can't even start dnsmasq.

Makes sense. Would you help write a blueprint for this feature:
https://blueprints.launchpad.net/zun ? We use blueprints to track all
requested features.

> Is it possible to use Clear Containers with zun/openstack?

Yes, it is possible. We are adding documentation about that:
https://review.openstack.org/#/c/527611/ .

> From checking gerrit it seems that this point was already addressed and
> dropped? Regarding the security concerns I disagree: if users choose to
> allow such a situation, they should be allowed to.
>
> In Neutron you can go as far as fully disabling port security; this was
> implemented, again, with VNFs in mind.

Makes sense as well. IMHO, we should disallow privilege escalation by
default, but I am open to introducing a configurable option to allow it.
I can see this is necessary for some use cases. Cloud administrators
should be reminded of the security implications of doing that.
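
To make the requested feature concrete, this is roughly what it maps to at
the container-engine level, sketched with the Docker SDK for Python; the
image and command are illustrative, and Zun does not expose these knobs
today:

    # Sketch only: grant a single capability instead of full privileged mode.
    import docker

    client = docker.from_env()

    container = client.containers.run(
        'alpine',                # illustrative image
        command='sleep 3600',
        cap_add=['NET_ADMIN'],   # the capability dnsmasq-style VNFs need
        detach=True,
    )
    # Full privileged mode would be privileged=True instead, with a much
    # larger attack surface than a capability whitelist.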
From lebre.adrien at free.fr Tue Jan 2 23:22:37 2018
From: lebre.adrien at free.fr (lebre.adrien at free.fr)
Date: Wed, 3 Jan 2018 00:22:37 +0100 (CET)
Subject: [openstack-dev] [FEMDC] Wed. 3 Jan - IRC Meeting Cancelled - Next meeting Wed. 17
In-Reply-To: <690841999.263326520.1514935166633.JavaMail.root@zimbra29-e5>
Message-ID: <837886476.263329540.1514935357427.JavaMail.root@zimbra29-e5>

Dear all,

Due to the Christmas/new year period, the meeting is cancelled.
The next meeting is scheduled for Wed, the 17th.

ad_ri3n_

From liujiong at gohighsec.com Wed Jan 3 01:43:30 2018
From: liujiong at gohighsec.com (Jiong Liu)
Date: Wed, 3 Jan 2018 09:43:30 +0800
Subject: [openstack-dev] Proposing Luka Peschke (peschk_l) as core for cloudkitty
Message-ID: <000601d38434$42a7b7d0$c7f72770$@gohighsec.com>

> Hello developers mailing list folks,
> I'd like to propose that we add Luka Peschke (peschk_l) as an OpenStack
> cloudkitty core reviewer.
> [...]
> Current cloudkitty cores, please respond with +1 or explain your
> opinion if voting against... If there are no objections in the next 5
> days I'll add him.

+1, welcome Luka!

From lbragstad at gmail.com Wed Jan 3 02:33:07 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Tue, 2 Jan 2018 20:33:07 -0600
Subject: [openstack-dev] [all] [tc] Policy in code gotcha
Message-ID: <705e4d2e-cdfd-b40b-d586-1a77decd0180@gmail.com>

Today in office hours someone reported a bug where keystone wasn't
producing policy files based on the overrides on disk and the defaults in
code [0]. After doing some digging, I found that keystone missed a step
while moving policy into code. I have a patch [1] up that follows an
approach from nova.

With all of the policy-in-code work happening this release, I wanted to
send a note in case folks were seeing this with their projects, or just
as a reminder to test for it if you're unsure whether your project will
be affected.

Let me know if you have any questions and I'll be happy to chip in if
your project is experiencing issues.

Lance

[0] https://bugs.launchpad.net/keystone/+bug/1740951
[1] https://review.openstack.org/#/c/530828/
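
For context, a sketch of the policy-in-code pattern Lance refers to,
loosely modelled on nova's approach; the rule below is illustrative, not
keystone's actual default:

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF

    # Defaults live in code rather than only in policy.json on disk.
    rules = [
        policy.RuleDefault(
            'identity:get_user',
            'rule:admin_required',
            description='Show user details.'),
    ]

    enforcer = policy.Enforcer(CONF)
    # This registration step is what lets the enforcer (and the
    # oslopolicy-* generator tools) merge the in-code defaults with any
    # overrides found in the policy file; skipping it produces the kind
    # of gotcha described above.
    enforcer.register_defaults(rules)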
From 270162781 at qq.com Wed Jan 3 03:42:39 2018
From: 270162781 at qq.com (270162781)
Date: Wed, 3 Jan 2018 11:42:39 +0800
Subject: [openstack-dev] [neutron] Why neutron-vpnaas support transport vpn mode?

HI, Neutron guys:

Currently, I found that the neutron vpn service (which is ipsec vpn)
supports transport mode, and the current data model is that a vpn service
must be associated with a Router instance. AFAIK, transport mode can not
pass through a Router with NAT and is usually used for P-to-P VPN, not
for the current site-to-site type.

Just a question about it: do we support it only because the
configuration/vpn backend supports it? What is the use case in an
OpenStack env?

Thanks,
Best Regards,

From emilien at redhat.com Wed Jan 3 04:05:36 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 2 Jan 2018 20:05:36 -0800
Subject: [openstack-dev] [tripleo] containers-multinode is broken, what's our plan

Mistral broke us with https://review.openstack.org/#/c/506185/ and we had
a promotion yesterday, so now our CI deploys Mistral with this patch. It
breaks some Mistral actions, including the ones needed by config-download
(in featureset010).

Steve has a fix: https://review.openstack.org/#/c/530781 but there is no
core review yet, so we decided to proceed this way:

1) Carry Steve's patch in Mistral distgit:
   https://review.rdoproject.org/r/#/c/11140/ - DONE
2) Remove featureset010 from promotion requirements - DONE
3) Once we have a promotion, we'll be able to land
   https://review.openstack.org/#/c/530783/ - IN PROGRESS
4) Once https://review.openstack.org/#/c/530783/ is landed, and the
   upstream patch is landed, revert
   https://review.rdoproject.org/r/#/c/11140/ (otherwise RDO will become
   inconsistent and fail to build on master)
5) Re-add featureset010 to promotion requirements (revert
   https://review.rdoproject.org/r/#/c/11142) so we'll catch the issue
   next time.

Thanks,
--
Emilien Macchi

From zhipengh512 at gmail.com Wed Jan 3 07:13:30 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Wed, 3 Jan 2018 15:13:30 +0800
Subject: [openstack-dev] [acceleration]Cyborg Team Weekly Meeting 2018.01.03

Happy New Year!

Let's resume our weekly team meeting today as usual, starting from
UTC 1500 at #openstack-cyborg. We are now in release countdown mode, so we
will mostly focus on Queens delivery development.

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

From chkumar246 at gmail.com Wed Jan 3 08:12:40 2018
From: chkumar246 at gmail.com (Chandan kumar)
Date: Wed, 3 Jan 2018 13:42:40 +0530
Subject: [openstack-dev] [qa][all] QA Office Hours on 04th Jan, 2018

Hello All,

A kind reminder that tomorrow at 9:00 UTC we'll start office hours for
the QA team in the #openstack-qa channel. Please join us with any
question/comment you may have related to the tempest plugin split
community goal, tempest, and other QA tools. We'll triage bugs for QA
projects from the past 7 days and then extend the time frame if there is
time left.

Thanks,
Chandan Kumar

From rico.lin.guanyu at gmail.com Wed Jan 3 10:02:13 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Wed, 3 Jan 2018 18:02:13 +0800
Subject: [openstack-dev] [heat]No meeting this week

Hi all

Quite a few members seem to be on vacation these weeks, so let's skip the
meeting this week (we will hold our next meeting next week).

Feel free to contact me if you have any questions or need any help :)

Happy New Year everyone!

--
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin
URL: 

From bcafarel at redhat.com Wed Jan 3 10:29:51 2018
From: bcafarel at redhat.com (Bernard Cafarelli)
Date: Wed, 3 Jan 2018 11:29:51 +0100
Subject: Re: [openstack-dev] Tempest plugin for Neutron Stadium projects
In-Reply-To: 
References: 
Message-ID: 

That makes sense, thanks! I will add the topic to the next networking-sfc meeting.

On 2 January 2018 at 17:06, Miguel Lavalle wrote:
> Hi Neutron community,
>
> During the last Neutron drivers meeting, we discussed whether all the
> Neutron Stadium projects should have their Tempest code in
> https://github.com/openstack/neutron-tempest-plugin/. It was decided to
> allow Stadium projects to get their tests into the consolidated plugin, but it
> will not be a requirement. The assumption is that many projects might be
> stretched in resources and we don't want to create more work for them.
>
> Best regards
>
> Miguel
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Bernard Cafarelli

From cdent+os at anticdent.org Wed Jan 3 10:48:26 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 3 Jan 2018 10:48:26 +0000 (GMT)
Subject: [openstack-dev] [tc] [all] TC Report Review
Message-ID: 

I haven't done a TC Report for this week, but I have written a sort of review of the many reports written last year. Find it at

https://anticdent.org/tc-report-2017-in-review.html

Thanks and Happy New Year.

-- 
Chris Dent (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent

From tobias at citynetwork.se Wed Jan 3 11:27:13 2018
From: tobias at citynetwork.se (Tobias Rydberg)
Date: Wed, 3 Jan 2018 12:27:13 +0100
Subject: [openstack-dev] [publiccloud-wg] Reminder for todays meeting
Message-ID: <0502bfd2-2840-ac24-5c40-10ba5c076d99@citynetwork.se>

Hi all,

Time again for a meeting for the Public Cloud WG - today at 1400 UTC in #openstack-meeting-3.

Agenda and etherpad at: https://etherpad.openstack.org/p/publiccloud-wg

See you later!

Tobias Rydberg

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3945 bytes
Desc: S/MIME Cryptographic Signature
URL: 

From wenranxiao at gmail.com Wed Jan 3 13:05:03 2018
From: wenranxiao at gmail.com (wenran xiao)
Date: Wed, 3 Jan 2018 21:05:03 +0800
Subject: [openstack-dev] [neutron] Which type of traffic can be collect by Metering.
Message-ID: 

Hey,

When I create a label rule for ingress, it needs a remote_ip, which becomes the source address in the neutron-meter-xxx rule. I don't know why remote_ip is required here; it can only collect the traffic between the qg-xxx interface in the namespace and remote_ip. If I want to get the traffic of one of my VMs, how can I do that? Exclude the others and set remote_ip to 0.0.0.0/0? Which types of traffic can be collected by Metering?

Any suggestions are welcome.

Bests,
Wenran Xiao

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mrhillsman at gmail.com Wed Jan 3 13:52:43 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Wed, 3 Jan 2018 07:52:43 -0600
Subject: [openstack-dev] Ohayo! Q1 2018
Message-ID: 

https://etherpad.openstack.org/p/TYO-ops-meetup-2018

Hey everyone,

What do you think about the new logo? Just a friendly reminder that the Ops Meetup for Spring 2018 is approaching (March 7-8, 2018 in Tokyo) and we are looking for additional topics.
Spring 2018 will have NFV+General on day one and Enterprise+General on day two. Add additional topics to the etherpad or +/- 1 those already proposed. Additionally if you are attending and would like to moderate a session, add your name to the moderator list near the bottom of the etherpad. -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: opsmeetuplogo.png Type: image/png Size: 38873 bytes Desc: not available URL: From vetrisko at gmail.com Wed Jan 3 15:24:58 2018 From: vetrisko at gmail.com (milanisko k) Date: Wed, 03 Jan 2018 15:24:58 +0000 Subject: [openstack-dev] [ironic-inspector] Resigning my core-reviewer duties Message-ID: Folks, as announced already on the Ironic upstream meeting, I'm hereby resigning my core-reviewer duties. I've changed my downstream occupation recently and I won't be able to keep up anymore. Thank you all, I really enjoyed collaborating with the wonderful Ironic community! Best regards, milan -------------- next part -------------- An HTML attachment was scrubbed... URL: From joao-sa-silva at alticelabs.com Wed Jan 3 15:41:04 2018 From: joao-sa-silva at alticelabs.com (=?iso-8859-1?Q?Jo=E3o_Paulo_S=E1_da_Silva?=) Date: Wed, 3 Jan 2018 15:41:04 +0000 Subject: [openstack-dev] [Zun] Containers in privileged mode Message-ID: Hello, I created the BP: https://blueprints.launchpad.net/zun/+spec/add-capacities-to-containers . About the clear containers, I'm not quite sure how using them solves my capabilities situation. Can you elaborate on that? Will zun ever be able to launch LXD containers? Kind regards, João -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Jan 3 15:46:42 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 03 Jan 2018 10:46:42 -0500 Subject: [openstack-dev] [Release-job-failures][kuryr][release] Tag of openstack/kuryr-kubernetes failed In-Reply-To: References: Message-ID: <1514994337-sup-7973@lrrr.local> Excerpts from zuul's message of 2018-01-03 14:46:48 +0000: > Build failed. > > - publish-openstack-releasenotes http://logs.openstack.org/bf/bfe7b055d19537f2c59125678410b3d25d2cb61d/tag/publish-openstack-releasenotes/1d51c55/ : FAILURE in 5m 04s > Based on http://logs.openstack.org/bf/bfe7b055d19537f2c59125678410b3d25d2cb61d/tag/publish-openstack-releasenotes/1d51c55/ara/result/43697d56-2d99-457f-b8e8-44d6d7fcba33/ it looks like the issue is an incomplete list of dependencies: Running Sphinx v1.6.5 making output directory... Exception occurred: File "conf.py", line 60, in ImportError: No module named kuryr_kubernetes The full traceback has been saved in /tmp/sphinx-err-iNXwM7.log, if you want to report the issue to the developers. Please also report this if it was a user error, so that a better error message can be provided next time. A bug report can be filed in the tracker at . Thanks! From aj at suse.com Wed Jan 3 16:59:21 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 3 Jan 2018 17:59:21 +0100 Subject: [openstack-dev] [Release-job-failures][kuryr][release] Tag of openstack/kuryr-kubernetes failed In-Reply-To: <1514994337-sup-7973@lrrr.local> References: <1514994337-sup-7973@lrrr.local> Message-ID: On 2018-01-03 16:46, Doug Hellmann wrote: > Excerpts from zuul's message of 2018-01-03 14:46:48 +0000: >> Build failed. 
>> >> - publish-openstack-releasenotes http://logs.openstack.org/bf/bfe7b055d19537f2c59125678410b3d25d2cb61d/tag/publish-openstack-releasenotes/1d51c55/ : FAILURE in 5m 04s >> > > Based on > http://logs.openstack.org/bf/bfe7b055d19537f2c59125678410b3d25d2cb61d/tag/publish-openstack-releasenotes/1d51c55/ara/result/43697d56-2d99-457f-b8e8-44d6d7fcba33/ > it looks like the issue is an incomplete list of dependencies: > > Running Sphinx v1.6.5 > making output directory... > > Exception occurred: > File "conf.py", line 60, in > ImportError: No module named kuryr_kubernetes > The full traceback has been saved in /tmp/sphinx-err-iNXwM7.log, if you want to report the issue to the developers. > Please also report this if it was a user error, so that a better error message can be provided next time. > A bug report can be filed in the tracker at . Thanks! Fix is available since End of November: https://review.openstack.org/#/c/523290/ but not reviewed yet ;( Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From mdulko at redhat.com Wed Jan 3 17:16:19 2018 From: mdulko at redhat.com (mdulko at redhat.com) Date: Wed, 03 Jan 2018 18:16:19 +0100 Subject: [openstack-dev] [Release-job-failures][kuryr][release] Tag of openstack/kuryr-kubernetes failed In-Reply-To: References: <1514994337-sup-7973@lrrr.local> Message-ID: <1514999779.14365.25.camel@redhat.com> On Wed, 2018-01-03 at 17:59 +0100, Andreas Jaeger wrote: > On 2018-01-03 16:46, Doug Hellmann wrote: > > Excerpts from zuul's message of 2018-01-03 14:46:48 +0000: > > > Build failed. > > > > > > - publish-openstack-releasenotes http://logs.openstack.org/bf/bfe7b055d19537f2c59125678410b3d25d2cb61d/tag/publish-openstack-releasenotes/1d51c55/ : FAILURE in 5m 04s > > > > > > > Based on > > http://logs.openstack.org/bf/bfe7b055d19537f2c59125678410b3d25d2cb61d/tag/publish-openstack-releasenotes/1d51c55/ara/result/43697d56-2d99-457f-b8e8-44d6d7fcba33/ > > it looks like the issue is an incomplete list of dependencies: > > > > Running Sphinx v1.6.5 > > making output directory... > > > > Exception occurred: > > File "conf.py", line 60, in > > ImportError: No module named kuryr_kubernetes > > The full traceback has been saved in /tmp/sphinx-err-iNXwM7.log, if you want to report the issue to the developers. > > Please also report this if it was a user error, so that a better error message can be provided next time. > > A bug report can be filed in the tracker at . Thanks! > > Fix is available since End of November: > https://review.openstack.org/#/c/523290/ but not reviewed yet ;( > > Andreas Thanks for the notice, I've informed #openstack-kuryr and will make sure to push this through once core reviewers will be available. It probably won't be today though. From ranand at suse.com Wed Jan 3 17:50:32 2018 From: ranand at suse.com (Ritesh Anand) Date: Wed, 03 Jan 2018 10:50:32 -0700 Subject: [openstack-dev] [designate] need help with managed resource tenant ID References: <5A4D1033020000900001D997@prv-mh.provo.novell.com> Message-ID: <5A4D17E8020000900001D9AC@prv-mh.provo.novell.com> Hi Stackers, Happy new year!! 
I am working with the Designate service and have been seeing this warning in designate-central.log:

2017-12-12 06:25:47.022 31171 WARNING designate.central.service [-] Managed Resource Tenant ID is not properly configured

It looks like the config options [service:central]/managed_resource_tenant_id = "" and managed_resource_email = "" are being used to hold resources internally created by Designate (namely PTR records for FIPs, etc.).

I need your help finding good values for these options for our deployments. What are you using?

Any suggestions welcome!

Thanks,
Ritesh

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gr at ham.ie Wed Jan 3 17:58:54 2018
From: gr at ham.ie (Graham Hayes)
Date: Wed, 3 Jan 2018 17:58:54 +0000
Subject: Re: [openstack-dev] [designate] need help with managed resource tenant ID
In-Reply-To: <5A4D17E8020000900001D9AC@prv-mh.provo.novell.com>
References: <5A4D1033020000900001D997@prv-mh.provo.novell.com> <5A4D17E8020000900001D9AC@prv-mh.provo.novell.com>
Message-ID: 

On 03/01/18 17:50, Ritesh Anand wrote:
> Hi Stackers,
>
> Happy new year!!
>
> I am working with the Designate service and have been seeing this warning
> in designate-central.log:
> 2017-12-12 06:25:47.022 31171 WARNING designate.central.service [-]
> Managed Resource Tenant ID is not properly configured
>
> It looks like the config options
> [service:central]/managed_resource_tenant_id = "" and
> managed_resource_email = ""
> are being used to hold resources internally created by Designate (namely
> PTR records for FIPs, etc.).
>
> I need your help finding good values for these options for our
> deployments.
> What are you using?

We have previously used the project ID of the designate service project. As long as it is an actual unique project, it is fine.

The email is used in the SOA record for reverse DNS zones, and as a temporary email while we import secondary zones. This means that it should be a per-customer thing, and would usually be the email for the team running the DNS service.

Thanks,

Graham

> Any suggestions welcome!
>
> Thanks,
> Ritesh
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: OpenPGP digital signature
URL: 

From hongbin034 at gmail.com Wed Jan 3 20:20:11 2018
From: hongbin034 at gmail.com (Hongbin Lu)
Date: Wed, 3 Jan 2018 15:20:11 -0500
Subject: Re: [openstack-dev] [Zun] Containers in privileged mode
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jan 3, 2018 at 10:41 AM, João Paulo Sá da Silva <joao-sa-silva at alticelabs.com> wrote:

> Hello,
>
> I created the BP: https://blueprints.launchpad.net/zun/+spec/add-capacities-to-containers .

Thanks for creating the BP.

> About the clear containers, I'm not quite sure how using them solves my
> capabilities situation. Can you elaborate on that?

What I was trying to say is that Zun offers a choice of container runtimes: runc or clear containers. I am not sure how clear containers deal with capabilities and privilege escalation. I will leave this question to others.

> Will zun ever be able to launch LXD containers?

Not for now. Only Docker is supported.
> > > Kind regards, > > João > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Jan 3 21:43:21 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 3 Jan 2018 15:43:21 -0600 Subject: [openstack-dev] [keystone] [ptg] Rocky PTG planning Message-ID: Hey all, It's about that time to start our pre-PTG planning activities. I've started an etherpad and bootstrapped it with some basic content [0]. Please take the opportunity to add topics to the schedule. It doesn't matter if it is cross-project or keystone specific. The sooner we get ideas flowing the easier it will be to coordinate cross-project tracks with other groups. We'll organize the content into a schedule after a couple week. Let me know if you have any questions. Thanks, Lance [0] https://etherpad.openstack.org/p/keystone-rocky-ptg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From mriedemos at gmail.com Thu Jan 4 00:02:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 3 Jan 2018 18:02:53 -0600 Subject: [openstack-dev] [nova] Working toward Queens feature freeze and RC1 Message-ID: Now that most people are getting back into work after the holiday break, I thought I'd try and recap some of where we're at in the release, and what to look forward to in the next few weeks. Schedule -------- https://wiki.openstack.org/wiki/Nova/Queens_Release_Schedule We have two big dates coming up: 1. Thursday, Jan 18 - non-client release freeze: This is the final release date for libraries like os-vif and os-traits. I'll be on vacation the week of the 15th so I'll actually be pushing for these releases on the 11th. *Let me know if you have work that depends on an os-vif or os-traits release so I can be sure it's in.* 2. Thursday, Jan 25 - queens-3 milestone, feature freeze and final novaclient release: We'll only have two weeks until RC1 (Feb 8) so I don't expect much in the way of feature freeze exceptions since we need that time to focus on stabilizing everything that made it in by the feature freeze, plus docs and upgrade work to prepare for Rocky. I've started building a list of things that need to be done by the time we get to RC1: https://etherpad.openstack.org/p/nova-queens-release-candidate-todo Blueprints ---------- Looking at the priorities for Queens: https://specs.openstack.org/openstack/nova-specs/priorities/queens-priorities.html * Alternate hosts is nearly done, but hit a snag today: https://bugs.launchpad.net/nova/+bug/1741125 * Nested resource providers: I'm going to need someone closer to this work like Jay or Eric to provide an update on where things are at in the series of changes and what absolutely needs to get done. I have personally found it hard to track what the main focus items are for the nested resource providers / traits / granular resource provider request changes so I need someone to summarize and lay out the review goals for the next two weeks. * Volume multiattach support: There are some lower-hanging fruit changes up for review at the bottom of the series. 
Steve Noyes and I are working on getting the integration testing in place for this [1]. We're going to need to figure out what to do, if anything, about checking the new microversion in the compute API for multiattach and failing in the API vs failing in the compute. See my thoughts in the API patch [2]. There are also changes on the Cinder side that need to happen for multiattach to work (in the volume drivers, volume type and policy rules). The rest of the blueprints are tracked here: https://etherpad.openstack.org/p/nova-queens-blueprint-status I've been trying to keep the stuff that has had a decent amount of review and progress sorted toward the top of the list since I'd like to focus on what can still get completed in Queens versus spreading the review load across everything that's still open. Reminders --------- * Our list of untriaged bugs is growing, we're up to 53. [3] * As mentioned earlier, I'm going to be on vacation the week of the 15th. I made the mistake of taking my laptop with me to Mexico last year around the Ocata FF/RC1 and I don't plan on doing that this time around. So if you need something from me, please be sure to bring it up before Friday the 12th at the latest. Once I'm back I'll be focusing on wrapping up for feature freeze and then RC1 and PTG planning. [1] https://review.openstack.org/#/c/266633/ [2] https://review.openstack.org/#/c/271047/37 [3] http://tiny.cc/psmwpy -- Thanks, Matt From mriedemos at gmail.com Thu Jan 4 00:46:36 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 3 Jan 2018 18:46:36 -0600 Subject: [openstack-dev] [nova] about backup instance booted from volume In-Reply-To: References: Message-ID: <65070e0f-21f6-25e0-65eb-ea5f1ba1d403@gmail.com> On 12/27/2017 9:52 PM, Jaze Lee wrote: > 2017-12-28 10:56 GMT+08:00 Jaze Lee : >> Hello, >> This is the spec about backup a instance booted from volume, anyone >> who is interested on booted from volume can help to review this. Any >> suggestion is welcome. > The spec is here. > https://review.openstack.org/#/c/530214/2 > > > Thanks a lot... >> >> >> >> >> -- >> 谦谦君子 > > > I dug up this old change from Fei Long Wang about the same thing: https://review.openstack.org/#/c/164494/ As I mentioned in the bug and the spec, you should reference, or we can restore, that change and you can take ownership of it if Fei Long doesn't plan on working on it. It will require a new microversion which should be called out in the spec review. This is something to discuss for the Rocky release at this point since we're past spec freeze for Queens. -- Thanks, Matt From whayutin at redhat.com Thu Jan 4 00:46:44 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 3 Jan 2018 19:46:44 -0500 Subject: [openstack-dev] [tripleo] containers-multinode is broken, what's our plan In-Reply-To: References: Message-ID: On Tue, Jan 2, 2018 at 11:05 PM, Emilien Macchi wrote: > Mistral broke us with https://review.openstack.org/#/c/506185/ and we > had a promotion yesterday so now our CI deploy Mistral with this > patch. > It breaks some Mistral actions, including the ones needed by > config-download (in featureset010). 
> > Steve has a fix: https://review.openstack.org/#/c/530781 but there is > no core review yet so we decided to proceed this way: > > 1) Carry Steve's patch in Mistral distgit: > https://review.rdoproject.org/r/#/c/11140/ - DONE > 2) Remove featureset010 from promotion requirements - DONE > 3) Once we have a promotion, we'll be able to land > https://review.openstack.org/#/c/530783/ - IN PROGRESS > 4) Once https://review.openstack.org/#/c/530783/ is landed, and the > upstream patch is landed, revert > https://review.rdoproject.org/r/#/c/11140/ (otherwise RDO will become > inconsistent) and failing to build on master) > 5) Re-add featureset010 in promotion requirements (revert > https://review.rdoproject.org/r/#/c/11142) so we'll catch the issue > next time. > > Thanks, > -- > Emilien Macchi > Thanks for all your hardwork and direction here. You kept the lights on while we were all out. We'll be monitoring the zuul queue for any errors, thanks! The TripleO-CI team is also working on the tools we'll need moving forward to help bring the jobs that were moved to non-voting back to voting. If anyone is interested or has suggestions our sprint work is spec'd out here [1]. We have a planning meeting tomorrow to work through the details. We'll also be taking on any unfinished work from the end of last year. Thanks [1] https://trello.com/b/U1ITy0cu/tripleo-ci-squad?menu=filter&filter=label:Sprint%206 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Thu Jan 4 08:59:35 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 4 Jan 2018 09:59:35 +0100 Subject: [openstack-dev] [ironic-inspector] Resigning my core-reviewer duties In-Reply-To: References: Message-ID: <3c0793d9-57b3-d8e0-2c78-4cd1da7a7965@redhat.com> On 01/03/2018 04:24 PM, milanisko k wrote: > Folks, > > as announced already on the Ironic upstream meeting, I'm hereby resigning my > core-reviewer duties. I've changed my downstream occupation recently and I won't > be able to keep up anymore. As I said many times, I'm really sad to hear it, but I'm glad that you've found new cool challenges :) I have removed your rights. I've also done a similar change to Yuiko, who is apparently no longer active in the community. Thanks both for your incredible contributions that allowed ironic-inspector to be what it is now! > > Thank you all, I really enjoyed collaborating with the wonderful Ironic community! > > Best regards, > milan > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From honjo.rikimaru at po.ntt-tx.co.jp Thu Jan 4 09:22:26 2018 From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo) Date: Thu, 4 Jan 2018 18:22:26 +0900 Subject: [openstack-dev] [oslo][oslo.log]Error will be occurred if watch_log_file option is true Message-ID: Hello, The below bug was reported in Masakari's Launchpad. I think that this bug was caused by oslo.log. (And, the root cause is a bug of pyinotify using by oslo.log. The detail is written in the bug report.) 
* masakari-api failed to launch due to setting of watch_log_file and log_file
https://bugs.launchpad.net/masakari/+bug/1740111

There is a possibility that this bug affects all OpenStack components using oslo.log. (But the processes running under uwsgi [1] weren't affected when I tried to reproduce it. I haven't figured out the reason for this yet...)

Could you help us? And what should we do...?

[1] e.g. nova-api, cinder-api, keystone...

Best regards,
-- 
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikimaru at po.ntt-tx.co.jp

From chkumar246 at gmail.com Thu Jan 4 09:46:29 2018
From: chkumar246 at gmail.com (Chandan kumar)
Date: Thu, 4 Jan 2018 15:16:29 +0530
Subject: [openstack-dev] [qa] Office Hours Report 2018-01-04
Message-ID: 

Hello,

Thanks everyone for attending the QA office hour. Since it's the start of the year, attendance was low. We managed to triage some bugs opened/changed in the last 14 days. The IRC report [0] and full log [1] are available through meetbot.

**Bug Triage Summary**

* Bug #1740194 in devstack: "Apache2 unable to start as 2 MPM modules enabled on Fedora 27"
  Status: Confirmed, Related Review: https://review.openstack.org/#/c/527048/
  https://bugs.launchpad.net/devstack/+bug/1740194

* Bug #1740480 in devstack: "502 Proxy Error"
  Status: New, Action: Needs help in triaging
  https://bugs.launchpad.net/devstack/+bug/1740480

* Bug #1740920 in devstack: "stable/newton branch does not work because keystone does not have stable/newton branch"
  Status: Invalid
  https://bugs.launchpad.net/devstack/+bug/1740920

* Bug #1741097 in devstack: "Installing pip fails on RHEL 7.4 with SSL error"
  Status: In Progress, Related Review: https://review.openstack.org/#/c/530991/
  https://bugs.launchpad.net/devstack/+bug/1741097

* Bug #1740544 in tempest: "Volume retype fails when migration occurs"
  Status: Confirmed
  https://bugs.launchpad.net/tempest/+bug/1740544

* Bug #1739829 in tempest: "tempest-full job failing in stable/pike with 404 from keystone during tempest verify-config"
  Status: Confirmed, Related Review: https://review.openstack.org/#/c/530915/ (needs to be backported to stable branches)
  https://bugs.launchpad.net/tempest/+bug/1739829

* Bug #1740258 in tempest: "[scenario]/img_dir is deprecated but required"
  Status: Undecided, Action: Needs discussion
  https://bugs.launchpad.net/tempest/+bug/1740258

Links:
[0]. http://eavesdrop.openstack.org/meetings/qa_office_hours/2018/qa_office_hours.2018-01-04-09.02.html
[1]. http://eavesdrop.openstack.org/meetings/qa_office_hours/2018/qa_office_hours.2018-01-04-09.02.log.html

Thanks,

Chandan Kumar

From periyasamy.palanisamy at ericsson.com Thu Jan 4 10:29:49 2018
From: periyasamy.palanisamy at ericsson.com (Periyasamy Palanisamy)
Date: Thu, 4 Jan 2018 10:29:49 +0000
Subject: [openstack-dev] [openstack-ansible] problems with bringing up openstack in AIO flavor with OVS
Message-ID: 

Hi OSA Experts,

I'm trying to bring up OpenStack using openstack-ansible in the AIO flavor with the neutron ml2 plugin set to ovs. OSA is being executed from the OPNFV XCI deployer with the attached openstack_user_config and user_variables*.yml files.

I can see the installation [1] succeeds, but I am not able to boot nova VMs, and the VxLAN tunnel is not established between the compute (VM) and the neutron-agents container (inside the same VM). Here are the observations:

1. Timeout errors occur continuously in neutron-openvswitch-agent.log on the compute VM. It looks like this happens while accessing RPC for the agent report status [2].
Increasing rpc_response_timeout to 180 sec and restarting the openvswitch-agent, l3, dhcp and neutron-server services also doesn't help. Here is the neutron-openvswitch-agent.log [3]; the neutron agent-list CLI shows empty output. There are no weird errors in the rabbitmq log.

2. Nova boot fails with a "The requested availability zone is not available" error and corresponding error logs in neutron-server.log [4].

3. On the compute VM, the vxlan port is not created on the br-tun bridge, and the neutron-agents container doesn't have a br-tun bridge at all [5].

Please have a look and let me know if you need more details.

[1] https://paste.ubuntu.com/26318366/
[2] https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L312
[3] https://paste.ubuntu.com/26318305/
[4] https://paste.ubuntu.com/26318445/
[5] https://paste.ubuntu.com/26318474/

Thanks,
Periyasamy

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: openstack_user_config.yml
Type: application/octet-stream
Size: 4209 bytes
Desc: openstack_user_config.yml
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: user_variables_os-nosdn-nofeature.yml
Type: application/octet-stream
Size: 1179 bytes
Desc: user_variables_os-nosdn-nofeature.yml
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: user_variables.yml
Type: application/octet-stream
Size: 4546 bytes
Desc: user_variables.yml
URL: 

From liam.young at canonical.com Thu Jan 4 10:35:12 2018
From: liam.young at canonical.com (Liam Young)
Date: Thu, 4 Jan 2018 10:35:12 +0000
Subject: [openstack-dev] [charms] evolving the ha interface type
Message-ID: 

Hi James,

I like option 2 but I think there is a problem with it. I don't think the hacluster charm sets any data down the relation with the principal until it has first received data from the principal. As I understand it, option 2 would change this behaviour so that hacluster immediately sets an api-version option for the principal to consume. The only problem is that the principal does not know whether to wait for this api-version information or not, e.g. when the principal is deciding whether to JSON-encode its data it cannot differentiate between:

a) An old version of the hacluster charm which does not support api-version or json
b) A new version of the hacluster charm which has not set the api-version yet.

Thanks
Liam

From joao-sa-silva at alticelabs.com Thu Jan 4 10:42:27 2018
From: joao-sa-silva at alticelabs.com (João Paulo Sá da Silva)
Date: Thu, 4 Jan 2018 10:42:27 +0000
Subject: [openstack-dev] [kolla] LXD driver in nova
Message-ID: 

Hello!

Is it possible to use the LXD driver for nova compute instead of KVM?

Kind regards,
João

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dabarren at gmail.com Thu Jan 4 10:52:54 2018
From: dabarren at gmail.com (Eduardo Gonzalez)
Date: Thu, 4 Jan 2018 11:52:54 +0100
Subject: Re: [openstack-dev] [kolla] LXD driver in nova
In-Reply-To: 
References: 
Message-ID: 

Hi João,

It would be possible, but there is not any container image with the nova-lxc code in it at the moment (no binary RPM in RDO either).

The only supported drivers (for now) are: kvm, qemu, vmware and hyperv (xen in progress).

Feel free to add lxd as a driver to the project :)

Regards

2018-01-04 11:42 GMT+01:00 João Paulo Sá da Silva <joao-sa-silva at alticelabs.com>:

> Hello!
>
> Is it possible to use the LXD driver for nova compute instead of KVM?
>
> Kind regards,
> João
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From a.chadin at servionica.ru Thu Jan 4 11:32:41 2018
From: a.chadin at servionica.ru (Александр Чадин (Alexander Chadin))
Date: Thu, 4 Jan 2018 11:32:41 +0000
Subject: [openstack-dev] [watcher] January holidays
Message-ID: <0190DC6B-918B-4F16-A105-F68C106A0DF8@servionica.ru>

Hi Watcher team.

I'm on vacation till January 9. Our next weekly meeting is scheduled for January 10; I'd be happy to see you all there :)

____

Alex

From mark.baker at canonical.com Thu Jan 4 11:38:36 2018
From: mark.baker at canonical.com (Mark Baker)
Date: Thu, 4 Jan 2018 11:38:36 +0000
Subject: Re: [openstack-dev] [kolla] LXD driver in nova
In-Reply-To: 
References: 
Message-ID: 

There is a python nova-lxd binary (.deb) as part of Ubuntu OpenStack. To enable it, a good place to start is James Page's blog:

https://javacruft.wordpress.com/2017/09/01/openstack-pike-for-ubuntu-16-04-lts/

The cloud archive wiki page is also worth checking:

https://wiki.ubuntu.com/OpenStack/CloudArchive

Best Regards

Mark Baker

On 4 January 2018 at 10:52, Eduardo Gonzalez wrote:

> Hi João,
>
> It would be possible, but there is not any container image with the
> nova-lxc code in it at the moment (no binary RPM in RDO either).
>
> The only supported drivers (for now) are: kvm, qemu, vmware and hyperv (xen in
> progress).
>
> Feel free to add lxd as a driver to the project :)
>
> Regards
>
> 2018-01-04 11:42 GMT+01:00 João Paulo Sá da Silva <
> joao-sa-silva at alticelabs.com>:
>
>> Hello!
>>
>> Is it possible to use the LXD driver for nova compute instead of KVM?
>>
>> Kind regards,
>>
>> João
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cdent+os at anticdent.org Thu Jan 4 13:17:57 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Thu, 4 Jan 2018 13:17:57 +0000 (GMT)
Subject: [openstack-dev] [qa] [api] [all] use gabbi and tempest with just YAML
Message-ID: 

(this is a request for assistance and verification)

Back in July[1] I wrote about some experiments with using gabbi[2] with tempest. In that message I said:

    At some point it may be interesting to explore the option of "put a
    gabbit in dir X" and tempest will run it for you.
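(For anyone who hasn't seen one: a "gabbit" is just a YAML file of gabbi tests. A minimal sketch, with purely illustrative requests and paths, looks something like:

    tests:
      - name: list servers returns 200
        GET: /servers
        status: 200

      - name: create server without a body fails
        POST: /servers
        status: 400

Each entry is one HTTP request plus assertions about the response; gabbi also supports things like response_json_paths checks and $ENVIRON['...'] substitution for values taken from the environment.)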
I've finally started working on that, and have a pull request to gabbi-tempest[3] with a WIP that allows you to do this: GABBI_TEMPEST_PATH=/path/one:/path/two tempest run --regex gabbi Within /path/one and /path/two (or whatever, or however many, paths you want) are yaml files containg tests in the gabbi format. If the env variable is not set a 'gabbits' directory in the current working dir is checked. This makes it possible for projects to make purely api-driven (and "clientless"[4]) integration tests by requiring the gabbi-tempest plugin and writing some YAML. The gabbi-tempest plugin is responsible for getting authentication and service catalog information (using standard tempest calls) from keystone and creating a suite of environment variables (such as PLACEMENT_SERVICE and COMPUTE_BASE). I have a sample file[5] that confirms resource provider and allocation handling across the process of booting a single server. It demonstrates some of the potential. Don't be too scared by the noisy YAML anchors at the top, that's just an experiment to see what can be done to manage URLs without having to know URLs. I'm pretty sure this can be useful for integration tests, cloud verification, and interop validation, but I'm suspiciously biased in favor of gabbi from years of positive use, so would like some additional input and feedback. Gabbi is already very valuable for functional tests. When integrating with tempest it gets the discovery power that tempest provides and things like FLAVOR_REF, both useful in integration or real cloud scenarios. Thanks for your attention. [1] http://lists.openstack.org/pipermail/openstack-dev/2017-July/120369.html [2] https://gabbi.readthedocs.org/ [3] https://github.com/cdent/gabbi-tempest/pull/2 [4] A bad term, in much the same way serverless is. Of course there is a client, but in this case the client is gabbi making raw http requests rather than a library which might impose its own expectations and obfuscations on the interactions. [5] https://github.com/cdent/gabbi-tempest/blob/d570f5da52ba80b6d4b75b18e10897c49e9b6aed/samples/multi.yaml -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Thu Jan 4 14:12:25 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 04 Jan 2018 09:12:25 -0500 Subject: [openstack-dev] [oslo][oslo.log]Error will be occurred if watch_log_file option is true In-Reply-To: References: Message-ID: <1515074711-sup-5593@lrrr.local> Excerpts from Rikimaru Honjo's message of 2018-01-04 18:22:26 +0900: > Hello, > > The below bug was reported in Masakari's Launchpad. > I think that this bug was caused by oslo.log. > (And, the root cause is a bug of pyinotify using by oslo.log. The detail is > written in the bug report.) > > * masakari-api failed to launch due to setting of watch_log_file and log_file > https://bugs.launchpad.net/masakari/+bug/1740111 > > There is a possibility that this bug will affects all openstack components using oslo.log. > (But, the processes working with uwsgi[1] wasn't affected when I tried to reproduce. > I haven't solved the reason of this yet...) > > Could you help us? > And, what should we do...? > > [1] > e.g. nova-api, cinder-api, keystone... > > Best regards, The bug is in pyinotify. According to the git repo [1] that project was last updated in June of 2015. I recommend we move off of pyinotify entirely, since it appears to be unmaintained. If there is another library to do the same thing we should switch to it (there seem to be lots of options [2]). 
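For what it's worth, the standard library may already cover the common logrotate case without any inotify at all: logging.handlers.WatchedFileHandler stat()s the file before each emit and reopens it if the device/inode has changed. A minimal sketch (the path is illustrative, and whether this matches oslo.log's exact semantics would need checking):

    import logging
    from logging.handlers import WatchedFileHandler

    # Reopens the log file automatically when it is moved or removed
    # (e.g. by logrotate), with no pyinotify dependency involved.
    handler = WatchedFileHandler('/var/log/masakari/masakari-api.log')
    logging.getLogger().addHandler(handler)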
If there is no viable replacement or fork, we should deprecate that log watching feature (and anything else for which we use pyinotify) and remove it ASAP. We'll need a volunteer to do the evaluation and update oslo.log. Doug [1] https://github.com/seb-m/pyinotify [2] https://pypi.python.org/pypi?%3Aaction=search&term=inotify&submit=search From mriedemos at gmail.com Thu Jan 4 14:46:38 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 4 Jan 2018 08:46:38 -0600 Subject: [openstack-dev] zuulv3 log structure and format grumblings Message-ID: I've talked to a few people on the infra team about this but I'm not sure what is temporary and transitional and what is permanent and needs to be fixed, and how to fix it. The main issue is for newer jobs like tempest-full, the logs are under controller/logs/ and we lose the log analyze formatting for color, being able to filter on log level, and being able to link directly to a line in the logs. Should things be like logs/controller/* instead? If not, can someone point me to where the log analyze stuff runs so I can see if we need to adjust a path regex for the new structure? The other thing is zipped up files further down the directory structure now have to be downloaded, like the config files: http://logs.openstack.org/69/530969/1/check/tempest-full/223c175/controller/logs/etc/nova/ I think that's part of devstack-gate's post-test host cleanup routine where it modifies gz files so they can be viewed in the browser. Please let me know if there is something I can help with here because I really want to get the formatting back to help with debugging CI issues and I've taken for granted how nice things were for oh these many years. -- Thanks, Matt From openstack at nemebean.com Thu Jan 4 15:17:39 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 4 Jan 2018 09:17:39 -0600 Subject: [openstack-dev] [tripleo] containers-multinode is broken, what's our plan In-Reply-To: References: Message-ID: This is a situation where having temprevert/pin/cherrypick functionality again would have been really helpful. I realize that doesn't help in the immediate circumstance, but it's something to consider for the future. On 01/02/2018 10:05 PM, Emilien Macchi wrote: > Mistral broke us with https://review.openstack.org/#/c/506185/ and we > had a promotion yesterday so now our CI deploy Mistral with this > patch. > It breaks some Mistral actions, including the ones needed by > config-download (in featureset010). > > Steve has a fix: https://review.openstack.org/#/c/530781 but there is > no core review yet so we decided to proceed this way: > > 1) Carry Steve's patch in Mistral distgit: > https://review.rdoproject.org/r/#/c/11140/ - DONE > 2) Remove featureset010 from promotion requirements - DONE > 3) Once we have a promotion, we'll be able to land > https://review.openstack.org/#/c/530783/ - IN PROGRESS > 4) Once https://review.openstack.org/#/c/530783/ is landed, and the > upstream patch is landed, revert > https://review.rdoproject.org/r/#/c/11140/ (otherwise RDO will become > inconsistent) and failing to build on master) > 5) Re-add featureset010 in promotion requirements (revert > https://review.rdoproject.org/r/#/c/11142) so we'll catch the issue > next time. 
> > Thanks, > From sfinucan at redhat.com Thu Jan 4 15:39:17 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 04 Jan 2018 15:39:17 +0000 Subject: [openstack-dev] [os-api-ref][doc] Adding openstack-doc-core to os-api-ref Message-ID: <1515080357.32193.33.camel@redhat.com> I'm not sure what the procedure for this is but here goes. I've noticed that the 'os-api-ref' project seems to have its own group of cores [1], many of whom are no longer working on OpenStack (at least, not full-time), and has a handful of open patches against it [2]. Since the doc team has recently changed its scope from writing documentation to enabling individual projects to maintain their own docs, we've become mainly responsible for projects like 'openstack-doc- theme'. Given that the 'os-api-ref' project is a Sphinx thing required for multiple OpenStack projects, it seems like something that could/should fall into the doc team's remit. I'd like to move this project into the remit of the 'openstack-doc- core' team, by way of removing the 'os-api-ref-core' group or adding 'openstack-doc-core' to the list of included groups. In both cases, existing active cores will be retained. Do any of the existing 'os-api- ref' cores have any objections to this? Stephen PS: I'm not sure how this affects things from a release management perspective. Are there PTLs for these sorts of projects? [1] https://review.openstack.org/#/admin/groups/1391,members [2] https://review.openstack.org/#/q/project:openstack/os-api-ref+statu s:open From gr at ham.ie Thu Jan 4 16:06:53 2018 From: gr at ham.ie (Graham Hayes) Date: Thu, 4 Jan 2018 16:06:53 +0000 Subject: [openstack-dev] [os-api-ref][doc] Adding openstack-doc-core to os-api-ref In-Reply-To: <1515080357.32193.33.camel@redhat.com> References: <1515080357.32193.33.camel@redhat.com> Message-ID: <2c1703fa-8bba-2060-0d3b-dfc3f0311f59@ham.ie> On 04/01/18 15:39, Stephen Finucane wrote: > I'm not sure what the procedure for this is but here goes. > > I've noticed that the 'os-api-ref' project seems to have its own group > of cores [1], many of whom are no longer working on OpenStack (at > least, not full-time), and has a handful of open patches against it > [2]. Since the doc team has recently changed its scope from writing > documentation to enabling individual projects to maintain their own > docs, we've become mainly responsible for projects like 'openstack-doc- > theme'. Given that the 'os-api-ref' project is a Sphinx thing required > for multiple OpenStack projects, it seems like something that > could/should fall into the doc team's remit. > > I'd like to move this project into the remit of the 'openstack-doc- > core' team, by way of removing the 'os-api-ref-core' group or adding > 'openstack-doc-core' to the list of included groups. In both cases, > existing active cores will be retained. Do any of the existing 'os-api- > ref' cores have any objections to this? No objection from me > Stephen > > PS: I'm not sure how this affects things from a release management > perspective. Are there PTLs for these sorts of projects? It does seem like a docs tooling thing, so maybe moving it to the docs project umbrella might be an idea? 
> [1] https://review.openstack.org/#/admin/groups/1391,members > [2] https://review.openstack.org/#/q/project:openstack/os-api-ref+statu > s:open > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From gr at ham.ie Thu Jan 4 16:36:57 2018 From: gr at ham.ie (Graham Hayes) Date: Thu, 4 Jan 2018 16:36:57 +0000 Subject: [openstack-dev] [api] APIs schema consumption discussion In-Reply-To: <258619eb-6980-d5c1-39d6-2bebacef3a4c@ham.ie> References: <9be46605-73ad-d1f5-d4ef-e2218b7c6dba@redhat.com> <1510675620-sup-9724@lrrr.local> <2a309ee7-f262-f83b-f86f-94559366953b@redhat.com> <258619eb-6980-d5c1-39d6-2bebacef3a4c@ham.ie> Message-ID: <25138e16-af3e-29a7-a4e5-5b53e8ecd011@ham.ie> On 22/11/17 20:04, Graham Hayes wrote: > When I was talking to Gil about it, I suggested writing a new sphinx / > docutils formatter. I am not sure how feasible it would be, but it could > be possible (as sphinx has the whole page tree in memory when writing it > out, we may be able to output it in some sort of structured format. > > I would be hesitant to change how we write docs - this change took long > enough to get in place, and the ability to add / remove bits to suit > different projects is a good thing. Pages like [1] would be hard to do > in a standard machine readable format, and I think they definitely make > the docs better. > > - Graham > > 1 - https://developer.openstack.org/api-ref/compute/#servers-servers > > Ok, I have done a quick (read: very rough and hacky) prototype of the formatter here [1] It uses the sphinx formatter plugin system, and reads from what we already have in the api-ref/* folder. It outputs [2] yaml that describes each endpoint, and the fields in the request / response. If there is interest, I can clean up the patch, and look at supporting microversions. 1 - https://review.openstack.org/#/c/528801/ 2 - http://paste.openstack.org/show/629241/ - Graham > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From andrea.frittoli at gmail.com Thu Jan 4 16:37:28 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Thu, 04 Jan 2018 16:37:28 +0000 Subject: [openstack-dev] [QA] No QA meetings until next year In-Reply-To: References: Message-ID: Dear all, The first QA meeting of the year will be next week. Andrea Frittoli On Wed, 20 Dec 2017, 3:56 pm Andrea Frittoli, wrote: > Dear all, > > due to the holiday season, there will be no QA meetings until 2018. > > Andrea Frittoli (andreaf) > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marcin.juszkiewicz at linaro.org Thu Jan 4 17:15:44 2018 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Thu, 4 Jan 2018 18:15:44 +0100 Subject: [openstack-dev] [kolla] Re: About maridb 10.1 on kolla In-Reply-To: References: Message-ID: <76f5e411-d614-8c83-da98-00ce833e12a4@linaro.org> W dniu 29.12.2017 o 07:58, Jeffrey Zhang pisze: > recently, a series patches about mariadb is pushed. Current issue is > > - using different mariadb binary from different repo ( from percona, > Mariadb official, linux distro ) > - using different version number of mariadb ( 10.0 and 10.1 ) > > To make life easier, some patches are pushed to unify all of these. Here > is my thought about this > > - try to bump to 10.1, which is released long time ago > - use mariadb binary provided by linux disto as much as possible > > So here is plan > > - trying to upgrade to mariadb 10.1 [0][1] > - use mariadb 10.1 provided by RDO on redhat family distro [2] > - use mariadb 10.0 provided by UCA on ubuntu  >   - it is told that, it not work as excepted [3] >   - if this does not work. we can upgrade to mariadb 10.1 provides by >     mariadb official on ubuntu. > - use mariadb 10.1 provided by os repo on Debian. How we are with testing/merging? For Debian to be deployable we need 529199 in images as rest of changes are kolla-ansible and can be cherry-picked before deployment. > [0] https://review.openstack.org/#/c/529505/ - fix kolla-ansible for > mariadb 10.1 merged > [1] https://review.openstack.org/#/c/529199/ - Fix MariaDB bootstrap for 10.1 version > [2] https://review.openstack.org/#/c/468632/ - Consume RDO packaged mariadb version > [3] https://review.openstack.org/#/c/426953/ - Revert "Removed percona > from ubuntu repos" merged From msm at redhat.com Thu Jan 4 17:23:34 2018 From: msm at redhat.com (michael mccune) Date: Thu, 4 Jan 2018 12:23:34 -0500 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Happy new year to all and welcome to the first API-SIG meeting of 2018. As the SIG is ramping back up after the holiday break we had a few topics to kick off the new year and get the ball rolling. The SIG is working to complete our year in review report that will be collected and distributed by the OpenStack foundation. Without spoiling the report, 2017 was a year of steady progress for the SIG with several efforts aimed at improving interoperability and expanding the inclusiveness of the group. Graham Hayes(mugsie) shared his in-progress work[7][8] towards generating machine readable output for API schemas. This is a topic that the SIG has studied in the past and that continues to generate interest from the community. At the core of this issue is the idea that if a project can provide API schemas in a common format with their documentation then the job of SDK implementors and other integrators will be greatly eased. If you have thoughts or opinions on this topic, please review mugsie's proposals and add your input. Monty Taylor(mordred) has been investigating how pagination is implemented across the OpenStack ecosystem. It appears that there are several differing implementations that exist and this is causing some friction in the SDK development process. Although there is already a guideline about pagination, the SIG is examining how best they can help projects move towards consistency in this area and will continue to discuss solutions in the next meeting. * The list of bugs [5] indicates several missing or incomplete guidelines. 
* The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None this week. # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None this week # Guidelines Currently Under Review [3] * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 * WIP: Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://review.openstack.org/#/c/524467/ [8] https://review.openstack.org/#/c/528801/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From mchandras at suse.de Thu Jan 4 17:57:44 2018 From: mchandras at suse.de (Markos Chandras) Date: Thu, 4 Jan 2018 17:57:44 +0000 Subject: [openstack-dev] [openstack-ansible] problems with bringing up openstack in AIO flavor with OVS In-Reply-To: References: Message-ID: <2fa526ca-0ac6-ab39-8974-665c0a518aa8@suse.de> Hello, On 04/01/18 10:29, Periyasamy Palanisamy wrote: > Hi OSA Experts, > >   > > I’m trying to bring up openstack using openstack-ansible in AIO flavor > by having neutron ml2 plugin set to ovs. > > OSA is being executed from OPNFV XCI deployer with attached > openstack_user_config and user_variables*.yml files. > Would it be possible to try and reproduce it with the openstack-ansible master branch? That will make our lives easier since we could eliminate all the XCI specific layers. -- markos SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg From cboylan at sapwetik.org Thu Jan 4 18:20:38 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 04 Jan 2018 10:20:38 -0800 Subject: [openstack-dev] zuulv3 log structure and format grumblings In-Reply-To: References: Message-ID: <1515090038.1999260.1224370696.77F60478@webmail.messagingengine.com> On Thu, Jan 4, 2018, at 6:46 AM, Matt Riedemann wrote: > I've talked to a few people on the infra team about this but I'm not > sure what is temporary and transitional and what is permanent and needs > to be fixed, and how to fix it. 
> > The main issue is for newer jobs like tempest-full, the logs are under > controller/logs/ and we lose the log analyze formatting for color, being > able to filter on log level, and being able to link directly to a line > in the logs. > > Should things be like logs/controller/* instead? If not, can someone > point me to where the log analyze stuff runs so I can see if we need to > adjust a path regex for the new structure? I don't think that is necessary, instead the next item you noticed is related to the issue. > > The other thing is zipped up files further down the directory structure > now have to be downloaded, like the config files: > > http://logs.openstack.org/69/530969/1/check/tempest-full/223c175/controller/logs/etc/nova/ The issue is that the wsgi os-loganalyze application is only applied to .txt log files if they are also gzipped: https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/templates/logs.vhost.erb#n95 As you've noticed the new job processes log files differently. In the case of etc contents they are now zipped when they weren't before and in the case of the service logs themselves are no longer gzipped when they were gzipped before. So if we want os-loganalyze to annotate these log files they should be gzipped by the job before getting copied to the log server (this also helps quite a bit with disk usage on the log server itself so is a good idea regardless). > > I think that's part of devstack-gate's post-test host cleanup routine > where it modifies gz files so they can be viewed in the browser. It was, but this new job does not use devstack-gate at all, there is only devstack + job config. Fixes for this will need to be applied to the new job itself rather than to devstack-gate. I've pushed up https://review.openstack.org/531208 as a quick check that this is indeed the general problem, but for longer term fix I think we want to update our log publishing ansible roles to compress everything that isn't already compressed. > > Please let me know if there is something I can help with here because I > really want to get the formatting back to help with debugging CI issues > and I've taken for granted how nice things were for oh these many years. Please check that the above change results in the os-loganalyze behavior that you expect and if adventurous you can help us updating the generic publishing role. Clark From openstack at fried.cc Thu Jan 4 18:38:06 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 4 Jan 2018 12:38:06 -0600 Subject: [openstack-dev] [nova] Working toward Queens feature freeze and RC1 In-Reply-To: References: Message-ID: <20e3c927-48c0-4169-dcf2-85f658c50747@fried.cc> Matt, et al- > * Nested resource providers: I'm going to need someone closer to this > work like Jay or Eric to provide an update on where things are at in the > series of changes and what absolutely needs to get done. I have > personally found it hard to track what the main focus items are for the > nested resource providers / traits / granular resource provider request > changes so I need someone to summarize and lay out the review goals for > the next two weeks. Overall goals for nested resource providers in Queens: (A) Virt drivers should be able to start expressing resource inventory as a hierarchy, including traits, and have that understood by the resource tracker and scheduler. (B) Ops should be able to create flavors requesting resources with traits, including e.g. same-class resources with different traits. 
Whereas many big pieces of the framework are merged: - Placement-side API changes giving providers parents/roots, allowing tree representation and querying. - A rudimentary ProviderTree class on the compute side for representation of tree structure and inventory; and basic usage thereof by the report client. - Traits affordance in the placement API. ...we're still missing the following pieces that actually enable those goals: - NRP affordance in GET /allocation_candidates . PATCHES: - . STATUS: Not proposed . PRIORITY: Critical . OWNER: jaypipes . DESCRIPTION: In the current master branch, the placement API will report allocation candidates from [(a single non-sharing provider) and (sharing providers associated via aggregate with that non-sharing provider)]. It needs to be enhanced to report allocation candidates from [(non-sharing providers in a tree) and (sharing providers associated via aggregate with any of those non-sharing providers)]. This is critical for two reasons: 1) Without it, NRP doesn't provide any interesting use cases; and 2) It is prerequisite to the remainder of the Queens NRP work, listed below. . ACTION: Jay to sling some code - Granular Resource Requests . PATCHES: Placement side: https://review.openstack.org/#/c/517757/ Report client side: https://review.openstack.org/#/c/515811/ . STATUS: WIP, blocked on the above . PRIORITY: High . OWNER: efried . DESCRIPTION: Ability to request separate groupings of resources from GET /allocation_candidates via flavor extra specs. The groundwork (ability to parse flavors, construct querystrings, parse querystrings, etc.) has already merged. The remaining patches need to do the appropriate join-fu in a new placement microversion; and flip the switch to send flavor-parsed request groupings from report client. The former needs to be able to make use of NRP affordance in GET /allocation_candidates, so is blocked on the above work item. The latter subsumes parsing of traits from flavors (the non-granular part of which actually got a separate blueprint, request-traits-in-nova). . ACTION: Wait for the above - ComputeDriver.update_provider_tree() . PATCHES: Series starting at https://review.openstack.org/#/c/521685/ . STATUS: Bottom ready for core reviews; top WIP. . PRIORITY: ? . OWNER: efried . DESCRIPTION: This is the next phase in the evolution of compute driver inventory reporting (get_available_resource => get_inventory => update_provider_tree). The series includes a bunch of enabling groundwork in SchedulerReportClient and ProviderTree. . ACTION: Reviews on the bottom (core reviewers); address comments/issues in the middle (efried); finish WIPs on top (efried). Also write up a mini-spec describing this piece in more detail (efried). Thanks, Eric (efried) . From openstack.org at sodarock.com Thu Jan 4 19:28:46 2018 From: openstack.org at sodarock.com (John Villalovos) Date: Thu, 4 Jan 2018 11:28:46 -0800 Subject: [openstack-dev] [Ironic] Removal of tempest plugin code from openstack/ironic & openstack/ironic-inspector In-Reply-To: References: Message-ID: Note: I am proposing in the next Ironic meeting ( https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting ) that we move forward on removing the tempest plugin code from openstack/ironic and openstack/ironic-inspector. It will have been over three weeks since the initial email in this thread (15-Dec-2017) about removing the code. 
Thanks, John On Fri, Dec 15, 2017 at 7:27 AM, John Villalovos wrote: > I wanted to send out a note to any 3rd Party CI or other users of the > tempest plugin code inside either openstack/ironic or > openstack/ironic-inspector. That code has been migrated to the > openstack/ironic-inspector-plugin repository. We have been busily ( > https://review.openstack.org/#/q/topic:ironic-tempest-plugin ) migrating > all of the projects to use this new repository. > > If you have a 3rd Party CI or something else that is depending on the > tempest plugin code please migrate it to use openstack/ironic-tempest- > plugin. > > We plan to remove the tempest plugin code on Tuesday 19-Dec-2017 from > openstack/ironic and openstack/ironic-tempest-plugin. And then after that > doing backports of those changes to the stable branches. > > openstack/ironic Removal patch > https://review.openstack.org/527733 > > openstack/ironic-inspector Removal patch > https://review.openstack.org/527743 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Thu Jan 4 20:54:07 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 4 Jan 2018 15:54:07 -0500 Subject: [openstack-dev] [nova] Working toward Queens feature freeze and RC1 In-Reply-To: <20e3c927-48c0-4169-dcf2-85f658c50747@fried.cc> References: <20e3c927-48c0-4169-dcf2-85f658c50747@fried.cc> Message-ID: On 01/04/2018 01:38 PM, Eric Fried wrote: > Matt, et al- > >> * Nested resource providers: I'm going to need someone closer to this >> work like Jay or Eric to provide an update on where things are at in the >> series of changes and what absolutely needs to get done. I have >> personally found it hard to track what the main focus items are for the >> nested resource providers / traits / granular resource provider request >> changes so I need someone to summarize and lay out the review goals for >> the next two weeks. > > > Overall goals for nested resource providers in Queens: > (A) Virt drivers should be able to start expressing resource inventory > as a hierarchy, including traits, and have that understood by the > resource tracker and scheduler. > (B) Ops should be able to create flavors requesting resources with > traits, including e.g. same-class resources with different traits. > > Whereas many big pieces of the framework are merged: > > - Placement-side API changes giving providers parents/roots, allowing > tree representation and querying. > - A rudimentary ProviderTree class on the compute side for > representation of tree structure and inventory; and basic usage thereof > by the report client. > - Traits affordance in the placement API. > > ...we're still missing the following pieces that actually enable those > goals: > > - NRP affordance in GET /allocation_candidates > . PATCHES: - > . STATUS: Not proposed > . PRIORITY: Critical > . OWNER: jaypipes > . DESCRIPTION: In the current master branch, the placement API will > report allocation candidates from [(a single non-sharing provider) and > (sharing providers associated via aggregate with that non-sharing > provider)]. It needs to be enhanced to report allocation candidates > from [(non-sharing providers in a tree) and (sharing providers > associated via aggregate with any of those non-sharing providers)]. > This is critical for two reasons: 1) Without it, NRP doesn't provide any > interesting use cases; and 2) It is prerequisite to the remainder of the > Queens NRP work, listed below. > . ACTION: Jay to sling some code Just as an aside... 
while I'm currently starting this work, until the virt drivers and eventually the generic device manager or PCI device manager are populating parent/child information for resource providers, there's nothing that will be returned in the GET /allocation_candidates response w.r.t. nested providers. So, yes, it's kind of a prerequisite, but until inventory records are being populated from the compute nodes, the allocation candidates work is going to be all academic/tests. Best, -jay > - Granular Resource Requests > . PATCHES: > Placement side: https://review.openstack.org/#/c/517757/ > Report client side: https://review.openstack.org/#/c/515811/ > . STATUS: WIP, blocked on the above > . PRIORITY: High > . OWNER: efried > . DESCRIPTION: Ability to request separate groupings of resources from > GET /allocation_candidates via flavor extra specs. The groundwork > (ability to parse flavors, construct querystrings, parse querystrings, > etc.) has already merged. The remaining patches need to do the > appropriate join-fu in a new placement microversion; and flip the switch > to send flavor-parsed request groupings from report client. The former > needs to be able to make use of NRP affordance in GET > /allocation_candidates, so is blocked on the above work item. The > latter subsumes parsing of traits from flavors (the non-granular part of > which actually got a separate blueprint, request-traits-in-nova). > . ACTION: Wait for the above > > - ComputeDriver.update_provider_tree() > . PATCHES: Series starting at https://review.openstack.org/#/c/521685/ > . STATUS: Bottom ready for core reviews; top WIP. > . PRIORITY: ? > . OWNER: efried > . DESCRIPTION: This is the next phase in the evolution of compute > driver inventory reporting (get_available_resource => get_inventory => > update_provider_tree). The series includes a bunch of enabling > groundwork in SchedulerReportClient and ProviderTree. > . ACTION: Reviews on the bottom (core reviewers); address > comments/issues in the middle (efried); finish WIPs on top (efried). > Also write up a mini-spec describing this piece in more detail (efried). > > Thanks, > Eric (efried) > . > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From arxcruz at redhat.com Thu Jan 4 21:17:06 2018 From: arxcruz at redhat.com (Arx Cruz) Date: Thu, 4 Jan 2018 22:17:06 +0100 Subject: [openstack-dev] [tripleo] TripleO CI end of sprint status Message-ID: Hello, On January 03 we came to the end of a sprint using our new team structure, and here are the highlights. Sprint Review: This was a tech debt sprint, and due to the holidays, with most of the team out, we didn't set a goal for this sprint, leaving the team free to work on the tech debt cards as much as time permitted.
One can see the results of the sprint via https://trello.com/c/fvLpZMF6/ Tripleo CI community meeting - Promotion issues due to mistral - http://lists.openstack.org/pipermail/openstack-dev/2018-January/125935.html - The plan (from Emilien's email) - Carry Steve's patch in Mistral distgit: - https://review.rdoproject.org/r/#/c/11140/ - DONE - Remove featureset010 from promotion requirements - DONE - Once we have a promotion, we'll be able to land https://review.openstack.org/#/c/530783/ - IN PROGRESS - Once https://review.openstack.org/#/c/530783/ is landed, and the upstream patch is landed, revert https://review.rdoproject.org/r/#/c/11140/ (otherwise RDO will become inconsistent and fail to build on master) - Re-add featureset010 in promotion requirements (revert https://review.rdoproject.org/r/#/c/11142) so we'll catch the issue next time. - Landed in current-tripleo because we don't have voting in the multinode job and scenarios 001, 002 and 003 were non-voting - Scenario jobs not voting due to timeouts - http://lists.openstack.org/pipermail/openstack-dev/2018-January/125935.html - Which scenario / services we care about - We need an investigation to determine what we want to test in our scenario jobs, and what we don't, in order to release resources and focus our work - Graphite report status - Working on grafana - Initially focused on OVB jobs Ruck and Rover What is Ruck and Rover? One person on our team is designated Ruck and another Rover. The Ruck is responsible for monitoring the CI, checking for failures, opening bugs, and participating in meetings, and is your focal point for any CI issues. The Rover is responsible for working on these bugs and fixing problems, while the rest of the team stays focused on the sprint. For more information about our structure, check [1]. List of bugs that Ruck and Rover were working on: - https://bugs.launchpad.net/tripleo/+bug/1736113 CI: newton promotion fails because no stable/newton branch in aodh - https://bugs.launchpad.net/tripleo/+bug/1740940 Tempest test on Ocata failing with Error: No valid Host was found - https://bugs.launchpad.net/tripleo/+bug/1740934 Tracker Bug: Tempest fails with packaging error - python-oslo-db-tests - https://bugs.launchpad.net/tripleo/+bug/1739661 Tracker Bug: Intermittent failures creating OVB stacks on RDO Cloud since upgrade (** would like to close this bug - tenant has been cleaned up and is working) - https://bugs.launchpad.net/tripleo/+bug/1739639 ci.centos gates are failing with a THT default change We also have our new Ruck and Rover for this week: - Ruck - Arx Cruz - arxcruz|ruck - Rover - Gabrielle Cerami - panda|rover If you have any questions and/or suggestions, please contact us [1] https://review.openstack.org/#/c/509280/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Thu Jan 4 21:35:14 2018 From: abishop at redhat.com (Alan Bishop) Date: Thu, 4 Jan 2018 16:35:14 -0500 Subject: [openstack-dev] [castellan] Transferring ownership of secrets to another user Message-ID: Has there been any previous discussion on providing a mechanism for transferring ownership of a secret from one user to another? Cinder supports the notion of transferring volume ownership to another user, who may be in another tenant/project. However, if the volume is encrypted it's possible (even likely) that the new owner will not be able to access the encryption secret.
The new user will have the encryption key ID (secret ref), but may not have permission to access the secret, let alone delete the secret should the volume be deleted later. This issue is currently flagged as a cinder bug [1]. This is a use case where the ownership of the encryption secret should be transferred to the new volume owner. Alan [1] https://bugs.launchpad.net/cinder/+bug/1735285 From openstack at fried.cc Thu Jan 4 21:44:05 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 4 Jan 2018 15:44:05 -0600 Subject: [openstack-dev] [nova] Working toward Queens feature freeze and RC1 In-Reply-To: References: <20e3c927-48c0-4169-dcf2-85f658c50747@fried.cc> Message-ID: <10c568da-4bc9-71d6-d6e0-8dd4402cf109@fried.cc> Folks- >> - NRP affordance in GET /allocation_candidates >>    . PATCHES: - >>    . STATUS: Not proposed >>    . PRIORITY: Critical >>    . OWNER: jaypipes >>    . DESCRIPTION: In the current master branch, the placement API will >> report allocation candidates from [(a single non-sharing provider) and >> (sharing providers associated via aggregate with that non-sharing >> provider)].  It needs to be enhanced to report allocation candidates >> from [(non-sharing providers in a tree) and (sharing providers >> associated via aggregate with any of those non-sharing providers)]. >> This is critical for two reasons: 1) Without it, NRP doesn't provide any >> interesting use cases; and 2) It is prerequisite to the remainder of the >> Queens NRP work, listed below. >>    . ACTION: Jay to sling some code > > Just as an aside... while I'm currently starting this work, until the > virt drivers and eventually the generic device manager or PCI device > manager is populating parent/child information for resource providers, > there's nothing that will be returned in the GET /allocation_candidates > response w.r.t. nested providers. > > So, yes, it's kind of a prerequisite, but until inventory records are > being populated from the compute nodes, the allocation candidates work > is going to be all academic/tests. > > Best, > -jay Agree it's more of a tangled web than a linear sequence. My thought was that it doesn't make sense for virt drivers to expose their inventory in tree form until it's going to afford them some benefit. But to that point, I did forget to mention that Xen is trying to do just that in Queens for VGPU support. They already have a WIP [1] which would consume the WIPs at the top of the ComputeDriver.update_provider_tree() series [2]. [1] https://review.openstack.org/#/c/521041/ [2] https://review.openstack.org/#/c/521685/ I also don't necessarily agree that we need PCI manager changes or a generic device manager for this to work. As long as the virt driver knows how to a) expose the resources in its provider tree, b) consume the allocation candidate coming from the scheduler, and c) create/attach resources based on that info, those other pieces would just get in the way. I'm hoping the Xen VGPU use case proves that. E . From anlin.kong at gmail.com Thu Jan 4 21:45:26 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Fri, 5 Jan 2018 10:45:26 +1300 Subject: [openstack-dev] [all] propose to upgrade python kubernetes (the k8s python client) to 4.0.0 which breaks oslo.service In-Reply-To: References: Message-ID: On Tue, Jan 2, 2018 at 1:56 AM, Eyal Leshem wrote: > Hi , > > According to https://github.com/eventlet/eventlet/issues/147 - it's looks > that eventlet > has issue with "multiprocessing.pool". > > The ThreadPool used in code that auto-generated by swagger. 
> > Possible workaround for that is to monky-patch the client library , > and replace the pool with greenpool. > Hi, leyal, I'm not very familiar with eventlet, but how can I monkey-patch the kubernetes python lib? The only way I can see now is to replace oslo.service with something else, e.g. cotyledon, to avoid using eventlet, but that's a significant change. I also found this bug https://bugs.launchpad.net/taskflow/+bug/1225275 in taskflow, where they chose not to use the multiprocessing module. Any other suggestions are welcome! (A rough sketch of the kind of shim being discussed is included below.) > > If someone has better workaround, please share that with us :) > > btw , I don't think that should be treated as compatibility issue > in the client python as it's an eventlet issue.. > > Thanks , > leyal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Jan 4 22:15:28 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 4 Jan 2018 16:15:28 -0600 Subject: [openstack-dev] zuulv3 log structure and format grumblings In-Reply-To: <1515090038.1999260.1224370696.77F60478@webmail.messagingengine.com> References: <1515090038.1999260.1224370696.77F60478@webmail.messagingengine.com> Message-ID: On 1/4/2018 12:20 PM, Clark Boylan wrote: > I've pushed up https://review.openstack.org/531208 as a quick check that this is indeed the general problem, but for longer term fix I think we want to update our log publishing ansible roles to compress everything that isn't already compressed. Yup this fixes the log formatting/color/linking stuff, thanks! I noted that the config files still have to be downloaded, and dmsimard pointed out that conf/ini/filters files will have to be mapped here: http://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/templates/logs.vhost.erb#n24 I don't know if we can just convert everything under etc/ or not? That seems better to me, but I don't know anything about modifying vhost files. -- Thanks, Matt From sshnaidm at redhat.com Thu Jan 4 22:26:45 2018 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Fri, 5 Jan 2018 00:26:45 +0200 Subject: [openstack-dev] [TripleO][CI] Which network templates to use in CI (with and without net isolation)? Message-ID: Hi, all we have now network templates in tripleo-ci repo[1] and we'd like to move them to tht repo[2] and to use them from there. We have also default templates defined in overcloud-deploy role[3]. So the question is - which templates should we use and how to configure them? One option for configuration is to set network args (incl. isolation) in the overcloud-deploy role[3] depending on other features (like docker, ipv6, etc). The other is to set them in featureset[4] files for each job. The question is also which network templates we want to gate in CI and should it be the same we have by default in tripleo-quickstart-extras? We have a few patches from James (@slagle) to address this topic[5] and from Arx for this issue[6]. Please feel free to share your thoughts on what should be tested in CI from the network templates, and where.
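Picking up the eventlet/kubernetes thread above: a minimal sketch of the greenpool shim Lingxian asks about. This is only an assumption of how such a monkey patch could look - it is untested against kubernetes 4.0.0, and the _GreenResult wrapper is a guess at how much of the AsyncResult API the swagger-generated code actually needs:

    import eventlet
    eventlet.monkey_patch()

    import multiprocessing.pool


    class _GreenResult(object):
        # quacks like multiprocessing.pool.AsyncResult
        def __init__(self, gt):
            self._gt = gt

        def get(self, timeout=None):
            return self._gt.wait()


    class GreenThreadPool(object):
        # duck-typed stand-in for multiprocessing.pool.ThreadPool
        def __init__(self, processes=None, *args, **kwargs):
            self._pool = eventlet.GreenPool(processes or 8)

        def apply_async(self, func, args=(), kwds=None):
            return _GreenResult(self._pool.spawn(func, *args, **(kwds or {})))

        def close(self):
            pass

        def join(self):
            self._pool.waitall()


    # install the shim *before* the kubernetes client is imported, so the
    # generated code picks it up instead of the real ThreadPool
    multiprocessing.pool.ThreadPool = GreenThreadPool
    from kubernetes import client  # noqa

This sidesteps the multiprocessing internals that trip over eventlet's patched threading, at the cost of silently changing the client's concurrency model; replacing oslo.service (e.g. with cotyledon) remains the cleaner long-term answer.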
Thanks [1] https://github.com/openstack-infra/tripleo-ci/tree/821d84f34c851a79495f0205ad3c8dac928c286f/test-environments [2] https://github.com/openstack/tripleo-heat-templates/tree/master/ci/environments/network [3] https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-deploy/tasks/pre-deploy.yml#L21-L51 [4] https://github.com/openstack/tripleo-quickstart/blob/cf793bbb8368f89cd28214fe21adca2df48ef7f3/config/general_config/featureset001.yml#L26-L28 [5] https://review.openstack.org/#/c/531224/ https://review.openstack.org/#/c/525331 https://review.openstack.org/#/c/531221 [6] https://review.openstack.org/#/c/512225/ -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Thu Jan 4 22:39:15 2018 From: james.slagle at gmail.com (James Slagle) Date: Thu, 4 Jan 2018 17:39:15 -0500 Subject: [openstack-dev] [TripleO][CI] Which network templates to use in CI (with and without net isolation)? In-Reply-To: References: Message-ID: On Thu, Jan 4, 2018 at 5:26 PM, Sagi Shnaidman wrote: > Hi, all > > we have now network templates in tripleo-ci repo[1] and we'd like to move > them to tht repo[2] and to use them from there. They've already been moved from tripleo-ci to tripleo-heat-templates: https://review.openstack.org/#/c/476708/ > We have also default > templates defined in overcloud-deploy role[3]. > So the question is - which templates should we use and how to configure > them? We should use the ones for ci, not the examples under tripleo-heat-templates/network/config. Those examples (especially for multiple-nics) are meant to be clear and orderly so that users can easily understand how to adapt them to their own environments. Especially for multiple-nics, there isn't really a sane default, and I don't think we should make our examples match what we use in ci. It may be possible to update ovb so that it deploys virt environments such that the examples work. That feels like a lot of unnecessary churn though. But even then ci is using mtu:1350, which we don't want in the examples. > One option for configuration is set network args (incl. isolation) in > overcloud-deploy role[3] depending on other features (like docker, ipv6, > etc). > The other is to set them in featureset[4] files for each job. > The question is also which network templates we want to gate in CI and > should it be the same we have by default in tripleo-quickstart-extras? > > We have a few patches from James (@slagle) to address this topic[5] What I'm trying to do in these patches is just use the templates and environments from tripleo-heat-templates that were copied from tripleo-ci in 476708. I gathered that was the intent since they were copied into tripleo-heat-templates. Otherwise, why do we need them there at all? -- -- James Slagle -- From andrea.frittoli at gmail.com Thu Jan 4 22:40:03 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Thu, 04 Jan 2018 22:40:03 +0000 Subject: [openstack-dev] zuulv3 log structure and format grumblings In-Reply-To: References: <1515090038.1999260.1224370696.77F60478@webmail.messagingengine.com> Message-ID: On Thu, 4 Jan 2018, 11:15 pm Matt Riedemann, wrote: > On 1/4/2018 12:20 PM, Clark Boylan wrote: > > I've pushed up https://review.openstack.org/531208 as a quick check > that this is indeed the general problem, but for longer term fix I think we > want to update our log publishing ansible roles to compress everything that > isn't already compressed.
> > Yup this fixes the log formatting/color/linking stuff, thanks! > > I noted that the config files still have to be downloaded, and dmsimard > pointed out that conf/ini/filters files will have to be mapped here: > > > http://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/templates/logs.vhost.erb#n24 The ansible role that stages files supports renaming files with specified extensions to .txt, but it looks like something is not going right. I'm on PTO until the 9th; if it's still an issue I'll look into it as soon as I'm back. Andrea (andreaf) > > > I don't know if we can just convert everything under etc/ or not? That > seems better to me, but I don't know anything about modifying vhost files. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wenranxiao at gmail.com Fri Jan 5 02:48:15 2018 From: wenranxiao at gmail.com (wenran xiao) Date: Fri, 5 Jan 2018 10:48:15 +0800 Subject: [openstack-dev] [neutron] Metering can't count traffic for floating ip, or internal ip. Message-ID: hi all, neutron metering can only count traffic that we send to *remote_ip* (egress) and that *remote_ip* sends to us (ingress). I think we should add a method to count the traffic for a floating ip or internal ip. Any suggestions are welcome. Best regards Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhengzhenyulixi at gmail.com Fri Jan 5 02:53:56 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Fri, 5 Jan 2018 10:53:56 +0800 Subject: [openstack-dev] [nova][neutron] Filtering Instances by IP address performance improvement test result Message-ID: Hi All, We are working on patches to improve the performance of filtering instances by IP address this cycle. As discussed in the previous ML[1], it contains patches from both Nova and Neutron[2][3][4][5][6]. As the POC is almost functional (the neutron extension part seems not to be working - it cannot be successfully listed in patchset 14 of [5], so I have to bypass the "if" condition checking for the neutron "ip-substring-filtering" extension to make it work, but that seems easy to fix), I made some tests to check what kind of improvement has been achieved with those patches. In the tests, I wrote a simple script [7] (the script is silly, please don't laugh at me :) ) which generated 2000 vm records in the Nova DB with IP addresses allocated (one IP for each vm), and also 2000 port records with corresponding IP addresses in my local devstack env.
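To make the mechanics being measured concrete, the flow the patches implement is roughly the following - a toy sketch with in-memory stand-ins for the neutron ports table and the nova instances table, not the actual nova/neutron code:

    # neutron side: ports with fixed IPs, keyed back to instances via device_id
    ports = [
        {'device_id': 'uuid-1', 'fixed_ip': '192.168.7.2'},
        {'device_id': 'uuid-2', 'fixed_ip': '192.168.7.21'},
        {'device_id': 'uuid-3', 'fixed_ip': '10.0.0.5'},
    ]
    # nova side: instances keyed by uuid
    instances = {'uuid-1': 'vm-a', 'uuid-2': 'vm-b', 'uuid-3': 'vm-c'}

    def filter_instances_by_ip(substring):
        # 1) ask neutron for ports whose fixed IP matches the substring
        device_ids = {p['device_id'] for p in ports if substring in p['fixed_ip']}
        # 2) filter the nova instance query on those uuids (an indexed
        #    column) instead of regexp-matching cached network info
        return sorted(instances[d] for d in device_ids)

    print(filter_instances_by_ip('192.168.7.2'))  # ['vm-a', 'vm-b']

The substring '192.168.7.2' matching both 192.168.7.2 and 192.168.7.21 is also why the sub-string test below matches 66 instances and takes somewhat longer than the exact-IP case.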
Before adding those patches, querying instances with a specific IP filter takes about 4000 ms; the test was done several times and I took the averaged result: [image: Inline image 1] After adding those patches (and some modifications as mentioned above), querying with the same request takes only about 400 ms: [image: Inline image 2] So, the design seems to be working well. I also tested sub-string filtering with the IP address 192.168.7.2, which matches 66 instances, and it takes about 900 ms: [image: Inline image 3] It increased, but that seems reasonable as it matches more instances, and it is still much better than the current implementation. Please test it out in your own env if interested; the script might need some modification as I hardcoded the db connection, network_id and subnet_id. And also, please help review the patches :) [1] http://lists.openstack.org/pipermail/openstack-operators/2017-October/014459.html [2] https://review.openstack.org/#/c/509326/ [3] https://review.openstack.org/#/c/525505/ [4] https://review.openstack.org/#/c/518865/ [5] https://review.openstack.org/#/c/521683/ [6] https://review.openstack.org/#/c/525284/ [7] https://github.com/zhengzhenyu/groceries/blob/master/Ip_filtering_performance_test.py BR, Kevin Zheng -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 81123 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 93064 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 108188 bytes Desc: not available URL: From marcin.juszkiewicz at linaro.org Fri Jan 5 08:28:43 2018 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Fri, 5 Jan 2018 09:28:43 +0100 Subject: [openstack-dev] [nova] Working toward Queens feature freeze and RC1 In-Reply-To: References: Message-ID: <38c6ecd5-d11b-4140-af44-315ec2d9d060@linaro.org> On 04.01.2018 at 01:02, Matt Riedemann wrote: > I've started building a list of things that need to be done by the time > we get to RC1: > > https://etherpad.openstack.org/p/nova-queens-release-candidate-todo Can I add two small tweaks needed for the AArch64 architecture to your list? https://review.openstack.org/#/c/530965/ sets 'cpu_mode' to 'host-passthrough' as we do not have 'host-model' working due to the large number of vendors making cpu cores. https://review.openstack.org/#/c/489951/ uses UEFI as the default boot method as we have only two options: UEFI or direct kernel+initrd. Both changes have tests added. From thierry at openstack.org Fri Jan 5 09:43:44 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 5 Jan 2018 10:43:44 +0100 Subject: [openstack-dev] [tc] Technical Committee Status update, January 5th Message-ID: Hi! This is the weekly summary of Technical Committee initiatives. You can find the full list of all open topics (updated twice a week) at: https://wiki.openstack.org/wiki/Technical_Committee_Tracker If you are working on something (or plan to work on something) governance-related that is not reflected on the tracker yet, please feel free to add to it ! == Recently-approved changes == * New repos: blazar-tempest-plugin * Goal updates: heat, aodh, vitrage, mistral Not much happened during the holidays, just a few updates on the Queens goals.
== Voting in progress == The new wording to limit upgrade assertion tags to OpenStack "services" is still missing a couple of votes. Please see: https://review.openstack.org/#/c/528745/ == Under discussion == The same items as last week carried over, mostly waiting for more community input. Hopefully with the end of the holidays those changes can get more community attention: The discussion started by Graham Hayes to clarify how the testing of interoperability programs should be organized in the age of add-on trademark programs is still going on, with most people still trying to wrap their heads around the various options. We'd welcome more opinions on that thread, so please chime in on the review: https://review.openstack.org/521602 Matt Treinish proposed an update to the Python PTI for tests to be specific and explicit. Wider community input is needed on that topic. Please review at: https://review.openstack.org/519751 We still only have one goal proposed for Rocky. We need other proposals before we can make a call. See the thread: http://lists.openstack.org/pipermail/openstack-dev/2017-November/124976.html == TC member actions for the coming week(s) == With the new year, we should create ML threads to gather input on the stuck changes above. We'll also try to get to a conclusion on the release cycle length megathread as far as Rocky is concerned. We should also think about other Rocky goals as it's more than time for us to make progress there. == Office hours == To be more inclusive of all timezones and more mindful of people for whom English is not the primary language, the Technical Committee dropped its dependency on weekly meetings. So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays For the coming week, I expect discussions on the stuck changes, as well as Rocky goals. Cheers, -- Thierry Carrez (ttx) From sean.k.mooney at intel.com Fri Jan 5 09:53:15 2018 From: sean.k.mooney at intel.com (Mooney, Sean K) Date: Fri, 5 Jan 2018 09:53:15 +0000 Subject: [openstack-dev] [neutron][neutron-lib]Service function defintion files In-Reply-To: References: Message-ID: <4B1BB321037C0849AAE171801564DFA6889A72F8@IRSMSX107.ger.corp.intel.com> From: CARVER, PAUL [mailto:pc2929 at att.com] Sent: Thursday, December 28, 2017 2:57 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][neutron-lib]Service function defintion files It was a gating criterion for stadium status. The idea was that for a stadium project the neutron team would have review authority over the API but wouldn't necessarily review or be overly familiar with the implementation. A project that didn't have its API definition in neutron-lib could do anything it wanted with its API and wouldn't be a neutron subproject because the neutron team wouldn't necessarily know anything at all about it. For a neutron subproject there would at least theoretically be members of the neutron team who are familiar with the API and who ensure some sort of consistency across APIs of all neutron subprojects. This is also a gating criterion for publishing API documentation on api.openstack.org vs publishing somewhere else. Again, the idea being that the neutron team would be able, at least in some sense, to "vouch for" the OpenStack networking APIs, but only for "official" neutron stadium subprojects.
Projects that don't meet the stadium criteria, including having api-def in neutron-lib, are "anything goes" and not part of neutron because no one from the neutron team is assumed to know anything about them. They may work just fine; it's just that you can't assume that anyone from neutron has anything to do with them or even knows what they do. [Mooney, Sean K] as paul said above this has been a requirement for stadium membership for some time. ocata was effectively the first release where this came into effect https://github.com/openstack/neutron-specs/blob/master/specs/stadium/ocata.rst#how-reconcile-api-and-client-bindings but it was started in newton https://github.com/openstack/neutron-specs/blob/master/specs/newton/neutron-stadium.rst with the concept of a neutron-api project, which was folded into neutron-lib when implemented instead of being an additional pure api project. -- Paul Carver V: 732.545.7377 C: 908.803.1656 -------- Original message -------- From: Ian Wells > Date: 12/27/17 21:57 (GMT-05:00) To: OpenStack Development Mailing List > Subject: [openstack-dev] [neutron][neutron-lib]Service function defintion files Hey, Can someone explain how the API definition files for several service plugins ended up in neutron-lib? I can see that they've been moved there from the plugins themselves (e.g. networking-bgpvpn has https://github.com/openstack/neutron-lib/commit/3d3ab8009cf435d946e206849e85d4bc9d149474#diff-11482323575c6bd25b742c3b6ba2bf17) and that there's a stadium element to it judging by some earlier commits on the same directory, but I don't understand the reasoning why such service plugins wouldn't be self-contained - perhaps someone knows the history? Thanks, -- Ian. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vetrisko at gmail.com Fri Jan 5 12:21:51 2018 From: vetrisko at gmail.com (milanisko k) Date: Fri, 05 Jan 2018 12:21:51 +0000 Subject: [openstack-dev] [ironic-inspector] Resigning my core-reviewer duties In-Reply-To: <3c0793d9-57b3-d8e0-2c78-4cd1da7a7965@redhat.com> References: <3c0793d9-57b3-d8e0-2c78-4cd1da7a7965@redhat.com> Message-ID: On Thu, 4 Jan 2018 at 10:00, Dmitry Tantsur wrote: > On 01/03/2018 04:24 PM, milanisko k wrote: > > Folks, > > > > as announced already on the Ironic upstream meeting, I'm hereby resigning my > > core-reviewer duties. I've changed my downstream occupation recently and I won't > > be able to keep up anymore. > > As I said many times, I'm really sad to hear it, but I'm glad that you've found > new cool challenges :) > > Thanks man, was an honour! :) > I have removed your rights. I've also done a similar change to Yuiko, who is > apparently no longer active in the community. Thanks both for your incredible > contributions that allowed ironic-inspector to be what it is now! > > > > > Thank you all, I really enjoyed collaborating with the wonderful Ironic > community!
> > > > Best regards, > > milan > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias at citynetwork.se Fri Jan 5 14:01:41 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Fri, 5 Jan 2018 15:01:41 +0100 Subject: [openstack-dev] [publiccloud-wg] Missing features work session Message-ID: Hi everyone, During our last meeting we decided to get together at IRC for a work session dedicated to get the "Missing features list" up to date, and take the fist steps converting items into a more official list at launchpad - where we have a project [1]. Would be awesome to see as many of you as possible joining this. Where: #openstack-publiccloud When: Wednesday 10th January 1400 UTC Agenda: https://etherpad.openstack.org/p/publiccloud-wg This first effort of its kind is as you can see at the same time as bi-weekly meetings. Please send feedback of that, I'm happy to setup another session just like this - at a time that suites you better! Hope to see you there! Regards, Tobias Rydberg Chair Public Cloud WG [1] https://launchpad.net/openstack-publiccloud-wg -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From sean.mcginnis at gmx.com Fri Jan 5 14:55:02 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 5 Jan 2018 08:55:02 -0600 Subject: [openstack-dev] [release] Release countdown for week R-7, January 6 - 12 Message-ID: <20180105145501.GA8381@sm-xps> Development Focus ----------------- Teams should be focused on implementing planned work for the cycle and bug fixes. General Information ------------------- The deadline for extra ATC's is coming up on January 12. If there is someone that contributes to your project in a way that is not reflected by the usual metrics. Extra-ATCs can be added by submitting an update to the reference/projects.yaml file in the openstack/governance repo. As we get closer to the end of the cycle, we have deadlines coming up for client and non-client libraries to ensure any dependency issues are worked out and we have time to make any critical fixes before the final release candidates. To this end, it is good practice to release libraries throughout the cycle once they have accumulated any significant functional changes. The following libraries appear to have some merged changes that have not been release that could potentially impact consumers of the library. 
It would be good to consider getting these released ahead of the deadline to make sure the changes have some run time: openstack/cliff openstack/keystoneauth openstack/kuryr openstack/neutron-lib openstack/os-brick openstack/osc-placement openstack/oslo.cache openstack/oslo.messaging openstack/oslo.policy openstack/oslo.service openstack/ovsdbapp openstack/python-brick-cinderclient-ext openstack/python-cinderclient openstack/python-freezerclient openstack/python-heatclient openstack/python-ironicclient openstack/python-manilaclient openstack/python-octaviaclient openstack/python-tripleoclient openstack/python-troveclient openstack/python-watcherclient openstack/python-zunclient Upcoming Deadlines & Dates -------------------------- Final non-client library release deadline: January 18 Final client library release deadline: January 25 Queens-3 Milestone: January 25 Rocky PTG in Dublin: Week of February 26, 2018 -- Sean McGinnis (smcginnis) From cdent+os at anticdent.org Fri Jan 5 15:29:30 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 5 Jan 2018 15:29:30 +0000 (GMT) Subject: [openstack-dev] [nova] [placement] resource providers update 18-01 Message-ID: First resource provider and placement update for 2018. This year I'll be labelling the report with %y-%W to distinguish from last year, so this is 18-01. The engine of activity is still warming up for the new year, so much of this is pre-existing stuff. # Most Important Matt posted a message with some words about getting to the end of Queens smoothly. In general, getting to the end of Queens smoothly is what's most important. The message: http://lists.openstack.org/pipermail/openstack-dev/2018-January/125953.html In there are some bits related to placement and resource providers but he also identified some gaps related to understanding what's up with nested resource providers. Eric provided a response with some of that info: http://lists.openstack.org/pipermail/openstack-dev/2018-January/125977.html Related to that, there's some open discussion on a review about whether or not the ProviderTree system (mentioned in Eric's mail) is going to track shared providers. See: https://review.openstack.org/#/c/526539/ # What's Changed The / of placement no longer requires auth (which helps support automated version discovery). Placement JSON schemas are now in their own directory rather than in the handler files. The report client now uses POST /allocations to set and or clear allocations for multiple consumer uuids in one request (meaning we no longer need the migration allocations theme, below). 'limit' on /allocation_candidates has been approved and should merge today. We should probably have the discussion on if/how to use it from nova-scheduler. I'll put it on the agenda for the next scheduler meeting. # Main Themes ## Nested Providers Mentioned above, the nested-resource-providers stack has grown a long tail of changes for managing nested providers rooted on a compute node: https://review.openstack.org/#/q/topic:bp/nested-resource-providers ## Alternate Hosts Having the scheduler request and use alternate hosts is real close: https://review.openstack.org/#/q/topic:bp/return-alternate-hosts but has hit a snag with resizes and some stuff with the CachingScheduler, such as https://review.openstack.org/#/c/531211/ Alternate hosts is something we want to bring to resolution as soon as possible so it gets as much exposure as possible. 
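For reference, the 'limit' change mentioned under "What's Changed" is just an extra query-string knob on the existing endpoint. A hedged sketch of exercising it - the microversion number, host, and token are assumptions:

    import requests

    resp = requests.get(
        'http://placement.example.com/allocation_candidates',
        params={'resources': 'VCPU:1,MEMORY_MB:2048', 'limit': 10},
        headers={'OpenStack-API-Version': 'placement 1.16',  # assumed microversion
                 'X-Auth-Token': 'ADMIN_TOKEN'},             # illustrative auth
    )
    print(len(resp.json()['allocation_requests']))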
## Misc Traits, Shared, Etc Cleanups There's a stack of code that fixes up a lot of things related to traits, sharing providers, test additions and fixes to those tests. At the moment the changes are in a bug topic: https://review.openstack.org/#/q/topic:bug/1702420 # Other * https://review.openstack.org/#/c/519462/ Log options at debug when starting API services under wsgi (Does it make any sense to split this into placement and nova versions? One seems easier than the other) * https://review.openstack.org/#/q/I0c4ca6a81f213277fe7219cb905a805712f81e36 Proper error handling by _ensure_resource_provider (This is already approved for master, but there are backports.) * https://review.openstack.org/#/q/topic:bp/placement-osc-plugin Build the placement osc plugin * https://review.openstack.org/#/q/topic:bp/request-traits-in-nova request traits in nova * https://review.openstack.org/#/c/513041/ Extract instance allocation removal code * https://review.openstack.org/#/c/493865/ cover migration cases with functional tests * https://review.openstack.org/#/c/527541/ Add nova-status check for ironic flavor migration * https://review.openstack.org/#/q/topic:bp/add-support-for-vgpu Add support for VGPU * https://review.openstack.org/#/q/topic:placement_schema_separation Put the json schema in their own directory (one left) * https://review.openstack.org/#/q/topic:bug/1734625 global request id passed from nova to placement in requests (makes logging life much easier) * https://review.openstack.org/#/c/529998/ Move body examples to an isolated directory * https://review.openstack.org/#/c/524506/ Add functional tests for resource class API * https://review.openstack.org/#/c/524094/ Add functional tests for traits API # End There's probably more, but as I'm not fully up to review speed I've not seen everything yet. Next week will likely be more complete. Between now and then there will probably also be some conversations on priorities and the state of things such that we can pick what's going to fall off the radar as we race to the end of Queens. -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From lbragstad at gmail.com Fri Jan 5 16:53:52 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 5 Jan 2018 10:53:52 -0600 Subject: [openstack-dev] [policy] [keystone] Analyzing other access-control systems Message-ID: <65925814-981d-142b-9d74-1dd0032c1aaf@gmail.com> Hey all, This note is a continuation of a thread we started last year on analyzing other policy systems [0]. Now that we're back from the holidays and having policy meetings on Wednesdays [1], it'd be good to pick up the conversation again. We had a few good sessions a couple months ago going through AWS IAM policy bits and contrasting them with RBAC in OpenStack. Before we wrapped up those sessions we thought about doing the same thing with GKE or a more technical deep dive into the IAM stuff. Do we want to pick this back up in the next few weeks? We can use this thread to generate discussion about what we'd like to see and jot down ideas. It might be nice timing to get a session or two scheduled before the PTG, where we can have face-to-face discussions. Thoughts? [0] http://lists.openstack.org/pipermail/openstack-dev/2017-October/123069.html [1] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From colleen at gazlene.net Fri Jan 5 17:16:55 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 5 Jan 2018 18:16:55 +0100 Subject: [openstack-dev] Keystone Team Update - Weeks of 25 December 2017 and 1 January 2018 Message-ID: # Keystone Team Update - Weeks of 25 December 2017 and 1 January 2018 ## News Happy new year! Things have been slow during the holiday season so not much to report. The policy meeting was short but we talked about starting up our investigations into other RBAC systems again. Lance kicked off a thread to gauge interest[1]. We're also ready to start planning for the PTG. Please add your ideas to the etherpad[2]. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-January/125998.html [2] https://etherpad.openstack.org/p/keystone-rocky-ptg ## Open Specs Search query: https://goo.gl/pc8cCf None at this time ## Recently Merged Changes Search query: https://goo.gl/gu9yQa We merged 28 changes in the last two weeks. A lot of those were to convert the old dependency-injection mechanism for internal APIs to use a centralized provider manager. ## Changes that need Attention Search query: https://goo.gl/CkMmbK There are 74 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. In particular, there is a set of changes that reorganize our api-ref into a consistent format that are ready to go: https://review.openstack.org/#/q/status:open+topic:api-ref-reorganization ## Milestone Outlook https://releases.openstack.org/queens/schedule.html The extras-ATC deadline is next week, so if there are any non-code keystone contributors that we need to get on the list we should figure that out ASAP. The following week is the final non-client library release date, so things like keystonemiddleware, keystoneauth, oslo.policy, etc. will need a final release. Feature freeze is the following week - Jan 22 - 26 - so all of our in-flight major features will have to be merged by the end of that week. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From melwittt at gmail.com Fri Jan 5 18:13:41 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 5 Jan 2018 10:13:41 -0800 Subject: [openstack-dev] zuulv3 log structure and format grumblings In-Reply-To: References: Message-ID: <40cc7e65-bfc1-c30c-edb2-dbb09b4a3523@gmail.com> On Thu, 4 Jan 2018 08:46:38 -0600, Matt Riedemann wrote: > The main issue is for newer jobs like tempest-full, the logs are under > controller/logs/ and we lose the log analyze formatting for color, being > able to filter on log level, and being able to link directly to a line > in the logs. I also noticed we're missing testr_results.html.gz under controller/logs/, which was handy for seeing a summary of the tempest test results. I hope there's a way to get that back and if any infra peeps can point me in the right direction, I'm happy to help with it.
-melanie From andrea.frittoli at gmail.com Fri Jan 5 18:26:36 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Fri, 05 Jan 2018 18:26:36 +0000 Subject: [openstack-dev] zuulv3 log structure and format grumblings In-Reply-To: <40cc7e65-bfc1-c30c-edb2-dbb09b4a3523@gmail.com> References: <40cc7e65-bfc1-c30c-edb2-dbb09b4a3523@gmail.com> Message-ID: On Fri, 5 Jan 2018, 7:14 pm melanie witt, wrote: > On Thu, 4 Jan 2018 08:46:38 -0600, Matt Riedemann wrote: > > The main issue is for newer jobs like tempest-full, the logs are under > > controller/logs/ and we lose the log analyze formatting for color, being > > able to filter on log level, and being able to link directly to a line > > in the logs. > > I also noticed we're missing testr_results.html.gz under > controller/logs/, which was handy for seeing a summary of the tempest > test results. > Uhm I'm pretty sure that used to be there, so something must have changed since. I cannot troubleshoot this on my mobile, but if you want to have a look, the process test results role in zuul-jobs is what is supposed to produce that. Andrea > > I hope there's a way to get that back and if any infra peeps can point > me in the right direction, I'm happy to help with it. > > -melanie > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Fri Jan 5 18:43:59 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 05 Jan 2018 10:43:59 -0800 Subject: [openstack-dev] zuulv3 log structure and format grumblings In-Reply-To: References: <40cc7e65-bfc1-c30c-edb2-dbb09b4a3523@gmail.com> Message-ID: <1515177839.586364.1225592056.79BA9B04@webmail.messagingengine.com> On Fri, Jan 5, 2018, at 10:26 AM, Andrea Frittoli wrote: > On Fri, 5 Jan 2018, 7:14 pm melanie witt, wrote: > > > On Thu, 4 Jan 2018 08:46:38 -0600, Matt Riedemann wrote: > > > The main issue is for newer jobs like tempest-full, the logs are under > > > controller/logs/ and we lose the log analyze formatting for color, being > > > able to filter on log level, and being able to link directly to a line > > > in the logs. > > > > I also noticed we're missing testr_results.html.gz under > > controller/logs/, which was handy for seeing a summary of the tempest > > test results. > > > > Uhm I'm pretty sure that used to be there, so something must have changed > since. > I cannot troubleshoot this on my mobile, but if you want to have a look, > the process test results role in zuul-jobs is what is supposed to produce > that. To expand a bit more on that what we are attempting to do is port the log handling code in devstack-gate [0] to zuul v3 jobs living in tempest [1]. The new job in tempest itself relies on the ansible process-test-results role which can be found here [2]. Chances are something in [1] and/or [2] will have to be updated to match the behavior in [0]. 
[0] https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/functions.sh#n524 [1] https://git.openstack.org/cgit/openstack/tempest/tree/playbooks/post-tempest.yaml#n8 [2] http://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/process-test-results Hope this helps, Clark From mriedemos at gmail.com Fri Jan 5 18:45:18 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 5 Jan 2018 12:45:18 -0600 Subject: [openstack-dev] [nova][oslo] API extension policy deprecation warnings Message-ID: <53aed9dc-92c6-0667-9d8c-a15665958242@gmail.com> I've noticed that our CI logs have API extension policy deprecation warnings in them on startup, even though we don't use any non-default policy rules in our CI runs, so everything is just loaded from policy in code. Jan 05 16:58:48.794318 ubuntu-xenial-rax-dfw-0001705089 nova-compute[11289]: DEBUG oslo_policy.policy [None req-2f69f372-721c-4550-9c28-5fa610a84201 None None] The policy file policy.json could not be found. {{(pid=11289) load_rules /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:548}} Jan 05 16:58:48.797597 ubuntu-xenial-rax-dfw-0001705089 nova-compute[11289]: /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:623: UserWarning: Policy "os_compute_api:os-extended-volumes":"rule:admin_or_owner" was deprecated for removal in 17.0.0. Reason: Nova API extension concept has been removed in Pike. Those extensions have their own policies enforcement. As there is no extensions now, "os_compute_api:os-extended-volumes" policy which was added for extensions is not needed any more. Its value may be silently ignored in the future. Isn't there a way to not log a warning if the rule isn't actually set in the policy file? Similar to deprecated config options, you only get the warning on those if you've set a deprecated config option in the file, but you don't get the warnings just because they are in code and not removed yet. -- Thanks, Matt From lbragstad at gmail.com Fri Jan 5 19:08:48 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 5 Jan 2018 13:08:48 -0600 Subject: [openstack-dev] [nova][oslo] API extension policy deprecation warnings In-Reply-To: <53aed9dc-92c6-0667-9d8c-a15665958242@gmail.com> References: <53aed9dc-92c6-0667-9d8c-a15665958242@gmail.com> Message-ID: <8abbde53-0fa9-527b-b7bb-d902f4839395@gmail.com> I thought we planned for that case, but it looks like we log a warning regardless (obviously from your trace) so that operators don't miss opportunities to clean up code. In addition to that, the removal of a policy might make a role obsolete, which is harder to check for than just seeing if they have overridden the policy from a file. I can dig into oslo.policy and see if there is a way to determine if a policy is coming from a file or in-code. [0] https://github.com/openstack/oslo.policy/blob/master/oslo_policy/policy.py#L610-L625 On 01/05/2018 12:45 PM, Matt Riedemann wrote: > I've noticed that our CI logs have API extension policy deprecation > warnings in them on startup, even though we don't use any non-default > policy rules in our CI runs, so everything is just loaded from policy > in code. > > Jan 05 16:58:48.794318 ubuntu-xenial-rax-dfw-0001705089 > nova-compute[11289]: DEBUG oslo_policy.policy [None > req-2f69f372-721c-4550-9c28-5fa610a84201 None None] The policy file > policy.json could not be found. 
{{(pid=11289) load_rules > /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:548}} > Jan 05 16:58:48.797597 ubuntu-xenial-rax-dfw-0001705089 > nova-compute[11289]: > /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:623: > UserWarning: Policy > "os_compute_api:os-extended-volumes":"rule:admin_or_owner" was > deprecated for removal in 17.0.0. Reason: Nova API extension concept > has been removed in Pike. Those extensions have their own policies > enforcement. As there is no extensions now, > "os_compute_api:os-extended-volumes" policy which was added for > extensions is not needed any more. Its value may be silently ignored > in the future. > > Isn't there a way to not log a warning if the rule isn't actually set > in the policy file? Similar to deprecated config options, you only get > the warning on those if you've set a deprecated config option in the > file, but you don't get the warnings just because they are in code and > not removed yet. > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jaypipes at gmail.com Fri Jan 5 20:50:11 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 5 Jan 2018 15:50:11 -0500 Subject: [openstack-dev] [nova][neutron] Filtering Instances by IP address performance improvement test result In-Reply-To: References: Message-ID: Excellent work on this, Kevin. I'll review the patch series on Monday. Best, -jay On 01/04/2018 09:53 PM, Zhenyu Zheng wrote: > Hi All, > > We are working on patches to improve the performance filtering instance > by IP address this cycle. As discussed in the previous ML[1], it > contains both patches from Nova and Neutron[2][3][4][5][6]. > > As the POC is almost functional(the neutron extension part seems not > working, it cannot be successfully listed in patchset 14 of [5] , I have > to bypass the "if" condition for checking neutron > "ip-substring-filtering" extension to make it work, but that seems easy > to fix), I made some tests to check what kind of improvement has been > done with those patches. > > In the tests, I wrote a simple script [7](the script is silly, please > don't laugh at me:) ) which generated 2000 vm records in Nova DB with IP > address allocated(one IP for each vm), and also 2000 port records with > corresponding IP addresses in my local devstack env. > > Before adding those patches, querying instance with a specific IP > filtering causes about 4000 ms, the test has been done several times, > and I took the averaged result: > Inline image 1 > > After adding those patches(and some modifications as mentioned above) > querying with the same request causes only about 400ms: > Inline image 2 > > So, the design seems working well. > > I also tested with a "Sub-String" manner filtering with IP address: > 192.168.7.2, which will match 66 instances, and it takes about 900ms: > Inline image 3 > > It increased, but seems reasonable as it matches more instances, and > still much better than current implementation. > > Please test out in your own env if interested, the script might need > some modification as I hardcoded db connection, network_id and subnet_id. 
> > And also, please help review the patches :) > > [1] > http://lists.openstack.org/pipermail/openstack-operators/2017-October/014459.html > [2] https://review.openstack.org/#/c/509326/ > [3] https://review.openstack.org/#/c/525505/ > [4] https://review.openstack.org/#/c/518865/ > [5] https://review.openstack.org/#/c/521683/ > [6] https://review.openstack.org/#/c/525284/ > [7] > https://github.com/zhengzhenyu/groceries/blob/master/Ip_filtering_performance_test.py > > BR, > > Kevin Zheng > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lbragstad at gmail.com Fri Jan 5 20:50:31 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 5 Jan 2018 14:50:31 -0600 Subject: [openstack-dev] [nova][oslo] API extension policy deprecation warnings In-Reply-To: <8abbde53-0fa9-527b-b7bb-d902f4839395@gmail.com> References: <53aed9dc-92c6-0667-9d8c-a15665958242@gmail.com> <8abbde53-0fa9-527b-b7bb-d902f4839395@gmail.com> Message-ID: <4fcb40d1-8c55-036d-1f11-57db6ccd44ff@gmail.com> I recreated this locally. Turns out I missed an attribute that the oslo_policy.policy:Enforcer class had called self.file_rules, which appears to be the specific policies pulled from policy.json or policy.yaml files. I modified the check to compare the deprecated policy against that instead of self.rules [0]. I'll slap together a test and we should be able to get this in before library freeze for sure. Thanks for raising the issue. [0] https://review.openstack.org/#/c/531497/ On 01/05/2018 01:08 PM, Lance Bragstad wrote: > I thought we planned for that case, but it looks like we log a warning > regardless (obviously from your trace) so that operators don't miss > opportunities to clean up code. In addition to that, the removal of a > policy might make a role obsolete, which is harder to check for than > just seeing if they have overridden the policy from a file. I can dig > into oslo.policy and see if there is a way to determine if a policy is > coming from a file or in-code. > > [0] > https://github.com/openstack/oslo.policy/blob/master/oslo_policy/policy.py#L610-L625 > > > On 01/05/2018 12:45 PM, Matt Riedemann wrote: >> I've noticed that our CI logs have API extension policy deprecation >> warnings in them on startup, even though we don't use any non-default >> policy rules in our CI runs, so everything is just loaded from policy >> in code. >> >> Jan 05 16:58:48.794318 ubuntu-xenial-rax-dfw-0001705089 >> nova-compute[11289]: DEBUG oslo_policy.policy [None >> req-2f69f372-721c-4550-9c28-5fa610a84201 None None] The policy file >> policy.json could not be found. {{(pid=11289) load_rules >> /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:548}} >> Jan 05 16:58:48.797597 ubuntu-xenial-rax-dfw-0001705089 >> nova-compute[11289]: >> /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:623: >> UserWarning: Policy >> "os_compute_api:os-extended-volumes":"rule:admin_or_owner" was >> deprecated for removal in 17.0.0. Reason: Nova API extension concept >> has been removed in Pike. Those extensions have their own policies >> enforcement. As there is no extensions now, >> "os_compute_api:os-extended-volumes" policy which was added for >> extensions is not needed any more. Its value may be silently ignored >> in the future.
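(This is exactly the warning that the fix in [0] targets. The rough shape of the change -- paraphrased, not the literal diff, so see the review for the real code -- is to gate the warning on the policy actually coming from an operator's file:

    # sketch only: warn about a deprecated-for-removal policy only when
    # the operator has actually overridden it in policy.json/policy.yaml
    if default.deprecated_for_removal and default.name in self.file_rules:
        warnings.warn('Policy "%s" was deprecated for removal ...'
                      % default.name)

Defaults that only exist in code would then stay quiet.)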
>> >> Isn't there a way to not log a warning if the rule isn't actually set >> in the policy file? Similar to deprecated config options, you only get >> the warning on those if you've set a deprecated config option in the >> file, but you don't get the warnings just because they are in code and >> not removed yet. >> > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From haleyb.dev at gmail.com Fri Jan 5 21:37:28 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Fri, 5 Jan 2018 16:37:28 -0500 Subject: [openstack-dev] [neutron] Metering can't count traffic for floating ip, or internal ip. In-Reply-To: References: Message-ID: On 01/04/2018 09:50 PM, wenran xiao wrote: > hi all, > neutron metering can only count traffic that we send to > *remote_ip*(egress), and *remote_ip* send to us(ingress), I think we > should add method to count the traffic for floating ip or internal ip. > Any suggestions is welcome. Neutron metering was originally created as a way to get input for billing tenants for usage, giving admins numbers for what stayed inside or exited a datacenter. It has languished over the past few cycles either because it is working perfectly, or not used very much - my guess is the latter. That said, if you want to propose an enhancement, there is an RFE process defined in https://docs.openstack.org/neutron/latest/contributor/policies/blueprints.html that you can use; the neutron drivers team has weekly meetings to discuss things for inclusion into releases. -Brian From ekcs.openstack at gmail.com Fri Jan 5 23:16:32 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Fri, 05 Jan 2018 15:16:32 -0800 Subject: [openstack-dev] [infra][tempest][devstack][congress] tempest.config.CONF.service_available changed on Jan 2/3? Message-ID: Seems that sometime between 1/2 and 1/3 this year, tempest.config.CONF.service_available.aodh_plugin as well as ..service_available.mistral became unavailable in congress dsvm check/gate job. [1][2] I've checked the changes that went in to congress, tempest, devstack, devstack-gate, aodh, and mistral during that period but don't see obvious causes. Any suggestions on where to look next to fix the issue? Thanks very much! Eric Kao [1] test results from Jan 2; note that aodh is available: http://logs.openstack.org/54/530154/5/check/congress-devstack-api-mysql/6f82f93/logs/testr_results.html.gz [2] test results from Jan 3; note that aodh is skipped by this line [3], but aodh is in fact available as seen from the aodh logs [4]: http://logs.openstack.org/13/526813/11/gate/congress-devstack-api-mysql/7bfb025/logs/testr_results.html.gz [3] http://git.openstack.org/cgit/openstack/congress/tree/congress_tempest_tests/tests/scenario/congress_datasources/test_aodh.py#n32 [4] http://logs.openstack.org/13/526813/11/gate/congress-devstack-api-mysql/7bfb025/logs/ From ekcs.openstack at gmail.com Sat Jan 6 01:50:47 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Fri, 5 Jan 2018 17:50:47 -0800 Subject: [openstack-dev] [congress] generic push driver Message-ID: We've been discussing generic push drivers for Congress for quite a while. Finally sketching out something concrete and looking for some preliminary feedback. Below are sample interactions with a proposed generic push driver. A generic push driver could be used to receive push updates from vitrage, monasca, and many other sources. 1.
creating a datasource: congress datasource create generic_push_driver vitrage --config schema=' { "tables":[ { "name":"alarms", "columns":[ "id", "name", "state", "severity" ] } ] } ' 2. Update an entire table: PUT '/v1/data-sources/vitrage/tables/alarms' with body: { "rows":[ { "id":"1-1", "name":"name1", "state":"active", "severity":1 }, [ "1-2", "name2", "active", 2 ] ] } Note that a row can be either a {} or [] 3. Perform a differential update: PUT '/v1/data-sources/vitrage/tables/alarms' with body: { "addrows":[ { "id":"1-1", "name":"name1", "state":"active", "severity":1 }, [ "1-2", "name2", "active", 2 ] ] } OR { "deleterows":[ { "id":"1-1", "name":"name1", "state":"active", "severity":1 }, [ "1-2", "name2", "active", 2 ] ] } Note 1: we may allow 'rows', 'addrows', and 'deleterows' to be used together with some well-defined semantics. Alternatively we may mandate that each request can have only one of the three pieces. Note 2: we leave it as the responsibility of the sender to send and confirm the requests for differential updates in the correct order. We could add sequencing in future work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaosorior at gmail.com Sat Jan 6 08:26:44 2018 From: jaosorior at gmail.com (Juan Antonio Osorio) Date: Sat, 6 Jan 2018 10:26:44 +0200 Subject: [openstack-dev] [castellan] Transferring ownership of secrets to another user In-Reply-To: References: Message-ID: On 4 Jan 2018 23:35, "Alan Bishop" wrote: Has there been any previous discussion on providing a mechanism for transferring ownership of a secret from one user to another? For castellan there isn't a discussion AFAIK. But it sounds like something you can enable with Barbican's ACLs. https://docs.openstack.org/barbican/latest/api/reference/acls.html You would need to leverage Barbican's API instead of castellan though. Cinder supports the notion of transferring volume ownership to another user, who may be in another tenant/project. However, if the volume is encrypted it's possible (even likely) that the new owner will not be able to access the encryption secret. The new user will have the encryption key ID (secret ref), but may not have permission to access the secret, let alone delete the secret should the volume be deleted later. This issue is currently flagged as a cinder bug [1]. This is a use case where the ownership of the encryption secret should be transferred to the new volume owner. Alan [1] https://bugs.launchpad.net/cinder/+bug/1735285 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar246 at gmail.com Sat Jan 6 10:11:17 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Sat, 6 Jan 2018 15:41:17 +0530 Subject: [openstack-dev] [infra][tempest][devstack][congress] tempest.config.CONF.service_available changed on Jan 2/3? In-Reply-To: References: Message-ID: Hello Eric, On Sat, Jan 6, 2018 at 4:46 AM, Eric K wrote: > Seems that sometime between 1/2 and 1/3 this year, > tempest.config.CONF.service_available.aodh_plugin as well as > ..service_available.mistral became unavailable in congress dsvm check/gate > job.
[1][2] > > I've checked the changes that went in to congress, tempest, devstack, > devstack-gate, aodh, and mistral during that period but don't see obvious > causes. Any suggestions on where to look next to fix the issue? Thanks > very much! > The aodh tempest plugin [https://review.openstack.org/#/c/526299/] has been moved to telemetry-tempest-plugin [https://github.com/openstack/telemetry-tempest-plugin]. I have sent a patch to the Congress project to fix the issue: https://review.openstack.org/#/c/531534/ The bundled in-tree mistral tempest plugin [https://review.openstack.org/#/c/526918/] has also been moved, to the mistral-tempest-plugin repo [https://github.com/openstack/mistral-tempest-plugin] The tests were moved to new repos as part of the Tempest Plugin Split goal [https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html]. Feel free to consume the new tempest plugin and let me know if you need any more help. Thanks, Chandan Kumar From gaetan at xeberon.net Sat Jan 6 12:33:20 2018 From: gaetan at xeberon.net (Gaetan) Date: Sat, 6 Jan 2018 13:33:20 +0100 Subject: [openstack-dev] [reno] questions about reno Message-ID: Hello I played this week with reno and I really like it, but I have a bunch of questions: - unreleased notes appear in the release notes with a title such as "0.1.0-2". Is it possible to not have any title, or to use a "0.1.0-dev2" pattern like pbr? - I guess that all notes should stay in the same folder version after version, and the release notes of all versions will keep being automatically generated. Don't you think it might get difficult to manage all these files? Is it possible to move them into a different folder (at least an "archives" folder)? - Is it possible to generate the NEWS file using reno? I started trying conversion with pandoc but the results are not great. If you find some features interesting, I'll be happy to contribute! ----- Gaetan / Stibbons -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Sun Jan 7 12:00:10 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Sun, 7 Jan 2018 12:00:10 +0000 Subject: [openstack-dev] [congress] generic push driver In-Reply-To: References: Message-ID: Hi Eric, I have two questions: 1. An alarm is usually raised on a resource, and in Vitrage we can send you the details of that resource. Is there a way in Congress for the alarm to reference a resource that exists in another table? And what if the resource does not exist in Congress? 2. Do you also plan to support updateRows? This can be useful for alarm state changes. Thanks, Ifat From: Eric K Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Saturday, 6 January 2018 at 3:50 To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] [congress] generic push driver We've been discussing generic push drivers for Congress for quite a while. Finally sketching out something concrete and looking for some preliminary feedback. Below are sample interactions with a proposed generic push driver. A generic push driver could be used to receive push updates from vitrage, monasca, and many other sources. 1. creating a datasource: congress datasource create generic_push_driver vitrage --config schema=' { "tables":[ { "name":"alarms", "columns":[ "id", "name", "state", "severity" ] } ] } ' 2.
Update an entire table: PUT '/v1/data-sources/vitrage/tables/alarms' with body: { "rows":[ { "id":"1-1", "name":"name1", "state":"active", "severity":1 }, [ "1-2", "name2", "active", 2 ] ] } Note that a row can be either a {} or [] 3. Perform a differential update: PUT '/v1/data-sources/vitrage/tables/alarms' with body: { "addrows":[ { "id":"1-1", "name":"name1", "state":"active", "severity":1 }, [ "1-2", "name2", "active", 2 ] ] } OR { "deleterows":[ { "id":"1-1", "name":"name1", "state":"active", "severity":1 }, [ "1-2", "name2", "active", 2 ] ] } Note 1: we may allow 'rows', 'addrows', and 'deleterows' to be used together with some well-defined semantics. Alternatively we may mandate that each request can have only one of the three pieces. Note 2: we leave it as the responsibility of the sender to send and confirm the requests for differential updates in the correct order. We could add sequencing in future work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Sun Jan 7 12:44:02 2018 From: lijie at unitedstack.com (=?utf-8?B?5p2O5p2w?=) Date: Sun, 7 Jan 2018 20:44:02 +0800 Subject: [openstack-dev] [nova] about rescue instance booted from volume Message-ID: Hi, all This is the change for rescuing an instance booted from volume; anyone who is interested in boot-from-volume can help review it. Any suggestion is welcome. The link is here: https://review.openstack.org/#/c/531524/ Re: the related bp: https://blueprints.launchpad.net/nova/+spec/volume-backed-server-rescue Best Regards Lijie -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at crystone.com Sun Jan 7 13:39:25 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Sun, 7 Jan 2018 13:39:25 +0000 Subject: [openstack-dev] [puppet] Ubuntu problems + Help needed In-Reply-To: <973cb7a5f8764b2a80a234b73ee05306@mb01.staff.ognet.se> References: <30cda8e5e7884cb9bcd794d2f02075ee@mb01.staff.ognet.se> <8bac50d36b6441939c086d31e123886b@mb01.staff.ognet.se>, <973cb7a5f8764b2a80a234b73ee05306@mb01.staff.ognet.se> Message-ID: <1515332256810.46750@crystone.com> Hello everyone and a happy new year! I will follow this thread up with some information about the tempest failure that occurs on Ubuntu. Saw it happen on my recheck tonight and took some time now to check it out properly.
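(For anyone who wants to poke at this along with me: the nova-side call that eventually times out boils down to nova-api listing ports by device_id in neutron, i.e. roughly the equivalent of the following -- a hypothetical repro sketch, assuming you already have an authenticated keystoneauth session in `sess`:

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(session=sess)
    # same query nova-api issues from attach_interfaces
    ports = neutron.list_ports(device_id='<instance uuid>')

so it is easy to hammer in a loop from a shell while watching both logs. Details from the failing run below.)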
* Here is the job: http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/ * The following test is failing but only sometimes: tempest.api.compute.servers.test_create_server.ServersTestManualDisk http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/job-output.txt.gz#_2018-01-07_01_56_31_072370 * Checking the nova API log, the request against the neutron server fails http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/logs/nova/nova-api.txt.gz#_2018-01-07_01_46_47_301 So this is the call that times out: https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/attach_interfaces.py#L61 The timeout occurs at 01:46:47 but the first try is done at 01:46:17, checking the log http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/logs/neutron/neutron-server.txt.gz and searching for "GET /v2.0/ports?device_id=285061f8-2e8e-4163-9534-9b02900a8887" You can see that neutron-server reports all requests as 200 OK, so what I think is that neutron-server performs the request properly but for some reason nova-api does not get the reply and hence the timeout. This is where I get stuck, because although I can see all the requests coming in there is no real way of seeing the replies. At the same time you can see nova-api and neutron-server are continuously handling requests, so they are working; it is just that the reply that neutron-server should send to nova-api does not occur. Does anybody have any clue as to why? Otherwise I guess the only way is to start running the tests on a local machine until I get that issue, which does not occur regularly. Maybe loop in the neutron and/or Canonical OpenStack team on this one. Best regards Tobias ________________________________________ From: Tobias Urdin Sent: Friday, December 22, 2017 2:44 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [puppet] Ubuntu problems + Help needed Follow up, have been testing some integration runs on a tmp machine. Had to fix the following: * Ceph repo key E84AC2C0460F3994 perhaps introduced in [0] * Run glance-manage db_sync (have not seen in integration tests) * Run neutron-db-manage upgrade heads (have not seen in integration tests) * Disable l2gw because of https://bugs.launchpad.net/ubuntu/+source/networking-l2gw/+bug/1739779 proposed temp fix until resolved as [1] [0] https://review.openstack.org/#/c/507925/ [1] https://review.openstack.org/#/c/529830/ Best regards On 12/22/2017 10:44 AM, Tobias Urdin wrote: > Ignore that, seems like it's the networking-l2gw package that fails[0] > Seems like it hasn't been packaged for queens yet[1] or more it seems > like a release has not been cut for queens for networking-l2gw[2] > > Should we try to disable l2gw like done in[3] recently for CentOS? > > [0] > http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_23_10_05_564 > [1] > http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/queens_versions.html > [2] https://git.openstack.org/cgit/openstack/networking-l2gw/refs/ > [3] https://review.openstack.org/#/c/529711/ > > > On 12/22/2017 10:19 AM, Tobias Urdin wrote: >> Follow up on Alex[1] point. The db sync upgrade for neutron fails here[0].
>> >> [0] http://paste.openstack.org/show/629628/ >> >> On 12/22/2017 04:57 AM, Alex Schultz wrote: >>>> Just a note, the queens repo is not currently synced in the infra so >>>> the queens repo patch is failing on Ubuntu jobs. I've proposed adding >>>> queens to the infra configuration to resolve this: >>>> https://review.openstack.org/529670 >>>> >>> As a follow up, the mirrors have landed and two of the four scenarios >>> now pass. Scenario001 is failing on ceilometer-api which was removed >>> so I have a patch[0] to remove it. Scenario004 is having issues with >>> neutron and the db looks to be very unhappy[1]. >>> >>> Thanks, >>> -Alex >>> >>> [0] https://review.openstack.org/529787 >>> [1] http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338 >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ey.leshem at gmail.com Sun Jan 7 13:47:46 2018 From: ey.leshem at gmail.com (Eyal Leshem) Date: Sun, 7 Jan 2018 15:47:46 +0200 Subject: [openstack-dev] [all] propose to upgrade python kubernetes (the k8s python client) to 4.0.0 which breaks oslo.service In-Reply-To: References: Message-ID: Hi Lingxian, I uploaded a patch for kuryr-kubernetes that monkey-patches the ThreadPool with GreenPool ( https://review.openstack.org/#/c/530655/4/kuryr_kubernetes/thread_pool_patch.py ). It supports only apply_async - but that should be enough for k8s. That can be dangerous - if you use ThreadPool in other places in your code, but in such case you can't run with eventlet anyway. hope that helps, leyal On 4 January 2018 at 23:45, Lingxian Kong wrote: > On Tue, Jan 2, 2018 at 1:56 AM, Eyal Leshem wrote: > >> Hi , >> >> According to https://github.com/eventlet/eventlet/issues/147 - it's >> looks that eventlet >> has issue with "multiprocessing.pool". >> >> The ThreadPool used in code that auto-generated by swagger. >> >> Possible workaround for that is to monky-patch the client library , >> and replace the pool with greenpool. >> > > Hi, leyal, I'm not very familar with eventlet, but how can I monkey patch > kubernetes python lib? > The only way I can see now is to replace oslo.service with something else, > e.g. cotyledon, avoid to use eventlet, that's a signaficant change though.
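(Inline, since this is the "how": the kuryr-kubernetes patch linked above essentially swaps the pool class out before the generated client gets to it. A simplified sketch from memory -- see the review for the real code:

    import multiprocessing.pool

    from eventlet.greenpool import GreenPool

    class _GreenThreadPool(object):
        # minimal stand-in: apply_async() is the only method the
        # swagger-generated kubernetes client actually needs
        def __init__(self, processes=None, *args, **kwargs):
            self._pool = GreenPool(processes or 1000)

        def apply_async(self, func, args=(), kwds=None):
            return self._pool.spawn(func, *args, **(kwds or {}))

    multiprocessing.pool.ThreadPool = _GreenThreadPool

That way the generated code never instantiates the real ThreadPool while eventlet is monkey-patched.)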
> I also found this bug https://bugs.launchpad.net/taskflow/+bug/1225275 in > taskflow, they chose to not use multiprocessing module. > > Any other suggestions are welcomed! > > >> >> If someone has better workaround, please share that with us :) >> >> btw , I don't think that should be treated as compatibility issue >> in the client python as it's an eventlet issue.. >> >> Thanks , >> leyal >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Sun Jan 7 21:10:05 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Mon, 8 Jan 2018 10:10:05 +1300 Subject: [openstack-dev] [all] propose to upgrade python kubernetes (the k8s python client) to 4.0.0 which breaks oslo.service In-Reply-To: References: Message-ID: Thanks, leyal. I've already changed the service framework from oslo.service to cotyledon https://review.openstack.org/#/c/530428/, and it works perfectly fine. Cheers, Lingxian Kong (Larry) On Mon, Jan 8, 2018 at 2:47 AM, Eyal Leshem wrote: > Hi Lingxian, > > I uploaded a patch for kuryr-kubernetes that monkey-patch the ThreadPool > with > GreenPool (https://review.openstack.org/#/c/530655/4/kuryr_kubernetes/ > thread_pool_patch.py). > > It's support only apply_async - but that should be enough for k8s. > > That can be dangers - if you use ThreadPool in other places in your code, > but in such case you can't run with eventlet anyway. > > hope that helps, > leyal > > > > > On 4 January 2018 at 23:45, Lingxian Kong wrote: > >> On Tue, Jan 2, 2018 at 1:56 AM, Eyal Leshem wrote: >> >>> Hi , >>> >>> According to https://github.com/eventlet/eventlet/issues/147 - it's >>> looks that eventlet >>> has issue with "multiprocessing.pool". >>> >>> The ThreadPool used in code that auto-generated by swagger. >>> >>> Possible workaround for that is to monky-patch the client library , >>> and replace the pool with greenpool. >>> >> >> Hi, leyal, I'm not very familar with eventlet, but how can I monkey patch >> kubernetes python lib? >> The only way I can see now is to replace oslo.service with something >> else, e.g. cotyledon, avoid to use eventlet, that's a signaficant change >> though. I also found this bug https://bugs.launchpad.net >> /taskflow/+bug/1225275 in taskflow, they chose to not use >> multiprocessing module. >> >> Any other suggestions are welcomed! >> >> >>> >>> If someone has better workaround, please share that with us :) >>> >>> btw , I don't think that should be treated as compatibility issue >>> in the client python as it's an eventlet issue.. 
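(To make the cotyledon route concrete, the qinling change boils down to something like the sketch below -- simplified, the real diff is in https://review.openstack.org/#/c/530428/:

    import cotyledon

    class EngineService(cotyledon.Service):
        # plain processes/threads here, no eventlet involved
        def run(self):
            # start the engine server; blocks until stopped
            ...

    manager = cotyledon.ServiceManager()
    manager.add(EngineService, workers=1)
    manager.run()

With that, the kubernetes client's multiprocessing usage just works.)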
>>> >>> Thanks , >>> leyal >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Sun Jan 7 21:37:46 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Mon, 8 Jan 2018 10:37:46 +1300 Subject: [openstack-dev] [faas] [qinling] project update - 2 Message-ID: Hi, all Happy new year! This project update is posted bi-weekly, but feel free to get in touch in #openstack-qinling anytime. - Introduce etcd in qinling for distributed locking and storing the resources that need to be updated frequently. - Get function workers (admin only) - Support to detach function from underlying orchestrator (admin only) - Support positional args in user functions - More unit tests and functional tests added - Powerful resource query filtering of qinling openstack CLI - Conveniently delete all executions of one or more functions in CLI You can find previous emails below. Have a good day :-) Cheers, Lingxian Kong (Larry) ---------- Forwarded message ---------- From: Lingxian Kong Date: Tue, Dec 12, 2017 at 10:18 PM Subject: [openstack-dev] [qinling] [faas] project update - 1 To: OpenStack Development Mailing List Hi, all Maybe there are already some people interested in a faas implementation in openstack who have also deployed other openstack services to integrate with (e.g. triggering a function by object upload in swift); Qinling is the thing you probably don't want to miss out on. The main motivation for creating the Qinling project came from frequent requirements of our public cloud customers. For people who have not heard about Qinling before, please take a look at my presentation at the Sydney Summit: https://youtu.be/NmCmOfRBlIU There is also a simple demo video: https://youtu.be/K2SiMZllN_A As the first project update email, I will just list the features implemented for now: - Python runtime - Sync/Async function execution - Job (invoke function on schedule) - Function defined in swift object storage service - Function defined in docker image - Easy to interact with openstack services in function - Function autoscaling based on request rate - RBAC operation - Function resource limitation - Simple documentation I will keep posting the project update bi-weekly, but feel free to get in touch in #openstack-qinling anytime. -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Mon Jan 8 01:00:01 2018 From: abishop at redhat.com (Alan Bishop) Date: Sun, 7 Jan 2018 20:00:01 -0500 Subject: [openstack-dev] [castellan] Transferring ownership of secrets to another user In-Reply-To: References: Message-ID: On Sat, Jan 6, 2018 at 3:26 AM, Juan Antonio Osorio wrote: > > > On 4 Jan 2018 23:35, "Alan Bishop" wrote: > > Has there been any previous discussion on providing a mechanism for > transferring ownership of a secret from one user to another? > > For castellan there isn't a discussion AFAIK.
But it sounds like something > you can enable with Barbican's ACLs. Conceptually, the goal is to truly transfer ownership. I considered Barbican ACLs as a workaround, but that approach isn't sufficient. A Barbican ACL would allow the new owner to read the secret, but won't take into account whether the new owner happens to be an admin. Barbican secrets owned by an admin can be read by other admins, but an ACL would not allow other admins to read the secret. The bigger problem, though, is what happens when the new owner attempts to delete the volume. This requires deleting the secret, but the new volume owner only has read access to the secret. Cinder blocks attempts to delete encrypted volumes when the secret cannot be deleted. Otherwise, deleting a volume would cause the secret to be leaked (not exposed, but unmanaged by any owner). > https://docs.openstack.org/barbican/latest/api/reference/acls.html > > You would need to leverage Barbican's API instead of castellan though. > > > Cinder supports the notion of transferring volume ownership to another > user, who may be in another tenant/project. However, if the volume is > encrypted it's possible (even likely) that the new owner will not be > able to access the encryption secret. > > The new user will have the > encryption key ID (secret ref), but may not have permission to access > the secret, let alone delete the secret should the volume be deleted > later. This issue is currently flagged as a cinder bug [1]. > > This is a use case where the ownership of the encryption secret should > be transferred to the new volume owner. > > Alan > > [1] https://bugs.launchpad.net/cinder/+bug/1735285 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ghanshyammann at gmail.com Mon Jan 8 05:27:26 2018 From: ghanshyammann at gmail.com (Ghanshyam Mann) Date: Mon, 8 Jan 2018 10:57:26 +0530 Subject: [openstack-dev] [infra][tempest][devstack][congress] tempest.config.CONF.service_available changed on Jan 2/3? In-Reply-To: References: Message-ID: On Sat, Jan 6, 2018 at 3:41 PM, Chandan kumar wrote: > Hello Eric, > > On Sat, Jan 6, 2018 at 4:46 AM, Eric K wrote: >> Seems that sometime between 1/2 and 1/3 this year, >> tempest.config.CONF.service_available.aodh_plugin as well as >> ..service_available.mistral became unavailable in congress dsvm check/gate >> job. [1][2] >> >> I've checked the changes that went in to congress, tempest, devstack, >> devstack-gate, aodh, and mistral during that period but don't see obvious >> causes. Any suggestions on where to look next to fix the issue? Thanks >> very much! These config options should stay there even after separating the tempest plugin. I have checked the aodh and mistral config options and they are present as tempest config.
- https://github.com/openstack/telemetry-tempest-plugin/blob/b30a19214d0036141de75047b444d48ae0d0b656/telemetry_tempest_plugin/config.py#L27 - https://github.com/openstack/mistral-tempest-plugin/blob/63a0fe20f98e0cb8316beb81ca77249ffdda29c5/mistral_tempest_tests/config.py#L18 The issue occurred because the in-tree plugins were removed before congress was set up to use the new repos. We should not remove an in-tree plugin before the gate setup for consuming the new plugin is complete for each consumer of the plugins. >> > > The aodh tempest plugin [https://review.openstack.org/#/c/526299/] is > moved to telemetry-tempest-plugin > [https://github.com/openstack/telemetry-tempest-plugin]. > I have sent a patch to Congress project to fix the issue: > https://review.openstack.org/#/c/531534/ Thanks Chandan, this will fix the congress issue for Aodh; we need the same fix for the mistral case too. > > The mistral bundled intree tempest plugin > [https://review.openstack.org/#/c/526918/] is also moved to > mistral-tempest-plugin repo > [https://github.com/openstack/mistral-tempest-plugin] > > Tests are moved to a new repo as a part of Tempest Plugin Split goal > [https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html]. > Feel free to consume the new tempest plugin and let me know if you > need any more help. > > Thanks, > > Chandan Kumar > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From iwienand at redhat.com Mon Jan 8 06:41:59 2018 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 8 Jan 2018 17:41:59 +1100 Subject: [openstack-dev] [requirements][vitrage] Networkx version 2.0 In-Reply-To: <623096DD-1612-46D8-B6C4-326255B276C8@nokia.com> References: <623096DD-1612-46D8-B6C4-326255B276C8@nokia.com> Message-ID: On 12/21/2017 02:51 AM, Afek, Ifat (Nokia - IL/Kfar Sava) wrote: > There is an open bug in launchpad about the new release of Networkx > 2.0, that is backward incompatible with versions 1.x [1]. From diskimage-builder's POV, we can pretty much switch whenever ready; it is just a matter of merging [2] after constraints is bumped. Supporting both versions at once in the code is kind of annoying. If we've got changes ready to go for all the related projects in [1], bumping *should* be minimal disruption. -i > [1] https://bugs.launchpad.net/diskimage-builder/+bug/1718576 [2] https://review.openstack.org/#/c/506524/ From silvan at quobyte.com Mon Jan 8 11:48:54 2018 From: silvan at quobyte.com (Silvan Kaiser) Date: Mon, 8 Jan 2018 12:48:54 +0100 Subject: [openstack-dev] [manila] [NEEDACTION] CI changes due to manila tempest tests new repository In-Reply-To: References: Message-ID: Hi all! Some late info for those not running DevStack gate, my migration took three additions: 1) add the git checkout command as recommended 2) add "enable_plugin manila-tempest-plugin https://github.com/openstack/manila-tempest-plugin" to DevStack's local.conf 3) add running the setup.py command as recommended after stack.sh has been run Best regards & thanks for the migration hints!
Silvan 2017-12-13 14:22 GMT+01:00 Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com>: > Hi all, > > As part of the effort of splitting tempest plugins to their own > repositories [0], we are calling all Manila 3rd party CI maintainers to > adjust their gate scripts to use the new tempest test repository > > If the third party CIs are configured to run with DevStack Gate, they only > need to make a one line change to their gate scripts, manila-tempest-plugin > can be installed and configured by Devstack gate prior to Devstack by using > the "PROJECTS" variable, for example: > > export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" > > This is how we used to set up python-manilaclient, so: > > export PROJECTS="openstack/python-manilaclient openstack/manila-tempest-plugin > $PROJECTS" > > For those that are not using DevStack gate, you could just do this before > Devstack: > > git clone https://git.openstack.org/openstack/manila-tempest-plugin > /opt/stack/manila-tempest-plugin > sudo python /opt/stack/manila-tempest-plugin/setup.py develop > > Both these methods will clone and install manila-tempest-plugin from > git.openstack.org into $DEST/manila-tempest-plugin. > > We intend to make this change [1] effective after next weekly meeting, on > Thursday 14th. > > It is important to note that CI mainteiners can make this change right > away and it would not break CIs when this patch merges since all manila > tempest tests > are available in the new repo. > > Thanks, > > Victoria > > [0] https://governance.openstack.org/tc/goals/queens/split- > tempest-plugins.html > [1] https://review.openstack.org/#/c/512300/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Dr. Silvan Kaiser Quobyte GmbH Hardenbergplatz 2, 10623 Berlin - Germany +49-30-814 591 800 - www.quobyte.com Amtsgericht Berlin-Charlottenburg, HRB 149012B Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtomasek at redhat.com Mon Jan 8 11:56:05 2018 From: jtomasek at redhat.com (Jiri Tomasek) Date: Mon, 8 Jan 2018 12:56:05 +0100 Subject: [openstack-dev] [tripleo] FFE Select Roles TripleO-UI Message-ID: Hello, I’d like to request an FFE to finish GUI work on roles management, specifically listing of roles and selection of roles for deployment. This feature is one of the main goals of current cycle. The pending patches are ready to be merged, mostly just waiting for tripleo-common patches to land (those already have FFE). Blueprints: https://blueprints.launchpad.net/tripleo/+spec/tripleo-ui-select-roles https://blueprints.launchpad.net/openstack/?searchtext=roles-crud-ui Patches: https://review.openstack.org/#/q/topic:bp/tripleo-ui-select-roles+(status:open+OR+status:merged) — Jiri Tomasek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mnaser at vexxhost.com Mon Jan 8 13:05:49 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 8 Jan 2018 08:05:49 -0500 Subject: [openstack-dev] [puppet] Ubuntu problems + Help needed In-Reply-To: <1515332256810.46750@crystone.com> References: <30cda8e5e7884cb9bcd794d2f02075ee@mb01.staff.ognet.se> <8bac50d36b6441939c086d31e123886b@mb01.staff.ognet.se> <973cb7a5f8764b2a80a234b73ee05306@mb01.staff.ognet.se> <1515332256810.46750@crystone.com> Message-ID: Hi Tobias, I think that's mainly the biggest issue we were dealing with which forced us to stop Ubuntu from being voting. I'm really not sure why this is happening but it's happening only in Ubuntu. Thanks, Mohammed On Sun, Jan 7, 2018 at 8:39 AM, Tobias Urdin wrote: > Hello everyone and a happy new year! > > I will follow this thread up with some information about the tempest failure that occurs on Ubuntu. > Saw it happen on my recheck tonight and took some time now to check it out properly. > > * Here is the job: http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/ > > * The following test is failing but only sometimes: tempest.api.compute.servers.test_create_server.ServersTestManualDisk > http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/job-output.txt.gz#_2018-01-07_01_56_31_072370 > > * Checking the nova API log is fails the request against neutron server > http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/logs/nova/nova-api.txt.gz#_2018-01-07_01_46_47_301 > > So this is the call that times out: https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/attach_interfaces.py#L61 > > The timeout occurs at 01:46:47 but the first try is done at 01:46:17, checking the log http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/logs/neutron/neutron-server.txt.gz and searching for "GET /v2.0/ports?device_id=285061f8-2e8e-4163-9534-9b02900a8887" > > You can see that neutron-server reports all request as 200 OK, so what I think is that neutron-server performs the request properly but for some reason nova-api does not get the reply and hence the timeout. > > This is where I get stuck because since I can see all requests coming in there is no real way of seeing the replies. > At the same time you can see nova-api and neutron-server are continously handling requests so they are working but just that reply that neutron-server should send to nova-api does not occur. > > Does anybody have any clue to why? Otherwise I guess the only way is to start running the tests on a local machine until I get that issue, which does not occur regularly. > > Maybe loop in the neutron and/or Canonical OpenStack team on this one. > > Best regards > Tobias > > > ________________________________________ > From: Tobias Urdin > Sent: Friday, December 22, 2017 2:44 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [puppet] Ubuntu problems + Help needed > > Follow up, have been testing some integration runs on a tmp machine. 
> > Had to fix the following: > * Ceph repo key E84AC2C0460F3994 perhaps introduced in [0] > * Run glance-manage db_sync (have not seen in integration tests) > * Run neutron-db-manage upgrade heads (have not seen in integration tests) > * Disable l2gw because of > https://bugs.launchpad.net/ubuntu/+source/networking-l2gw/+bug/1739779 > proposed temp fix until resolved as [1] > > [0] https://review.openstack.org/#/c/507925/ > [1] https://review.openstack.org/#/c/529830/ > > Best regards > > On 12/22/2017 10:44 AM, Tobias Urdin wrote: >> Ignore that, seems like it's the networking-l2gw package that fails[0] >> Seems like it hasn't been packaged for queens yet[1] or more it seems >> like a release has not been cut for queens for networking-l2gw[2] >> >> Should we try to disable l2gw like done in[3] recently for CentOS? >> >> [0] >> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_23_10_05_564 >> [1] >> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/queens_versions.html >> [2] https://git.openstack.org/cgit/openstack/networking-l2gw/refs/ >> [3] https://review.openstack.org/#/c/529711/ >> >> >> On 12/22/2017 10:19 AM, Tobias Urdin wrote: >>> Follow up on Alex[1] point. The db sync upgrade for neutron fails here[0]. >>> >>> [0] http://paste.openstack.org/show/629628/ >>> >>> On 12/22/2017 04:57 AM, Alex Schultz wrote: >>>>> Just a note, the queens repo is not currently synced in the infra so >>>>> the queens repo patch is failing on Ubuntu jobs. I've proposed adding >>>>> queens to the infra configuration to resolve this: >>>>> https://review.openstack.org/529670 >>>>> >>>> As a follow up, the mirrors have landed and two of the four scenarios >>>> now pass. Scenario001 is failing on ceilometer-api which was removed >>>> so I have a patch[0] to remove it. Scenario004 is having issues with >>>> neutron and the db looks to be very unhappy[1]. 
>>>> >>>> Thanks, >>>> -Alex >>>> >>>> [0] https://review.openstack.org/529787 >>>> [1] http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338 >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From balazs.gibizer at ericsson.com Mon Jan 8 13:58:26 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 8 Jan 2018 14:58:26 +0100 Subject: [openstack-dev] [nova] Notification update week 2 Message-ID: <1515419906.18267.0@smtp.office365.com> Hi, Here is the status update / focus settings mail for 2018 w2. Bugs ---- [High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when sending notification during attach_interface Fix merged to master. 
Backports have been proposed: * Pike: https://review.openstack.org/#/c/531745/ * Queens: https://review.openstack.org/#/c/531746/ [High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields Patch has been proposed: https://review.openstack.org/#/c/529194/ [Low] https://bugs.launchpad.net/nova/+bug/1487038 nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS Old abandoned patches exist: * https://review.openstack.org/#/c/215308/ * https://review.openstack.org/#/c/388345/ Versioned notification transformation ------------------------------------- Here are the patches ready for review; the rest are in merge conflict or failing tests: * https://review.openstack.org/#/c/410297 Transform missing delete notifications * https://review.openstack.org/#/c/476459 Send soft_delete from context manager * https://review.openstack.org/#/c/403660 Transform instance.exists notification Introduce instance.lock and instance.unlock notifications ----------------------------------------------------------- A specless bp has been proposed for the Rocky cycle https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances Some preliminary discussion happened in an earlier patch https://review.openstack.org/#/c/526251/ Factor out duplicated notification sample ----------------------------------------- https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open There are two ongoing patches to look at. Weekly meeting -------------- The first meeting of 2018 is expected to be held on the 9th of January. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180109T170000 Cheers, gibi -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Jan 8 14:29:30 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 08 Jan 2018 09:29:30 -0500 Subject: [openstack-dev] [reno] questions about reno In-Reply-To: References: Message-ID: <1515421459-sup-1398@lrrr.local> Excerpts from Gaetan's message of 2018-01-06 13:33:20 +0100: > Hello > > I played this week with reno and i really like it, but I have a bunch of > question: > - unreleased notes appear in release note with a title such as "0.1.0-2". > Is it possible to not have any title or use "0.1.0-dev2" pattern like pbr ? I'm not sure why it matters, but if you want to work on that patch I'll help with reviews. > - I guess that all notes should stay in the same folder version after > versions, and the > release notes of all versions will keep being automatically generated. > Don't you think > it might get difficult to manage all theses files? Is is possible to move > them in different folder (at least a folder "archives?) We've put off doing anything like that until we have a project with enough notes that we can observe the problems and decide how to fix them. Have you already reached that point or are you anticipating problems in the future? > - it is possible to generate the NEWS file using reno ? I started trying > conversion with pandoc but the result are not great. How is the NEWS file different from CHANGES.txt that pbr produces? Is it the format, or the content? > > If you find some features interesting, I'll be happy to contribute !
> > > ----- > Gaetan / Stibbons From doug at doughellmann.com Mon Jan 8 14:55:26 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 08 Jan 2018 09:55:26 -0500 Subject: [openstack-dev] [oslo] proposing Stephen Finucan for oslo-core Message-ID: <1515423211-sup-8000@lrrr.local> Stephen (sfinucan) has been working on pbr, oslo.config, and oslo.policy and reviewing several of the other Oslo libraries for a while now. His reviews are always helpful and I think he would make a good addition to the oslo-core team. As per our usual practice, please reply here with a +1 or -1 and any reservations. Doug From jaypipes at gmail.com Mon Jan 8 14:58:02 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 8 Jan 2018 09:58:02 -0500 Subject: [openstack-dev] [oslo] proposing Stephen Finucan for oslo-core In-Reply-To: <1515423211-sup-8000@lrrr.local> References: <1515423211-sup-8000@lrrr.local> Message-ID: <70393a13-83f5-6e7b-a662-2fb41e3a317e@gmail.com> big +1 from me. On 01/08/2018 09:55 AM, Doug Hellmann wrote: > Stephen (sfinucan) has been working on pbr, oslo.config, and > oslo.policy and reviewing several of the other Oslo libraries for > a while now. His reviews are always helpful and I think he would > make a good addition to the oslo-core team. > > As per our usual practice, please reply here with a +1 or -1 and > any reservations. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From balazs.gibizer at ericsson.com Mon Jan 8 14:59:06 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 8 Jan 2018 15:59:06 +0100 Subject: [openstack-dev] [nova] [libvirt] [scaleio] ScaleIO libvirt volume driver native mode Message-ID: <1515423546.18267.2@smtp.office365.com> Hi, Two years ago a patch merged [1] that set AIO mode of some of the libivrt volume drivers to 'native' instead of the default 'threading'. At that time the ScaleIO driver was not modified. Recently we did some measurements (on Mitaka base) and we think that the ScaleIO volume driver could also benefit from the 'native' mode. So in Rocky we would like to propose a small change to set the 'native' mode for the ScaleIO volume driver too. Does anybody have opposing measurements or views? Cheers, gibi [1] https://review.openstack.org/#/c/251829/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Mon Jan 8 15:02:15 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 8 Jan 2018 09:02:15 -0600 Subject: [openstack-dev] [oslo] proposing Stephen Finucan for oslo-core In-Reply-To: <70393a13-83f5-6e7b-a662-2fb41e3a317e@gmail.com> References: <1515423211-sup-8000@lrrr.local> <70393a13-83f5-6e7b-a662-2fb41e3a317e@gmail.com> Message-ID: <71a0e046-8eea-b094-75fb-0a4be03e701b@gmail.com> +1 from an oslo.policy perspective, his reviews have really been helping me out there. On 01/08/2018 08:58 AM, Jay Pipes wrote: > big +1 from me. > > On 01/08/2018 09:55 AM, Doug Hellmann wrote: >> Stephen (sfinucan) has been working on pbr, oslo.config, and >> oslo.policy and reviewing several of the other Oslo libraries for >> a while now. His reviews are always helpful and I think he would >> make a good addition to the oslo-core team. 
>> >> As per our usual practice, please reply here with a +1 or -1 and >> any reservations. >> >> Doug >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From davanum at gmail.com Mon Jan 8 15:02:25 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Mon, 8 Jan 2018 10:02:25 -0500 Subject: [openstack-dev] [oslo] proposing Stephen Finucan for oslo-core In-Reply-To: <70393a13-83f5-6e7b-a662-2fb41e3a317e@gmail.com> References: <1515423211-sup-8000@lrrr.local> <70393a13-83f5-6e7b-a662-2fb41e3a317e@gmail.com> Message-ID: +1 from me. Thanks, Dims On Mon, Jan 8, 2018 at 9:58 AM, Jay Pipes wrote: > big +1 from me. > > > On 01/08/2018 09:55 AM, Doug Hellmann wrote: >> >> Stephen (sfinucan) has been working on pbr, oslo.config, and >> oslo.policy and reviewing several of the other Oslo libraries for >> a while now. His reviews are always helpful and I think he would >> make a good addition to the oslo-core team. >> >> As per our usual practice, please reply here with a +1 or -1 and >> any reservations. >> >> Doug >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From doug at doughellmann.com Mon Jan 8 15:08:55 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 08 Jan 2018 10:08:55 -0500 Subject: [openstack-dev] [oslo] proposing Stephen Finucan for oslo-core In-Reply-To: <1515423211-sup-8000@lrrr.local> References: <1515423211-sup-8000@lrrr.local> Message-ID: <1515424087-sup-9325@lrrr.local> Excerpts from Doug Hellmann's message of 2018-01-08 09:55:26 -0500: > Stephen (sfinucan) has been working on pbr, oslo.config, and Oops, that's stephenfin. Sorry for the confusion. Doug From tim at styra.com Mon Jan 8 15:31:28 2018 From: tim at styra.com (Tim Hinrichs) Date: Mon, 08 Jan 2018 15:31:28 +0000 Subject: [openstack-dev] [congress] generic push driver In-Reply-To: References: Message-ID: It's probably worth considering PATCH instead of PUT for updating the table. http://restcookbook.com/HTTP%20Methods/patch/ You could also think about using JSON-patch to describe the requested update. It provides fine-grained update semantics: https://tools.ietf.org/html/rfc6902 Tim On Fri, Jan 5, 2018 at 5:50 PM Eric K wrote: > We've been discussing generic push drivers for Congress for quite a while. 
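(To make that concrete against the alarms example quoted below, a single RFC 6902 style request could carry both kinds of change in one ordered list -- purely illustrative:

    PATCH /v1/data-sources/vitrage/tables/alarms
    [
        { "op": "add", "path": "/rows/-",
          "value": { "id": "1-3", "name": "name3",
                     "state": "active", "severity": 3 } },
        { "op": "remove", "path": "/rows/0" }
    ]

and the ordering/combination semantics then come from the patch spec rather than from Congress-specific rules.)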
Tim

On Fri, Jan 5, 2018 at 5:50 PM Eric K wrote:
> We've been discussing generic push drivers for Congress for quite a while.
> Finally sketching out something concrete and looking for some preliminary
> feedback. Below are sample interactions with a proposed generic push
> driver. A generic push driver could be used to receive push updates from
> vitrage, monasca, and many other sources.
>
> 1. creating a datasource:
>
> congress datasource create generic_push_driver vitrage --config schema='
> {
>    "tables":[
>       {
>          "name":"alarms",
>          "columns":[
>             "id",
>             "name",
>             "state",
>             "severity"
>          ]
>       }
>    ]
> }
> '
>
> 2. Update an entire table:
>
> PUT '/v1/data-sources/vitrage/tables/alarms' with body:
> {
>    "rows":[
>       {
>          "id":"1-1",
>          "name":"name1",
>          "state":"active",
>          "severity":1
>       },
>       [
>          "1-2",
>          "name2",
>          "active",
>          2
>       ]
>    ]
> }
> Note that a row can be either a {} or []
>
>
> 3. perform differential update:
>
> PUT '/v1/data-sources/vitrage/tables/alarms' with body:
> {
>    "addrows":[
>       {
>          "id":"1-1",
>          "name":"name1",
>          "state":"active",
>          "severity":1
>       },
>       [
>          "1-2",
>          "name2",
>          "active",
>          2
>       ]
>    ]
> }
>
> OR
>
> {
>    "deleterows":[
>       {
>          "id":"1-1",
>          "name":"name1",
>          "state":"active",
>          "severity":1
>       },
>       [
>          "1-2",
>          "name2",
>          "active",
>          2
>       ]
>    ]
> }
>
> Note 1: we may allow 'rows', 'addrows', and 'deleterows' to be used
> together with some well defined semantics. Alternatively we may mandate
> that each request can have only one of the three pieces.
>
> Note 2: we leave it as the responsibility of the sender to send and
> confirm the requests for differential updates in correct order. We could
> add sequencing in future work.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From Greg.Waines at windriver.com Mon Jan 8 15:42:37 2018
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Mon, 8 Jan 2018 15:42:37 +0000
Subject: [openstack-dev] [magnum] fedora atomic image with kubernetes with a CRI = frakti or clear containers
Message-ID:

Hey there,

I am currently running magnum with the fedora-atomic image that is installed
as part of the devstack installation of magnum.
This fedora-atomic image has Kubernetes with the standard Docker container
runtime as its CRI.

Where can I find (or how do I build) a fedora-atomic image with Kubernetes
and either Frakti or Clear Containers (runV) as the CRI?
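(For context, my understanding is that on the kubelet side the switch is
just a matter of pointing the kubelet at a remote CRI socket, roughly:

  kubelet --container-runtime=remote \
          --container-runtime-endpoint=/var/run/frakti.sock

with the flags as described in the Frakti docs and the socket path assumed.
So the question is really about how to get the alternate runtime baked
into, or installed onto, the fedora-atomic image that magnum boots.)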
Greg.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From emilien at redhat.com Mon Jan 8 15:56:32 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 8 Jan 2018 07:56:32 -0800
Subject: [openstack-dev] [tripleo] FFE Select Roles TripleO-UI
In-Reply-To:
References:
Message-ID:

On Mon, Jan 8, 2018 at 3:56 AM, Jiri Tomasek wrote:
> Hello,
>
> I'd like to request an FFE to finish GUI work on roles management,
> specifically listing of roles and selection of roles for deployment. This
> feature is one of the main goals of current cycle. The pending patches are
> ready to be merged, mostly just waiting for tripleo-common patches to land
> (those already have FFE).
>
> Blueprints:
> https://blueprints.launchpad.net/tripleo/+spec/tripleo-ui-select-roles
> https://blueprints.launchpad.net/openstack/?searchtext=roles-crud-ui
>
> Patches:
> https://review.openstack.org/#/q/topic:bp/tripleo-ui-select-roles+(status:open+OR+status:merged)

Since it's only affecting tripleo-ui and has no impact on other projects,
+1 for me. It's indeed an important feature for Queens.

Thanks,
--
Emilien Macchi

From kgiusti at gmail.com Mon Jan 8 16:38:55 2018
From: kgiusti at gmail.com (Ken Giusti)
Date: Mon, 8 Jan 2018 11:38:55 -0500
Subject: [openstack-dev] [oslo] proposing Stephen Finucan for oslo-core
In-Reply-To: <1515423211-sup-8000@lrrr.local>
References: <1515423211-sup-8000@lrrr.local>
Message-ID:

+1 for Stephen!

On Mon, Jan 8, 2018 at 9:55 AM, Doug Hellmann wrote:
> Stephen (sfinucan) has been working on pbr, oslo.config, and
> oslo.policy and reviewing several of the other Oslo libraries for
> a while now. His reviews are always helpful and I think he would
> make a good addition to the oslo-core team.
>
> As per our usual practice, please reply here with a +1 or -1 and
> any reservations.
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Ken Giusti (kgiusti at gmail.com)

From gaetan at xeberon.net Mon Jan 8 16:59:55 2018
From: gaetan at xeberon.net (Gaetan)
Date: Mon, 8 Jan 2018 17:59:55 +0100
Subject: [openstack-dev] [reno] questions about reno
In-Reply-To: <1515421459-sup-1398@lrrr.local>
References: <1515421459-sup-1398@lrrr.local>
Message-ID:

Hello

Thanks for your answers!

- unreleased notes appear in release note with a title such as "0.1.0-2".
> > Is it possible to not have any title or use "0.1.0-dev2" pattern like
> > pbr ?
>
> I'm not sure why it matters, but if you want to work on that patch I'll
> help with reviews.
>

Ok, I'll see what I can do :)

> > - I guess that all notes should stay in the same folder version after
> > versions, and the
> > release notes of all versions will keep being automatically generated.
> > Don't you think
> > it might get difficult to manage all theses files? Is is possible to move
> > them in different folder (at least a folder "archives?)
>
> We've put off doing anything like that until we have a project with
> enough notes that we can observe the problems and decide how to fix
> them. Have you already reached that point or are you anticipating
> problems in the future?
>

No, just started using reno and just see that this folder might get messy
quickly. Maybe I over-think this, I agree with you to observe first

> > - it is possible to generate the NEWS file using reno ? I started trying
> > conversion with pandoc but the result are not great.
>
> How is the NEWS file different from CHANGES.txt that pbr produces? Is it
> the format, or the content?
>

So, I like PBR that generates ChangeLog from the git history, but it has
lot of details (maybe too much). So, I was thinking to store in NEWS only
the release note
As an example, you can look how I plan to use it for Guake:
https://github.com/Guake/guake
I usually write the NEWS at each release manually (
https://github.com/Guake/guake/blob/master/NEWS), and that's where reno
shines in my eyes :)
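(To make it concrete, the per-change note files I am talking about use
reno's usual YAML layout, something like the sketch below; the section
names are just reno's defaults and the text is invented:

  releasenotes/notes/hide-tab-bar-1234abcd.yaml:

  features:
    - |
      Added an option to hide the tab bar.
  fixes:
    - |
      Fixed a crash when Guake starts without a configuration file.

and what I would love is a command that collapses all of these into a
NEWS-style summary per release.)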
Regards,
Gaetan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From pkovar at redhat.com Mon Jan 8 17:12:09 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Mon, 8 Jan 2018 18:12:09 +0100
Subject: [openstack-dev] [os-api-ref][doc] Adding openstack-doc-core to os-api-ref
In-Reply-To: <2c1703fa-8bba-2060-0d3b-dfc3f0311f59@ham.ie>
References: <1515080357.32193.33.camel@redhat.com> <2c1703fa-8bba-2060-0d3b-dfc3f0311f59@ham.ie>
Message-ID: <20180108181209.bc2ee6b7571a193e6b183048@redhat.com>

On Thu, 4 Jan 2018 16:06:53 +0000
Graham Hayes wrote:

> On 04/01/18 15:39, Stephen Finucane wrote:
> > I'm not sure what the procedure for this is but here goes.
> >
> > I've noticed that the 'os-api-ref' project seems to have its own group
> > of cores [1], many of whom are no longer working on OpenStack (at
> > least, not full-time), and has a handful of open patches against it
> > [2]. Since the doc team has recently changed its scope from writing
> > documentation to enabling individual projects to maintain their own
> > docs, we've become mainly responsible for projects like 'openstack-doc-
> > theme'. Given that the 'os-api-ref' project is a Sphinx thing required
> > for multiple OpenStack projects, it seems like something that
> > could/should fall into the doc team's remit.
> >
> > I'd like to move this project into the remit of the 'openstack-doc-
> > core' team, by way of removing the 'os-api-ref-core' group or adding
> > 'openstack-doc-core' to the list of included groups. In both cases,
> > existing active cores will be retained. Do any of the existing 'os-api-
> > ref' cores have any objections to this?
>
> No objection from me
>
> > Stephen
> >
> > PS: I'm not sure how this affects things from a release management
> > perspective. Are there PTLs for these sorts of projects?
>
> It does seem like a docs tooling thing, so maybe moving it to the docs
> project umbrella might be an idea?

What we do for other projects under that umbrella (such as
contributor-guide) is that we add openstack-doc-core as a group member, as
Stephen mentioned. This allows for other contributors to become cores even
if they are not interested in other aspects of the docs team's work.

But I'm fine with whatever works for the current cores.

Thanks,
pk

From zhipengh512 at gmail.com Mon Jan 8 17:26:21 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Tue, 9 Jan 2018 01:26:21 +0800
Subject: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG
Message-ID:

Hi all,

With the maturing of the resource provider/placement feature landing in
OpenStack in recent releases, and also in light of the Kubernetes
community's increasing attention to similar efforts, I want to propose to
form a Resource Management SIG as a contact point for the OpenStack
community to communicate with the Kubernetes Resource Management WG[0] and
other related SIGs.

The formation of the SIG is to provide a gathering of similarly interested
parties and establish an official channel. Currently we already have
OpenStack developers actively participating in kubernetes discussion
(e.g. [1]), and we would hope the ResMgmt SIG could further help such
activities and better align the resource mgmt mechanisms, especially the
data modeling, between the two communities (or even more communities with
similar desire).

I have floated the idea with Jay Pipes and Chris Dent and received
positive feedback. The SIG will have a co-lead structure so that people
can spearhead the area they are most interested in. For example, as a
Cyborg dev I will mostly lead in the area of acceleration[2].
If you are also interested please reply to this thread, and let's find a efficient way to form this SIG. Efficient means no extra unnecessary meetings and other undue burdens. [0] https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU/edit?usp=sharing [1] https://github.com/kubernetes/community/pull/782 [2] https://github.com/kubernetes/kubernetes/labels/area%2Fhw-accelerators -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Jan 8 17:33:00 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 08 Jan 2018 17:33:00 +0000 Subject: [openstack-dev] [Women of OpenStack] Kicking Off a New Year! Message-ID: Hello, Today, Monday January 8th, we will be kicking off a new year of meetings for the Women of OpenStack! Things have evolved a lot in the last few months and we are excited to get started on our continued refresh. If you are interested in what we are up to and want to get involved, or just want to see what we are all about, please stop by our IRC Channel #openstack-women at 20:00 UTC to join us for the meeting! If you are interested in the proposed agenda or have something you want to add, check it out here[0]. For more information about what we are about, check out our wiki page[1]! Need help getting on IRC? Directions here[2]. I am also available by email if you need extra help (knelson at openstack.org) Can't wait to see you there! -Kendall Nelson (diablo_rojo) [0] https://etherpad.openstack.org/p/WOS_Agenda_Tracker [1] https://wiki.openstack.org/wiki/Women_of_OpenStack [2] https://docs.openstack.org/contributors/irc.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Jan 8 17:34:24 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 8 Jan 2018 11:34:24 -0600 Subject: [openstack-dev] [oslo] proposing Stephen Finucan for oslo-core In-Reply-To: <1515423211-sup-8000@lrrr.local> References: <1515423211-sup-8000@lrrr.local> Message-ID: <646ac9cf-c518-e1b6-b807-003ddf9d6abf@nemebean.com> Definite +1 On 01/08/2018 08:55 AM, Doug Hellmann wrote: > Stephen (sfinucan) has been working on pbr, oslo.config, and > oslo.policy and reviewing several of the other Oslo libraries for > a while now. His reviews are always helpful and I think he would > make a good addition to the oslo-core team. > > As per our usual practice, please reply here with a +1 or -1 and > any reservations. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mriedemos at gmail.com Mon Jan 8 18:33:17 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 8 Jan 2018 12:33:17 -0600 Subject: [openstack-dev] [nova] Rocky PTG early planning Message-ID: As the Queens release winds to a close, I've started thinking about topics for Rocky that can be discussed at the PTG. 
I've created an etherpad [1] for just throwing various topics in there,
completely free-form at this point; just remember to add your name next to
any topic you add.

[1] https://etherpad.openstack.org/p/nova-ptg-rocky

--

Thanks,

Matt

From mriedemos at gmail.com Mon Jan 8 18:35:15 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Mon, 8 Jan 2018 12:35:15 -0600
Subject: [openstack-dev] [nova] [libvirt] [scaleio] ScaleIO libvirt volume driver native mode
In-Reply-To: <1515423546.18267.2@smtp.office365.com>
References: <1515423546.18267.2@smtp.office365.com>
Message-ID: <6d5b6f05-e564-9957-b5f6-252e1d00188b@gmail.com>

On 1/8/2018 8:59 AM, Balázs Gibizer wrote:
> Two years ago a patch merged [1] that set the AIO mode of some of the
> libvirt volume drivers to 'native' instead of the default 'threads'.
> At that time the ScaleIO driver was not modified. Recently we did some
> measurements (on a Mitaka base) and we think that the ScaleIO volume
> driver could also benefit from the 'native' mode. So in Rocky we would
> like to propose a small change to set the 'native' mode for the ScaleIO
> volume driver too. Does anybody have opposing measurements or views?

You should probably talk to Eric Young (eric at aceshome.com) who is
working on adding the scaleio image backend for the libvirt driver.

--

Thanks,

Matt

From Eric.Young at dell.com Mon Jan 8 18:52:27 2018
From: Eric.Young at dell.com (young, eric)
Date: Mon, 8 Jan 2018 18:52:27 +0000
Subject: [openstack-dev] [nova] [libvirt] [scaleio] ScaleIO libvirt volume driver native mode
In-Reply-To: <6d5b6f05-e564-9957-b5f6-252e1d00188b@gmail.com>
References: <1515423546.18267.2@smtp.office365.com> <6d5b6f05-e564-9957-b5f6-252e1d00188b@gmail.com>
Message-ID: <6F85C95E-094E-403B-B0D9-60BC021F63B8@emc.com>

On the surface, I have no problems with this. Can you send me the
measurements and an idea of what the patch would look like? Please use my
work email at eric.young at dell.com

Eric

On 1/8/18, 1:35 PM, "Matt Riedemann" wrote:

>On 1/8/2018 8:59 AM, Balázs Gibizer wrote:
>> Two years ago a patch merged [1] that set the AIO mode of some of the
>> libvirt volume drivers to 'native' instead of the default 'threads'.
>> At that time the ScaleIO driver was not modified. Recently we did some
>> measurements (on a Mitaka base) and we think that the ScaleIO volume
>> driver could also benefit from the 'native' mode. So in Rocky we would
>> like to propose a small change to set the 'native' mode for the ScaleIO
>> volume driver too. Does anybody have opposing measurements or views?
>
>You should probably talk to Eric Young (eric at aceshome.com) who is
>working on adding the scaleio image backend for the libvirt driver.
>
>--
>
>Thanks,
>
>Matt
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From doug at doughellmann.com Mon Jan 8 19:03:37 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 08 Jan 2018 14:03:37 -0500
Subject: [openstack-dev] [reno] questions about reno
In-Reply-To:
References: <1515421459-sup-1398@lrrr.local>
Message-ID: <1515437772-sup-7395@lrrr.local>

Excerpts from Gaetan's message of 2018-01-08 17:59:55 +0100:
> Hello
>
> Thanks for your answers!
>
> - unreleased notes appear in release note with a title such as "0.1.0-2".
> > > Is it possible to not have any title or use "0.1.0-dev2" pattern like > > pbr ? > > > > I'm not sure why it matters, but if you want to work on that patch I'll > > help with reviews. > > > > Ok, I'll see what I can do :) > > > > > > - I guess that all notes should stay in the same folder version after > > > versions, and the > > > release notes of all versions will keep being automatically generated. > > > Don't you think > > > it might get difficult to manage all theses files? Is is possible to move > > > them in different folder (at least a folder "archives?) > > > > We've put off doing anything like that until we have a project with > > enough notes that we can observe the problems and decide how to fix > > them. Have you already reached that point or are you anticipating > > problems in the future? > > > > No, just started using reno and just see that this folder might get messy > quickly. Maybe I over-think this, I agree with you to observe first > > > > > > - it is possible to generate the NEWS file using reno ? I started trying > > > conversion with pandoc but the result are not great. > > > > How is the NEWS file different from CHANGES.txt that pbr produces? Is it > > the format, or the content? > > > > So, I like PBR that generates ChangeLog from the git history, but it has > lot of details (maybe too much). So, I was thinking to store in NEWS only > the release note > As an example, you can look how I plan to use it for Guake: > https://github.com/Guake/guake > I usually write the NEWS at each release manually ( > https://github.com/Guake/guake/blob/master/NEWS), and that's where reno > shines in my eyes :) That looks similar to the output of the report command, although maybe with less detail. I can see a couple of ways to do this. 1. Use the report format as it is now. 2. Add a section to the note file to include a "highlight" or "news" entry that is ignored most of the time but can be used to produce summaries like this. 3. Try to somehow derive a summary from the text in the notes automatically. Option 1 might work, although maybe you would end up with more detail than you really want? If you were able to go this route, you could take advantage of our plans to include release notes in source distributions automatically (so you wouldn't even need to check the file into git). Option 2 is do-able, but I'm a little concerned that having "magic" sections makes the processing of the input files more complicated so I'll have to think about whether it's really a good idea or not. Option 3 sounds relatively hard, given that release notes just need to be valid restructuredtext (meaning they don't need to be a list or other structure that would be easy to take the "first" part of. Doug From eharney at redhat.com Mon Jan 8 19:05:35 2018 From: eharney at redhat.com (Eric Harney) Date: Mon, 8 Jan 2018 14:05:35 -0500 Subject: [openstack-dev] [infra][requirements][cinder] Handling requirements for driverfixes branches Message-ID: <5b2dae09-57e1-6163-4868-6e9e055e143b@redhat.com> Hi all, I'm trying to sort out how to run unit tests on Cinder driverfixes branches. These branches are similar to stable branches, but live longer (and have a different set of rules for what changes are appropriate). In order for unit tests to work on these branches, requirements need to be pinned in the same way they are for stable branches (i.e. driverfixes/ocata matches stable/ocata's requirements). Currently, unit test jobs on these branches end up using requirements from master. 
It is not clear how I can pin requirements on these branches, since they aren't recognized as equivalent to stable branches by any of the normal tooling used in CI. I tried manually adding an upper-constraints.txt here [1] but this does not result in the correct dependencies being used. Where do changes need to be made for us to set the requirements/upper-constraints correctly for these branches? [1] https://review.openstack.org/#/c/503711/ Thanks, Eric From ramamani.yeleswarapu at intel.com Mon Jan 8 19:09:20 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Mon, 8 Jan 2018 19:09:20 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. This Week's Priorities (as of the weekly ironic meeting) ======================================================== 1. ironic-lib patches to finish before the freeze 1.1. fix waiting for partition: https://review.openstack.org/#/c/529325/ 2. Traits 2.1. https://review.openstack.org/#/c/528238/ 3. Rescue: 3.1. RPC https://review.openstack.org/#/c/509336/ 3.2. network interface update: https://review.openstack.org/#/c/509342 4. Routed Networks - Review for input only 4.1. Add baremetal neutron agent https://review.openstack.org/#/c/456235/ 5. Finishing the CI for the ansible deploy work 5.1. https://review.openstack.org/529640 5.2. https://review.openstack.org/#/c/529383/ 6. BIOS interface spec: 6.1. https://review.openstack.org/#/c/496481/ Vendor priorities ----------------- cisco-ucs: Patches in works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5 irmc: None oneview: Introduce hpOneView and ilorest to OneView - https://review.openstack.org/#/c/523943/ Subproject priorities --------------------- bifrost: Broken on recent fedora releases - TheJulia is working on it, patch should be up this week. ironic-inspector (or its client): (dtantsur) config options refactoring: https://review.openstack.org/#/c/515786/ networking-baremetal: neutron baremetal agent https://review.openstack.org/#/c/456235/ sushy and the redfish driver: (dtantsur) implement redfish sessions: https://review.openstack.org/#/c/471942/ Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - Stats (diff between 18 Dec 2017 and 08 Jan 2018) - Ironic: 219 bugs (+1) + 260 wishlist items (-1). 2 new, 158 in progress, 0 critical, 34 high (+2) and 28 incomplete (-2) - Inspector: 15 bugs (-2) + 28 wishlist items (-1). 0 new, 10 in progress (-5), 0 critical, 3 high (-1) and 5 incomplete - Nova bugs with Ironic tag: 13 (+1). 1 new (-1), 0 critical, 0 high - via http://dashboard-ironic.7e14.starter-us-west-2.openshiftapps.com/ - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. 
- prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ - (TheJulia) Currently WF-1, as revision is required for deprecation. - If provisioning network is changed, Ironic conductor does not behave correctly https://bugs.launchpad.net/ironic/+bug/1679260: Ironic conductor works correctly on changes of networks: https://review.openstack.org/#/c/462931/ - (rloo) needs some direction - may be fixed as part of https://review.openstack.org/#/c/460564/ - IPA may not find partition created by conductor https://bugs.launchpad.net/ironic-lib/+bug/1739421 - Fix proposed: https://review.openstack.org/#/c/529325/ - Inspector: Spurious race conditions detected white-/black-listing MAC addresses in dnsmasq PXE filter - https://bugs.launchpad.net/ironic-inspector/+bug/1741035 - Milan's legacy - needs triaging CI refactoring and missing test coverage ---------------------------------------- - not considered a priority, it's a 'do it always' thing - Standalone CI tests (vsaienk0) - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/ - localboot with partitioned image patches: - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ - when previous are merged TODO (vsaienko) - Upload tinycore partitioned image to tarbals.openstack.org - Switch ironic to use tinyipa partitioned image by default - Missing test coverage (all) - portgroups and attach/detach tempest tests: https://review.openstack.org/382476 - adoption: https://review.openstack.org/#/c/344975/ - should probably be changed to use standalone tests - root device hints: TODO - node take over - resource classes integration tests: https://review.openstack.org/#/c/443628/ - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957) Essential Priorities ==================== Ironic client API version negotiation (TheJulia, dtantsur) ---------------------------------------------------------- - RFE https://bugs.launchpad.net/python-ironicclient/+bug/1671145 - Nova bug https://bugs.launchpad.net/nova/+bug/1739440 - gerrit topic: https://review.openstack.org/#/q/topic:bug/1671145 - status as of 08 Jan 2017: - Nova request was accepted as a bug for now: https://bugs.launchpad.net/nova/+bug/1739440 - we will upgrade it to a blueprint if it starts looking a feature; no spec is probably needed - TODO: - easier access to versions in ironicclient - see https://etherpad.openstack.org/p/ironic-api-version-negotiation - discussion of various ways to implement it happened on the midcycle - dtantsur wants to have an API-SIG guideline on consuming versions in SDKs - still TODO - patches for ironicclient by TheJulia: - expose negotiated latest: https://review.openstack.org/531029 - accept list of versions: https://review.openstack.org/#/c/531271/ - establish foundation for using version negotiation in nova External project authentication rework (pas-ha, TheJulia) --------------------------------------------------------- - gerrit topic: https://review.openstack.org/#/q/topic:bug/1699547 - status as of 08 Jan 2017: - Ironic Done - 2 inspector patches left - https://review.openstack.org/#/c/515786/ - https://review.openstack.org/#/c/515787 Classic drivers deprecation (dtantsur) -------------------------------------- - spec: 
http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html - status as of 08 Jan 2017: - dev documentation for hardware types: TODO - switch documentation to hardware types: - status https://etherpad.openstack.org/p/ironic-switch-to-hardware-types - admin guide update (minus vendor bits): https://review.openstack.org/#/c/528337/ - need review + help from vendors updating their pages - migration of classic drivers to hardware types, in discussion... - http://lists.openstack.org/pipermail/openstack-dev/2017-November/124509.html - spec update: https://review.openstack.org/#/c/528308/ Traits support planning (mgoddard, johnthetubaguy, TheJulia, dtantsur) ---------------------------------------------------------------------- - http://specs.openstack.org/openstack/ironic-specs/specs/approved/node-traits.html - status as of 8 Jan 2018: - deploy templates spec: https://review.openstack.org/504952 needs reviews - depends on deploy-steps spec: https://review.openstack.org/#/c/412523 - patches for traits API - https://review.openstack.org/#/c/528238/ - johnthetubaguy is picking the ironic side of traits up now, mgoddard is taking a look at the nova virt driver side Reference architecture guide (dtantsur, sambetts) ------------------------------------------------- - status as of 08 Jan 2017: - dtantsur needs volunteers to help move this forward - list of cases from https://etherpad.openstack.org/p/ironic-queens-ptg-open-discussion - Admin-only provisioner - small and/or rare: TODO - large and/or frequent: TODO - Bare metal cloud for end users - smaller single-site: TODO - larger single-site: TODO - larger multi-site: TODO High Priorities =============== Neutron event processing (vdrok, vsaienk0, sambetts) ---------------------------------------------------- - status as of 27 Sep 2017: - spec at https://review.openstack.org/343684, ready for reviews, replies from authors - WIP code at https://review.openstack.org/440778 Routed network support (sambetts, vsaienk0, bfournie, hjensas) -------------------------------------------------------------- - status as of 18 Dec 2017: - hjensas taken over as main contributor from sambetts - Patches: - https://review.openstack.org/456235 Add baremetal neutron agent - https://review.openstack.org/524709 Make the agent distributed using hashring and notifications - https://review.openstack.org/521838 Switch from MechanismDriver to SimpleAgentMechanismDriverBase Rescue mode (rloo, stendulker, aparnav) --------------------------------------- - Status as on 18 Dec 2017 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open - ironic side: - All patches are up-to-date, being actively reviewed and updated - Tempest tests based on standalone ironic is WIP. - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: approved for Queens; waiting for ironic part to be done first. Queens feature freeze is week of Jan 22. 
- To get the nova patch merged, we need: - release new python-ironicclient - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirement (this patch needs to be posted manually) - code patch: https://review.openstack.org/#/c/416487/ Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 8 Jan 2017: - patch https://review.openstack.org/524433 Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo) ------------------------------------------------------- - etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking - cleaning up/centralizing job descriptions (eg 'irrelevant-files'): all done except inspector/client because stable/ocata CI is failing :-( - (dtantsur) nice to do, but not a priority. - Next TODO is to convert jobs on master, to proper ansible. NOT a high priority though. - (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/ Graphical console interface (pas-ha, vdrok, rpioso) --------------------------------------------------- - status as of 8 Jan 2017: - spec on review: https://review.openstack.org/#/c/306074/ - there is nova part here, which has to be approved too - dtantsur is worried by absence of progress here - (TheJulia) I think for rocky, it might be worth making it a prime focus, or making it a background goal. BIOS config framework (dtantsur, yolanda, rpioso) ------------------------------------------------- - status as of 8 Jan 2017: - spec under active review: https://review.openstack.org/#/c/496481/ Ansible deploy interface (pas-ha) --------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ansible-deploy-driver.html - status as of 8 Jan 2017: - code merged - TODO - CI job - https://review.openstack.org/529640 - https://review.openstack.org/#/c/529383/ - docs: https://review.openstack.org/#/c/525501/ OpenStack Priorities ==================== Python 3.5 compatibility (Nisha, Ankit) --------------------------------------- - Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases - this include all projects, not only ironic - please tag all reviews with topic "goal-python35" - TODO submit the python3 job for IPA - for ironic and ironic-inspector job enabled by disabling swift as swift is still lacking py3.5 support. - anupn to update the python3 job to build tinyipa with python3 - (anupn): Talked with swift folks and there is a bug upstream opened https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not on their priority - Right now patch pass all gate jobs except agent_- drivers. - we need to make the ironic job voting eventually. but we need to check that nova, glance and neutron already have voting python 3 jobs, otherwise they may break us. 
- nova seems to have python 3 jobs voting, here are our patches: - ironic https://review.openstack.org/#/c/531398/ - ironic-inspector https://review.openstack.org/#/c/531400/ Deploying with Apache and WSGI in CI (pas-ha, vsaienk0) ------------------------------------------------------- - ironic is mostly finished - (pas-ha) needs to be rewritten for uWSGI, patches on review: - https://review.openstack.org/#/c/507011/ +A - https://review.openstack.org/#/c/507067 Needs revision - inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218 - may be delayed to after Queens, as the HA work seems to take a different direction Split away the tempest plugin (jlvillal) ---------------------------------------- - https://etherpad.openstack.org/p/ironic-tempest-plugin-migration - Current (8-Jan-2018) (jlvillal): All projects now using tempest plugin code from openstack/ironic-tempest-plugin - Need to remove plugin code from master branch of openstack/ironic and openstack/ironic-inspector - Plugin code will NOT be removed from the stable branches of openstack/ironic and openstack/ironic-inspector - (jlvillal) 3rd Party CI has had over 3 weeks to prepare for removal. We should now move forward - README, setup.cfg and docs cleanup: https://review.openstack.org/#/c/529538/ Subprojects =========== Inspector (dtantsur) -------------------- - trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :Dhttps://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202 - follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up) Bifrost (TheJulia) ------------------ - Also seems a recent authenticaiton change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. `openstack` command does not work. - Fedora support is currently broken, thejulia has a patch in progress Drivers: -------- DRAC (rpioso, dtantsur) ~~~~~~~~~~~~~~~~~~~~~~~ - Dell Ironic CI is being rebuilt, its back and running now (10/17/2017) OneView (ricardoas/fellypefca) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Re-submitting reverted patches for migration from python-oneviewclient to python-hpOneView + python-ilorest-library - Check weekly priorities for most import patch to review Cisco UCS (sambetts) ~~~~~~~~~~~~~~~~~~~~ - Currently rebuilding third party CI from the ground up after it bit the dust - Patches for updating the UCS python SDKs are in the works and should be posted soon ......... Until next week, --Rama [0] https://etherpad.openstack.org/p/IronicWhiteBoard -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Jan 8 19:10:42 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 08 Jan 2018 14:10:42 -0500 Subject: [openstack-dev] [infra][requirements][cinder] Handling requirements for driverfixes branches In-Reply-To: <5b2dae09-57e1-6163-4868-6e9e055e143b@redhat.com> References: <5b2dae09-57e1-6163-4868-6e9e055e143b@redhat.com> Message-ID: <1515438577-sup-2584@lrrr.local> Excerpts from Eric Harney's message of 2018-01-08 14:05:35 -0500: > Hi all, > > I'm trying to sort out how to run unit tests on Cinder driverfixes branches. > > These branches are similar to stable branches, but live longer (and have > a different set of rules for what changes are appropriate). > > In order for unit tests to work on these branches, requirements need to > be pinned in the same way they are for stable branches (i.e. 
> driverfixes/ocata matches stable/ocata's requirements). Currently, unit > test jobs on these branches end up using requirements from master. > > It is not clear how I can pin requirements on these branches, since they > aren't recognized as equivalent to stable branches by any of the normal > tooling used in CI. I tried manually adding an upper-constraints.txt > here [1] but this does not result in the correct dependencies being used. > > Where do changes need to be made for us to set the > requirements/upper-constraints correctly for these branches? > > > [1] https://review.openstack.org/#/c/503711/ > > Thanks, > Eric > You'll want to just update the UPPER_CONSTRAINTS_FILE setting in tox.ini to point to the one for the relevant stable branch. If the branch no longer exists, you should be able to refer to the version of the file using the $release-eol tag instead of the branch name. Doug From haleyb.dev at gmail.com Mon Jan 8 19:59:33 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 8 Jan 2018 14:59:33 -0500 Subject: [openstack-dev] [neutron][all] nova_metadata_ip removal Message-ID: <2225f62d-eba2-825f-55e9-3fa1aff50e17@gmail.com> Hi, As part of the normal deprecate/removal process, the 'nova_metadata_ip' option is being removed from neutron as it was replaced with 'nova_metadata_host' in the Pike cycle, https://review.openstack.org/#/c/518836/ Codesearch did find various repos still using the old value, so I posted a number of cleanups for them since it was pretty painless, https://review.openstack.org/#/q/topic:nova_metadata_ip_deprecated+(status:open+OR+status:merged) This is just an FYI for anyone else that might trip over the old value going away once it finally merges, most will probably never notice. -Brian From edmondsw at us.ibm.com Mon Jan 8 20:11:08 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Mon, 8 Jan 2018 15:11:08 -0500 Subject: [openstack-dev] [nova] Working toward Queens feature freeze and RC1 In-Reply-To: References: Message-ID: > From: Matt Riedemann > To: "OpenStack Development Mailing List (not for usage questions)" > > Date: 01/03/2018 07:03 PM > Subject: [openstack-dev] [nova] Working toward Queens feature freeze and RC1 > ... snip ... > The rest of the blueprints are tracked here: > > https://urldefense.proofpoint.com/v2/url? > u=https-3A__etherpad.openstack.org_p_nova-2Dqueens-2Dblueprint-2Dstatus&d=DwIGaQ&c=jf_iaSHvJObTbx- > siA1ZOg&r=uPMq7DJxi29v-9CkM5RT0pxLlwteWvldJgmFhLURdvg&m=HVyvQHTZ4ft1C3JEJ9ij0uXwEy5_y3egSY7kNu_BvcU&s=mmvsEIKWRecnDlvYgLPwBAfPlVQQV5HEtHYMdDuaRME&e= I updated that etherpad with the latest status for the powervm blueprint. Should have 2 of the 3 remaining patches ready for review in the next day or two, and the last later in the week. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jaypipes at gmail.com Mon Jan 8 20:12:40 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 8 Jan 2018 15:12:40 -0500 Subject: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG In-Reply-To: References: Message-ID: <11ce8607-0a59-401d-0605-c36c2a901cf9@gmail.com> On 01/08/2018 12:26 PM, Zhipeng Huang wrote: > Hi all, > > With the maturing of resource provider/placement feature landing in > OpenStack in recent release, and also in light of Kubernetes community > increasing attention to the similar effort, I want to propose to form a > Resource Management SIG as a contact point for OpenStack community to > communicate with Kubernetes Resource Management WG[0] and other related > SIGs. > > The formation of the SIG is to provide a gathering of similar interested > parties and establish an official channel. Currently we have already > OpenStack developers actively participating in kubernetes discussion > (e.g. [1]), we would hope the ResMgmt SIG could further help such > activities and better align the resource mgmt mechanism, especially the > data modeling between the two communities (or even more communities with > similar desire). > > I have floated the idea with Jay Pipes and Chris Dent and received > positive feedback. The SIG will have a co-lead structure so that people > could spearheading in the area they are most interested in. For example > for me as Cyborg dev, I will mostly lead in the area of acceleration[2]. > > If you are also interested please reply to this thread, and let's find a > efficient way to form this SIG. Efficient means no extra unnecessary > meetings and other undue burdens. +1 From the Nova perspective, the scheduler meeting (which is Mondays at 1400 UTC) is the primary meeting where resource tracking and accounting issues are typically discussed. Chris Dent has done a fabulous job recording progress on the resource providers and placement work over the last couple releases by issuing status emails to the openstack-dev@ mailing list each Friday. I think having a bi-weekly cross-project (or even cross-ecosystem if we're talking about OpenStack+k8s) status email reporting any big events in the resource tracking world would be useful. As far as regular meetings for a resource management SIG, I'm +0 on that. I prefer to have targeted topical meetings over regular meetings. Best, -jay From miguel at mlavalle.com Mon Jan 8 20:51:18 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 8 Jan 2018 14:51:18 -0600 Subject: [openstack-dev] [Neutron] Bug deputy report Message-ID: - https://bugs.launchpad.net/neutron/+bug/1741954: create_and_list_trunk_subports rally scenario failed with timeouts. Armando Migliaccio assigned to it - CRITICAL: https://bugs.launchpad.net/neutron/+bug/1741889 functional: DbAddCommand sometimes times out after 10 seconds. This is causing repeated failures of the functional tests jobs. - https://bugs.launchpad.net/neutron/+bug/1741411: Centralized floating ip Error status. Needs environment to be reproduced. Will ask Swami for input - https://bugs.launchpad.net/neutron/+bug/1741407: L3 HA: 2 masters after restart of l3 agent. Needs environment to be reproduced. Will ask Swami for input - https://bugs.launchpad.net/neutron/+bug/1741079: Deleting heat stack doesn't delete dns records. Seems to be a problem in Heat. Gathering data from submitter - https://bugs.launchpad.net/neutron/+bug/1740885: Security group updates fail when port hasn't been initialized yet. 
Jakub Libosvar assigned on it -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Mon Jan 8 20:57:50 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 8 Jan 2018 14:57:50 -0600 Subject: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG In-Reply-To: <11ce8607-0a59-401d-0605-c36c2a901cf9@gmail.com> References: <11ce8607-0a59-401d-0605-c36c2a901cf9@gmail.com> Message-ID: <4f417f03-00f5-9163-8a53-fcade5670ccf@fried.cc> > I think having a bi-weekly cross-project (or even cross-ecosystem if > we're talking about OpenStack+k8s) status email reporting any big events > in the resource tracking world would be useful. As far as regular > meetings for a resource management SIG, I'm +0 on that. I prefer to have > targeted topical meetings over regular meetings. Agree with this. That said, please include me (efried) in whatever shakes out. On 01/08/2018 02:12 PM, Jay Pipes wrote: > On 01/08/2018 12:26 PM, Zhipeng Huang wrote: >> Hi all, >> >> With the maturing of resource provider/placement feature landing in >> OpenStack in recent release, and also in light of Kubernetes community >> increasing attention to the similar effort, I want to propose to form >> a Resource Management SIG as a contact point for OpenStack community >> to communicate with Kubernetes Resource Management WG[0] and other >> related SIGs. >> >> The formation of the SIG is to provide a gathering of similar >> interested parties and establish an official channel. Currently we >> have already OpenStack developers actively participating in kubernetes >> discussion (e.g. [1]), we would hope the ResMgmt SIG could further >> help such activities and better align the resource mgmt mechanism, >> especially the data modeling between the two communities (or even more >> communities with similar desire). >> >> I have floated the idea with Jay Pipes and Chris Dent and received >> positive feedback. The SIG will have a co-lead structure so that >> people could spearheading in the area they are most interested in. For >> example for me as Cyborg dev, I will mostly lead in the area of >> acceleration[2]. >> >> If you are also interested please reply to this thread, and let's find >> a efficient way to form this SIG. Efficient means no extra unnecessary >> meetings and other undue burdens. > > +1 > > From the Nova perspective, the scheduler meeting (which is Mondays at > 1400 UTC) is the primary meeting where resource tracking and accounting > issues are typically discussed. > > Chris Dent has done a fabulous job recording progress on the resource > providers and placement work over the last couple releases by issuing > status emails to the openstack-dev@ mailing list each Friday. > > I think having a bi-weekly cross-project (or even cross-ecosystem if > we're talking about OpenStack+k8s) status email reporting any big events > in the resource tracking world would be useful. As far as regular > meetings for a resource management SIG, I'm +0 on that. I prefer to have > targeted topical meetings over regular meetings. 
> > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From prometheanfire at gentoo.org Mon Jan 8 21:00:29 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 8 Jan 2018 15:00:29 -0600 Subject: [openstack-dev] [requirements][mistral][vitrage][octavia][taskflow][watcher] Networkx version 2.0 In-Reply-To: <20171220165650.n3kqgbsrstgs63he@gentoo.org> References: <623096DD-1612-46D8-B6C4-326255B276C8@nokia.com> <20171220165650.n3kqgbsrstgs63he@gentoo.org> Message-ID: <20180108210029.wlevjmgie3jdt26t@gentoo.org> On 17-12-20 10:56:50, Matthew Thode wrote: > On 17-12-20 15:51:17, Afek, Ifat (Nokia - IL/Kfar Sava) wrote: > > Hi, > > > > There is an open bug in launchpad about the new release of Networkx 2.0, that is backward incompatible with versions 1.x [1]. > > Is there a plan to change the Networkx version in the global requirements in Queens? We need to make some code refactoring in Vitrage, and I’m trying to understand how urgent it is. > > > > [1] https://bugs.launchpad.net/diskimage-builder/+bug/1718576 > > > > Mistral, Vitrage, Octavia, Taskflow, Watcher > > Those are the projects using NetworkX that'd need to be updated. > http://codesearch.openstack.org/?q=networkx&i=nope&files=.*requirements.*&repos= > > I'm open to uncapping networkx if these projects have buyin. > I've created https://review.openstack.org/531902 that your patches can depend upon. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From miguel at mlavalle.com Mon Jan 8 22:20:53 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 8 Jan 2018 16:20:53 -0600 Subject: [openstack-dev] [neutron] Rocky PTG planning Message-ID: Hi, I have started an etherpad ( https://etherpad.openstack.org/p/neutron-ptg-rocky) to start planning the topics we are going to discuss in the Neutron sessions during the Rocky PTG in Dublin. At this point in time, it is entirely free form. Just make sure you put your name and / or IRC nickname next to the topics you add See you in Dublin! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Mon Jan 8 22:43:00 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Mon, 08 Jan 2018 14:43:00 -0800 Subject: [openstack-dev] [congress] generic push driver In-Reply-To: References: Message-ID: Hi Ifat, From: "Afek, Ifat (Nokia - IL/Kfar Sava)" Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Sunday, January 7, 2018 at 4:00 AM To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [congress] generic push driver > Hi Eric, > > I have two questions: > > 1. An alarm is usually raised on a resource, and in Vitrage we can send > you the details of that resource. Is there a way in Congress for the alarm to > reference a resource that exists in another table? And what if the resource > does not exist in Congress? First, the columns I chose are just a minimal sample to illustrate the generic nature of the driver. In use with vitrage, we would probably also want to include columns such as `resource_id`. Does that address the need to reference a resource? 
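For concreteness, a sketch of how the earlier schema example could carry a
resource reference (the columns here are still placeholders, not a settled
interface):

congress datasource create generic_push_driver vitrage --config schema='
{
   "tables":[
      {
         "name":"alarms",
         "columns":["id", "name", "state", "severity", "resource_id"]
      }
   ]
}
'

A policy could then join on that column to resolve or sanity-check the
reference, e.g. (table shapes invented for illustration, with
known_resource defined by other rules):

orphan_alarm(id) :-
    vitrage:alarms(id, name, state, severity, rid),
    not known_resource(rid)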
That resource referenced by ID may or may not exist in another part of
Congress. It would be the job of the policy to resolve references when
taking appropriate actions. If referential integrity is needed, additional
policy rules can be specified to catch breakage.

This brings up a related question I had about vitrage: Looking at the
vertex properties listed here:
https://github.com/openstack/vitrage/blob/master/vitrage/common/constants.py#L17
Where can I find more information about the type and content of data in
each property? Example:
- is the `resource` property an ID string or a Python object reference?
- what does the property `is_real_vitrage_id` represent?
- what is the difference between `resource_id` and `vitrage_resource_id` ?

> 2. Do you plan to support also updateRows? This can be useful for alarm
> state changes.

Are you thinking about updating an entire row or updating a specific field
of a row? That is, update row
{"id":"1-1", "name":"name1", "state":"active", "severity":1}
to become
{"id":"1-1", "name":"name1", "state":"active", "severity":100}
vs. update the severity field of the row with id "1-1" to severity 100.
Both could be supported, but the second one is more complex to support
efficiently.

Thanks!
Eric

>
> Thanks,
> Ifat
>
>
>
> From: Eric K
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>
> Date: Saturday, 6 January 2018 at 3:50
> To: "OpenStack Development Mailing List (not for usage questions)"
>
> Subject: [openstack-dev] [congress] generic push driver
>
>
>
> We've been discussing generic push drivers for Congress for quite a while.
> Finally sketching out something concrete and looking for some preliminary
> feedback. Below are sample interactions with a proposed generic push driver. A
> generic push driver could be used to receive push updates from vitrage,
> monasca, and many other sources.
>
>
>
> 1. creating a datasource:
>
>
>
> congress datasource create generic_push_driver vitrage --config schema='
>
> {
>
> "tables":[
>
> {
>
> "name":"alarms",
>
> "columns":[
>
> "id",
>
> "name",
>
> "state",
>
> "severity",
>
> ]
>
> }
>
> ]
>
> }
>
> '
>
>
>
> 2. Update an entire table:
>
>
>
> PUT '/v1/data-sources/vitrage/tables/alarms' with body:
>
> {
>
> "rows":[
>
> {
>
> "id":"1-1",
>
> "name":"name1",
>
> "state":"active",
>
> "severity":1
>
> },
>
> [
>
> "1-2",
>
> "name2",
>
> "active",
>
> 2
>
> ]
>
> ]
>
> }
>
> Note that a row can be either a {} or []
>
>
>
>
>
> 3. perform differential update:
>
>
>
> PUT '/v1/data-sources/vitrage/tables/alarms' with body:
>
> {
>
> "addrows":[
>
> {
>
> "id":"1-1",
>
> "name":"name1",
>
> "state":"active",
>
> "severity":1
>
> },
>
> [
>
> "1-2",
>
> "name2",
>
> "active",
>
> 2
>
> ]
>
> ]
>
> }
>
>
>
> OR
>
>
>
> {
>
> "deleterows":[
>
> {
>
> "id":"1-1",
>
> "name":"name1",
>
> "state":"active",
>
> "severity":1
>
> },
>
> [
>
> "1-2",
>
> "name2",
>
> "active",
>
> 2
>
> ]
>
> ]
>
> }
>
>
>
> Note 1: we may allow 'rows', 'addrows', and 'deleterows' to be used together
> with some well defined semantics. Alternatively we may mandate that each
> request can have only one of the three pieces.
>
>
>
> Note 2: we leave it as the responsibility of the sender to send and confirm
> the requests for differential updates in correct order. We could add
> sequencing in future work.
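P.S. To make the field-level option above concrete, one possible shape
(purely illustrative, not part of the current proposal):

PATCH '/v1/data-sources/vitrage/tables/alarms' with body:
{
   "updaterows":[
      { "match": { "id": "1-1" }, "update": { "severity": 100 } }
   ]
}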
> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Mon Jan 8 22:59:47 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Mon, 08 Jan 2018 14:59:47 -0800 Subject: [openstack-dev] [congress] generic push driver In-Reply-To: References: Message-ID: From: Tim Hinrichs Date: Monday, January 8, 2018 at 7:31 AM To: Eric Kao Cc: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [congress] generic push driver > It's probably worth considering PATCH instead of PUT for updating the table. Ah right of course. PATCH makes more sense here. > > http://restcookbook.com/HTTP%20Methods/patch/ > > You could also think about using JSON-patch to describe the requested update. > It provides fine-grained update semantics: > > https://tools.ietf.org/html/rfc6902 Hmm it would be very nice to follow an existing standard. Unfortunately the json patch path specifications seem like an awkward fit with the set semantics of congress tables. Removal, for example, must be done by specifying the array index of the row to be removed. But perhaps we can borrow the style of json patch for patching sets. For example: PATCH '/v1/data-sources/vitrage/tables/alarms' with body: [ { "op":"add", "path":"/", "value":{ "id":"1-1", "name":"name1", "state":"active", "severity":1 } }, { "op":"add", "path":"/", "value":[ "1-2", "name2", "active", 2 ] }, { "op":"remove", "path":"/", "value":[ "1-2", "name2", "active", 2 ] } ] Would that work well? At least there will be well-defined semantic based on sequential operation. > > Tim > > On Fri, Jan 5, 2018 at 5:50 PM Eric K wrote: >> We've been discussing generic push drivers for Congress for quite a while. >> Finally sketching out something concrete and looking for some preliminary >> feedback. Below are sample interactions with a proposed generic push driver. >> A generic push driver could be used to receive push updates from vitrage, >> monasca, and many other sources. >> >> 1. creating a datasource: >> >> congress datasource create generic_push_driver vitrage --config schema=' >> { >> "tables":[ >> { >> "name":"alarms", >> "columns":[ >> "id", >> "name", >> "state", >> "severity", >> ] >> } >> ] >> } >> ' >> >> 2. Update an entire table: >> >> PUT '/v1/data-sources/vitrage/tables/alarms' with body: >> { >> "rows":[ >> { >> "id":"1-1", >> "name":"name1", >> "state":"active", >> "severity":1 >> }, >> [ >> "1-2", >> "name2", >> "active", >> 2 >> ] >> ] >> } >> Note that a row can be either a {} or [] >> >> >> 3. perform differential update: >> >> PUT '/v1/data-sources/vitrage/tables/alarms' with body: >> { >> "addrows":[ >> { >> "id":"1-1", >> "name":"name1", >> "state":"active", >> "severity":1 >> }, >> [ >> "1-2", >> "name2", >> "active", >> 2 >> ] >> ] >> } >> >> OR >> >> { >> "deleterows":[ >> { >> "id":"1-1", >> "name":"name1", >> "state":"active", >> "severity":1 >> }, >> [ >> "1-2", >> "name2", >> "active", >> 2 >> ] >> ] >> } >> >> Note 1: we may allow 'rows', 'addrows', and 'deleterows' to be used together >> with some well defined semantics. Alternatively we may mandate that each >> request can have only one of the three pieces. 
>> >> Note 2: we leave it as the responsibility of the sender to send and confirm
>> the requests for differential updates in correct order. We could add
>> sequencing in future work.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From ranand at suse.com Mon Jan 8 23:02:16 2018
From: ranand at suse.com (Ritesh Anand)
Date: Mon, 08 Jan 2018 16:02:16 -0700
Subject: [openstack-dev] [designate] state of pdns4 backend
Message-ID: <5A53F878020000900001E0E0@prv-mh.provo.novell.com>

Hi Stackers,

I see that we moved from the PowerDNS backend to the PDNS4 backend. I have
a few questions in that regard:

1. Should powerdns 3.4 (with the PowerDNS backend) continue to work fine on
Pike OpenStack?
2. Why did we change the default backend to BIND9?
3. How feasible is moving from one backend to the other? Say we move from
the PowerDNS to the BIND9 backend: if I generate BIND zone files from the
MySQL-backed PowerDNS, and make the necessary designate config changes, is
that sufficient?

Thanks again for your help!

Best,
Ritesh

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From zhipengh512 at gmail.com Tue Jan 9 00:40:34 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Tue, 9 Jan 2018 08:40:34 +0800
Subject: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG
In-Reply-To:
References:
Message-ID:

Agree 100% on avoiding regular meetings; it is better to have a bi-weekly
email report. Meetings should be arranged on an event basis, and I think,
given the status of the OpenStack community's work on resource providers,
mostly what we need to do is attend k8s meetings (sig-scheduler,
wg-resource-management, etc.)

BTW, for the RM SIG proposed here, let's not limit the scope to k8s only,
since we might have broader collaborative efforts happening in the future.
k8s is our first primary target community to sync up with.

On Tue, Jan 9, 2018 at 4:12 AM, Jay Pipes wrote:

> On 01/08/2018 12:26 PM, Zhipeng Huang wrote:
>
>> Hi all,
>>
>> With the maturing of resource provider/placement feature landing in
>> OpenStack in recent release, and also in light of Kubernetes community
>> increasing attention to the similar effort, I want to propose to form a
>> Resource Management SIG as a contact point for OpenStack community to
>> communicate with Kubernetes Resource Management WG[0] and other related
>> SIGs.
>>
>> The formation of the SIG is to provide a gathering of similar interested
>> parties and establish an official channel. Currently we have already
>> OpenStack developers actively participating in kubernetes discussion (e.g.
>> [1]), we would hope the ResMgmt SIG could further help such activities and
>> better align the resource mgmt mechanism, especially the data modeling
>> between the two communities (or even more communities with similar desire).
>>
>> I have floated the idea with Jay Pipes and Chris Dent and received
>> positive feedback. The SIG will have a co-lead structure so that people
>> could spearheading in the area they are most interested in. For example for
>> me as Cyborg dev, I will mostly lead in the area of acceleration[2].
>>
>> If you are also interested please reply to this thread, and let's find a
>> efficient way to form this SIG. Efficient means no extra unnecessary
>> meetings and other undue burdens.
>> > > +1 > > From the Nova perspective, the scheduler meeting (which is Mondays at 1400 > UTC) is the primary meeting where resource tracking and accounting issues > are typically discussed. > > Chris Dent has done a fabulous job recording progress on the resource > providers and placement work over the last couple releases by issuing > status emails to the openstack-dev@ mailing list each Friday. > > I think having a bi-weekly cross-project (or even cross-ecosystem if we're > talking about OpenStack+k8s) status email reporting any big events in the > resource tracking world would be useful. As far as regular meetings for a > resource management SIG, I'm +0 on that. I prefer to have targeted topical > meetings over regular meetings. > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Tue Jan 9 00:41:45 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 9 Jan 2018 08:41:45 +0800 Subject: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG In-Reply-To: <4f417f03-00f5-9163-8a53-fcade5670ccf@fried.cc> References: <11ce8607-0a59-401d-0605-c36c2a901cf9@gmail.com> <4f417f03-00f5-9163-8a53-fcade5670ccf@fried.cc> Message-ID: Hi Eric, Glad to count you in :) On Tue, Jan 9, 2018 at 4:57 AM, Eric Fried wrote: > > I think having a bi-weekly cross-project (or even cross-ecosystem if > > we're talking about OpenStack+k8s) status email reporting any big events > > in the resource tracking world would be useful. As far as regular > > meetings for a resource management SIG, I'm +0 on that. I prefer to have > > targeted topical meetings over regular meetings. > > Agree with this. That said, please include me (efried) in whatever > shakes out. > > On 01/08/2018 02:12 PM, Jay Pipes wrote: > > On 01/08/2018 12:26 PM, Zhipeng Huang wrote: > >> Hi all, > >> > >> With the maturing of resource provider/placement feature landing in > >> OpenStack in recent release, and also in light of Kubernetes community > >> increasing attention to the similar effort, I want to propose to form > >> a Resource Management SIG as a contact point for OpenStack community > >> to communicate with Kubernetes Resource Management WG[0] and other > >> related SIGs. > >> > >> The formation of the SIG is to provide a gathering of similar > >> interested parties and establish an official channel. Currently we > >> have already OpenStack developers actively participating in kubernetes > >> discussion (e.g. [1]), we would hope the ResMgmt SIG could further > >> help such activities and better align the resource mgmt mechanism, > >> especially the data modeling between the two communities (or even more > >> communities with similar desire). > >> > >> I have floated the idea with Jay Pipes and Chris Dent and received > >> positive feedback. 
The SIG will have a co-lead structure so that > >> people could spearheading in the area they are most interested in. For > >> example for me as Cyborg dev, I will mostly lead in the area of > >> acceleration[2]. > >> > >> If you are also interested please reply to this thread, and let's find > >> a efficient way to form this SIG. Efficient means no extra unnecessary > >> meetings and other undue burdens. > > > > +1 > > > > From the Nova perspective, the scheduler meeting (which is Mondays at > > 1400 UTC) is the primary meeting where resource tracking and accounting > > issues are typically discussed. > > > > Chris Dent has done a fabulous job recording progress on the resource > > providers and placement work over the last couple releases by issuing > > status emails to the openstack-dev@ mailing list each Friday. > > > > I think having a bi-weekly cross-project (or even cross-ecosystem if > > we're talking about OpenStack+k8s) status email reporting any big events > > in the resource tracking world would be useful. As far as regular > > meetings for a resource management SIG, I'm +0 on that. I prefer to have > > targeted topical meetings over regular meetings. > > > > Best, > > -jay > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From MM9745 at att.com Tue Jan 9 01:21:54 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Tue, 9 Jan 2018 01:21:54 +0000 Subject: [openstack-dev] [openstack-helm] normal team meeting tomorrow Message-ID: <7C64A75C21BB8D43BD75BB18635E4D8965491D18@MOSTLS1MSGUSRFF.ITServices.sbc.com> OpenStack-Helm team: Due to availability of some folks who want to participate in our monthly CI/CD-focused meeting, we're going to push it back to 1/16. Tomorrow's (1/9) meeting will be a normal weekly meeting instead. Thanks! Matt From ekcs.openstack at gmail.com Tue Jan 9 02:35:49 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Mon, 08 Jan 2018 18:35:49 -0800 Subject: [openstack-dev] [infra][tempest][devstack][congress] tempest.config.CONF.service_available changed on Jan 2/3? In-Reply-To: References: Message-ID: On 1/7/18, 9:27 PM, "Ghanshyam Mann" wrote: >On Sat, Jan 6, 2018 at 3:41 PM, Chandan kumar >wrote: >> Hello Eric, >> >> On Sat, Jan 6, 2018 at 4:46 AM, Eric K wrote: >>> Seems that sometime between 1/2 and 1/3 this year, >>> tempest.config.CONF.service_available.aodh_plugin as well as >>> ..service_available.mistral became unavailable in congress dsvm >>>check/gate >>> job. 
[1][2] >>> >>> I've checked the changes that went in to congress, tempest, devstack, >>> devstack-gate, aodh, and mistral during that period but don't see >>> obvious >>> causes. Any suggestions on where to look next to fix the issue? Thanks >>> very much! > >These config options should stay there even after separating out the tempest >plugins. I have checked the aodh and mistral config options and they are >present as tempest config options. > >- >https://github.com/openstack/telemetry-tempest-plugin/blob/b30a19214d0036141de75047b444d48ae0d0b656/telemetry_tempest_plugin/config.py#L27 >- >https://github.com/openstack/mistral-tempest-plugin/blob/63a0fe20f98e0cb8316beb81ca77249ffdda29c5/mistral_tempest_tests/config.py#L18 > > >The issue occurred because the in-tree plugins were removed before congress >was set up to use the new repo. We should not remove an in-tree plugin >before the gate setup consuming the new plugin is complete for each >consumer of the plugin. > >>> >> >> The aodh tempest plugin [https://review.openstack.org/#/c/526299/] is >> moved to telemetry-tempest-plugin >> [https://github.com/openstack/telemetry-tempest-plugin]. >> I have sent a patch to the Congress project to fix the issue: >> https://review.openstack.org/#/c/531534/ > >Thanks Chandan, this will fix the congress issue for Aodh; we need the same >fix for the mistral case too. Thank you Chandan Kumar and Ghanshyam Mann! It seems that adding telemetry-tempest-plugin does not solve the issue, though. I did a test patch based off of Chandan's, and the aodh test was still skipped. Any ideas what more needs to be done? Thanks so much! https://review.openstack.org/#/c/531922/ http://logs.openstack.org/22/531922/1/check/congress-devstack-api-mysql/3812b5d/logs/testr_results.html.gz > >> >> The mistral bundled in-tree tempest plugin >> [https://review.openstack.org/#/c/526918/] is also moved to the >> mistral-tempest-plugin repo >> [https://github.com/openstack/mistral-tempest-plugin] >> >> Tests are moved to a new repo as part of the Tempest Plugin Split goal >> [https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html]. >> Feel free to consume the new tempest plugin and let me know if you >> need any more help. >> >> Thanks, >> >> Chandan Kumar >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From renat.akhmerov at gmail.com Tue Jan 9 05:58:07 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 9 Jan 2018 12:58:07 +0700 Subject: [openstack-dev] [requirements][mistral][vitrage][octavia][taskflow][watcher] Networkx version 2.0 In-Reply-To: <20180108210029.wlevjmgie3jdt26t@gentoo.org> References: <623096DD-1612-46D8-B6C4-326255B276C8@nokia.com> <20171220165650.n3kqgbsrstgs63he@gentoo.org> <20180108210029.wlevjmgie3jdt26t@gentoo.org> Message-ID: <84075816-80f9-4359-b778-ec4638fc7697@Spark> We (Mistral) are ready to react too whenever the version is bumped. IMO, the sooner the better.
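For anyone scoping that refactoring, a minimal illustration of the kind of NetworkX 1.x code that breaks under 2.0, based on the upstream 2.0 release notes (the toy graph is just an example):

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([(1, 2), (2, 3)])

    # 1.x: G.nodes() and G.edges() returned lists, so positional indexing
    # worked: first = G.nodes()[0]
    # 2.0: they return view objects, and G.nodes()[0] becomes an attribute
    # lookup by node (a KeyError here) instead of positional indexing.
    first = list(G.nodes())[0]
    edges = list(G.edges(data=True))

    # 1.x: G.degree() returned a dict; 2.0 returns a DegreeView.
    # Wrapping it in dict() works under both versions.
    degrees = dict(G.degree())
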
Thanks Renat Akhmerov @Nokia On 9 Jan 2018, 04:00 +0700, Matthew Thode wrote: > On 17-12-20 10:56:50, Matthew Thode wrote: > > On 17-12-20 15:51:17, Afek, Ifat (Nokia - IL/Kfar Sava) wrote: > > > Hi, > > > > > > There is an open bug in launchpad about the new release of Networkx 2.0, that is backward incompatible with versions 1.x [1]. > > > Is there a plan to change the Networkx version in the global requirements in Queens? We need to make some code refactoring in Vitrage, and I’m trying to understand how urgent it is. > > > > > > [1] https://bugs.launchpad.net/diskimage-builder/+bug/1718576 > > > > > > > Mistral, Vitrage, Octavia, Taskflow, Watcher > > > > Those are the projects using NetworkX that'd need to be updated. > > http://codesearch.openstack.org/?q=networkx&i=nope&files=.*requirements.*&repos= > > > > I'm open to uncapping networkx if these projects have buy-in. > > > > I've created https://review.openstack.org/531902 that your patches can > depend upon. > > -- > Matthew Thode (prometheanfire) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From sriharsha.basavapatna at broadcom.com Tue Jan 9 06:00:14 2018 From: sriharsha.basavapatna at broadcom.com (Sriharsha Basavapatna) Date: Tue, 9 Jan 2018 11:30:14 +0530 Subject: [openstack-dev] [os-vif] Message-ID: Hi, I've uploaded a patch for review: https://review.openstack.org/#/c/531674/ This is the first time I'm submitting a patch on openstack. I'd like to add code reviewers on this patch. I'd appreciate if you could point me to any guidelines on how to pick reviewers for a given project (os-vif library in this case). Thanks, -Harsha From ykarel at redhat.com Tue Jan 9 06:18:38 2018 From: ykarel at redhat.com (Yatin Karel) Date: Tue, 9 Jan 2018 11:48:38 +0530 Subject: [openstack-dev] [os-vif] In-Reply-To: References: Message-ID: Hi Sriharsha, You can check the core reviewers for os-vif here:- https://review.openstack.org/#/admin/groups/1175,members or from system:- ssh -p 29418 <gerrit username>@review.openstack.org gerrit ls-members os-vif-core (This can be run from the system for which you have added the public keys to gerrit) To add all core-reviewers, you can just type os-vif-core in Add Reviewer Text box on gerrit. For other projects:- Open Gerrit --> Go to People TAB --> Select "List Groups" --> search project (look for -core) Hope this helps. On Tue, Jan 9, 2018 at 11:30 AM, Sriharsha Basavapatna wrote: > Hi, > > I've uploaded a patch for review: > https://review.openstack.org/#/c/531674/ > > This is the first time I'm submitting a patch on openstack. I'd like > to add code reviewers on this patch. I'd appreciate if you could point > me to any guidelines on how to pick reviewers for a given project > (os-vif library in this case).
> > Thanks, > -Harsha > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aj at suse.com Tue Jan 9 06:29:42 2018 From: aj at suse.com (Andreas Jaeger) Date: Tue, 9 Jan 2018 07:29:42 +0100 Subject: [openstack-dev] [os-vif] In-Reply-To: References: Message-ID: <1a198bb2-d863-0695-473d-3e062fabf4d9@suse.com> On 2018-01-09 07:00, Sriharsha Basavapatna wrote: > Hi, > > I've uploaded a patch for review: > https://review.openstack.org/#/c/531674/ > > This is the first time I'm submitting a patch on openstack. I'd like > to add code reviewers on this patch. I'd appreciate if you could point > me to any guidelines on how to pick reviewers for a given project > (os-vif library in this case). In general, there's no need to add core reviewers to a review. Each of us have their own dashboards or watch projects that we review and review as time permits. Since your change is failing the testsuite: Please fix tests first! os-vif is part of nova, so if you want to discuss about it, best to join the #openstack-nova freenode IRC channel, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From zhangyujun+zte at gmail.com Tue Jan 9 06:33:40 2018 From: zhangyujun+zte at gmail.com (Yujun Zhang (ZTE)) Date: Tue, 09 Jan 2018 06:33:40 +0000 Subject: [openstack-dev] [vitrage] rules in vitrage_aggregated_state() Message-ID: Hi root causers I have been inspecting the code about aggregated state recently and have a question regarding the rules. The "not" operator in the if clause confuses me. If it is not a configured data source, how do we apply the aggregation rules? It seems this is handled in else clause. if datasource_name in self.datasources_state_confs or \ datasource_name *not* in self.conf.datasources.types: ... else: self.category_normalizer[vitrage_category].set_aggregated_value( new_vertex, self.UNDEFINED_DATASOURCE) self.category_normalizer[vitrage_category].set_operational_value( new_vertex, self.UNDEFINED_DATASOURCE) There are some test case describing the expected behavior. But I couldn't understand the design philosophy behind it. What is expected when 1. the data source is not defined 2. data source defined but state config not exist 3. data source defined, state config exist but the state is not found. Could somebody shed some light on it? -- Yujun Zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Tue Jan 9 06:34:45 2018 From: aj at suse.com (Andreas Jaeger) Date: Tue, 9 Jan 2018 07:34:45 +0100 Subject: [openstack-dev] [os-vif] In-Reply-To: References: Message-ID: <2a6a1a7b-6eef-11d3-15cf-711d8fe1cfd3@suse.com> On 2018-01-09 07:00, Sriharsha Basavapatna wrote: > Hi, > > I've uploaded a patch for review: > https://review.openstack.org/#/c/531674/ > > This is the first time I'm submitting a patch on openstack. I'd like Welcome to OpenStack, Harsha. Please read https://docs.openstack.org/infra/manual/developers.html if you haven't. 
I see that your change fails the basic tests, you can run these locally as follows to check that your fixes will pass: tox -e pep8 tox -e py27 Andreas > to add code reviewers on this patch. I'd appreciate if you could point > me to any guidelines on how to pick reviewers for a given project > (os-vif library in this case). > > Thanks, > -Harsha > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From zhangyujun+zte at gmail.com Tue Jan 9 06:34:45 2018 From: zhangyujun+zte at gmail.com (Yujun Zhang (ZTE)) Date: Tue, 09 Jan 2018 06:34:45 +0000 Subject: [openstack-dev] [vitrage] rules in vitrage_aggregated_state() In-Reply-To: References: Message-ID: Forgot to paste the link to the related code: https://git.openstack.org/cgit/openstack/vitrage/tree/vitrage/entity_graph/mappings/datasource_info_mapper.py#n61 On Tue, Jan 9, 2018 at 2:34 PM Yujun Zhang (ZTE) wrote: > Hi root causers > > I have been inspecting the code about aggregated state recently and have a > question regarding the rules. > > The "not" operator in the if clause confuses me. If it is not a configured > data source, how do we apply the aggregation rules? It seems this is > handled in else clause. > > if datasource_name in self.datasources_state_confs or \ > datasource_name *not* in self.conf.datasources.types: ... > > else: > self.category_normalizer[vitrage_category].set_aggregated_value( > new_vertex, self.UNDEFINED_DATASOURCE) > self.category_normalizer[vitrage_category].set_operational_value( > new_vertex, self.UNDEFINED_DATASOURCE) > > > There are some test case describing the expected behavior. But I couldn't understand the design philosophy behind it. What is expected when > > 1. the data source is not defined > 2. data source defined but state config not exist > 3. data source defined, state config exist but the state is not found. > > Could somebody shed some light on it? > > > > -- > Yujun Zhang > -- Yujun Zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From glongwave at gmail.com Tue Jan 9 06:59:28 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Tue, 9 Jan 2018 14:59:28 +0800 Subject: [openstack-dev] [oslo] proposing Stephen Finucan for oslo-core In-Reply-To: <646ac9cf-c518-e1b6-b807-003ddf9d6abf@nemebean.com> References: <1515423211-sup-8000@lrrr.local> <646ac9cf-c518-e1b6-b807-003ddf9d6abf@nemebean.com> Message-ID: +1 for stephenfin. 2018-01-09 1:34 GMT+08:00 Ben Nemec : > Definite +1 > > > On 01/08/2018 08:55 AM, Doug Hellmann wrote: > >> Stephen (sfinucan) has been working on pbr, oslo.config, and >> oslo.policy and reviewing several of the other Oslo libraries for >> a while now. His reviews are always helpful and I think he would >> make a good addition to the oslo-core team. >> >> As per our usual practice, please reply here with a +1 or -1 and >> any reservations. 
>> >> Doug >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From glongwave at gmail.com Tue Jan 9 07:11:39 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Tue, 9 Jan 2018 15:11:39 +0800 Subject: [openstack-dev] [oslo] Rocky PTG planning Message-ID: Hi, The Dublin PTG is approaching. It's time to collect topics before the PTG. Just put your ideas or discussion topics in https://etherpad.openstack.org/p/oslo-ptg-rocky -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From sriharsha.basavapatna at broadcom.com Tue Jan 9 08:41:40 2018 From: sriharsha.basavapatna at broadcom.com (Sriharsha Basavapatna) Date: Tue, 9 Jan 2018 14:11:40 +0530 Subject: [openstack-dev] [os-vif] In-Reply-To: References: Message-ID: Hi Yatin, Thanks for the info; I'll add the reviewers. -Harsha On Tue, Jan 9, 2018 at 11:48 AM, Yatin Karel wrote: > Hi Sriharsha, > > You can check the core reviewers for os-vif here:- > https://review.openstack.org/#/admin/groups/1175,members > > or from system:- ssh -p 29418 <gerrit username>@review.openstack.org gerrit ls-members os-vif-core (This > can be run from the system for which you have added the public keys to > gerrit) > > To add all core-reviewers, you can just type os-vif-core in Add > Reviewer Text box on gerrit. > > For other projects:- Open Gerrit --> Go to People TAB --> Select "List > Groups" --> search project (look for -core) > > > Hope this helps. > > On Tue, Jan 9, 2018 at 11:30 AM, Sriharsha Basavapatna > wrote: >> Hi, >> >> I've uploaded a patch for review: >> https://review.openstack.org/#/c/531674/ >> >> This is the first time I'm submitting a patch on openstack. I'd like >> to add code reviewers on this patch. I'd appreciate if you could point >> me to any guidelines on how to pick reviewers for a given project >> (os-vif library in this case).
>> >> Thanks, >> -Harsha >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sriharsha.basavapatna at broadcom.com Tue Jan 9 08:50:35 2018 From: sriharsha.basavapatna at broadcom.com (Sriharsha Basavapatna) Date: Tue, 9 Jan 2018 14:20:35 +0530 Subject: [openstack-dev] [os-vif] In-Reply-To: <2a6a1a7b-6eef-11d3-15cf-711d8fe1cfd3@suse.com> References: <2a6a1a7b-6eef-11d3-15cf-711d8fe1cfd3@suse.com> Message-ID: Hi Andreas, On Tue, Jan 9, 2018 at 12:04 PM, Andreas Jaeger wrote: > On 2018-01-09 07:00, Sriharsha Basavapatna wrote: >> Hi, >> >> I've uploaded a patch for review: >> https://review.openstack.org/#/c/531674/ >> >> This is the first time I'm submitting a patch on openstack. I'd like > > Welcome to OpenStack, Harsha. Thank you. > Please read > https://docs.openstack.org/infra/manual/developers.html if you haven't. Ok, i'll read it. > > I see that your change fails the basic tests, you can run these locally > as follows to check that your fixes will pass: > > tox -e pep8 > tox -e py27 I was wondering if there's a way to catch these errors without having to submit it for gerrit review. I fixed the ones that were reported in patch-set-1; looks like there's some new ones in the second patch-set. I'll run the above commands to verify the fix locally. Thanks, -Harsha > > Andreas > >> to add code reviewers on this patch. I'd appreciate if you could point >> me to any guidelines on how to pick reviewers for a given project >> (os-vif library in this case). >> >> Thanks, >> -Harsha >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > From honjo.rikimaru at po.ntt-tx.co.jp Tue Jan 9 09:11:09 2018 From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo) Date: Tue, 9 Jan 2018 18:11:09 +0900 Subject: [openstack-dev] [oslo][oslo.log]Re: Error will be occurred if watch_log_file option is true In-Reply-To: <1515074711-sup-5593@lrrr.local> References: <1515074711-sup-5593@lrrr.local> Message-ID: <165d1214-d0af-b634-6a29-c3e3afe52797@po.ntt-tx.co.jp> Hello, On 2018/01/04 23:12, Doug Hellmann wrote: > Excerpts from Rikimaru Honjo's message of 2018-01-04 18:22:26 +0900: >> Hello, >> >> The below bug was reported in Masakari's Launchpad. >> I think that this bug was caused by oslo.log. >> (And, the root cause is a bug of pyinotify using by oslo.log. The detail is >> written in the bug report.) 
>> >> * masakari-api failed to launch due to setting of watch_log_file and log_file >> https://bugs.launchpad.net/masakari/+bug/1740111 >> >> There is a possibility that this bug will affects all openstack components using oslo.log. >> (But, the processes working with uwsgi[1] wasn't affected when I tried to reproduce. >> I haven't solved the reason of this yet...) >> >> Could you help us? >> And, what should we do...? >> >> [1] >> e.g. nova-api, cinder-api, keystone... >> >> Best regards, > > The bug is in pyinotify. According to the git repo [1] that project > was last updated in June of 2015. I recommend we move off of > pyinotify entirely, since it appears to be unmaintained. > > If there is another library to do the same thing we should switch > to it (there seem to be lots of options [2]). If there is no viable > replacement or fork, we should deprecate that log watching feature > (and anything else for which we use pyinotify) and remove it ASAP. > > We'll need a volunteer to do the evaluation and update oslo.log. > > Doug > > [1] https://github.com/seb-m/pyinotify > [2] https://pypi.python.org/pypi?%3Aaction=search&term=inotify&submit=search Thank you for replying. I haven't researched deeply, but inotify looks good, because its "weight" in the PyPI search results is the largest and the following text is shown on its page. https://pypi.python.org/pypi/inotify/0.2.9 > This project is unrelated to the *PyInotify* project that existed prior to this one (this project began in 2015). That project is defunct and no longer available. So PyInotify is defunct and no longer available... -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp From gaetan at xeberon.net Tue Jan 9 09:25:56 2018 From: gaetan at xeberon.net (Gaetan) Date: Tue, 9 Jan 2018 10:25:56 +0100 Subject: [openstack-dev] [pbr] support v_version Message-ID: Hello I have submitted this patch ([1]) that adds support for v_version in PBR. Basically, I can tag v1.0.0 instead of 1.0.0 to release 1.0.0. However, after rework it appears PBR does not behave well, even though the unit tests pass: on a tag such as v1.0.0, the resulting package is named `-1.0.0.dev1`. Do you know where I need to hack PBR to fix it? Second point: to follow the logic of my change through, I would like to propose an optional way (in setup.cfg?) to **prevent** any tag without the 'v' prefix from being considered a valid version, i.e. a bare version tag like `1.0.0` would not be treated as a release. That way, on systems such as GitLab or GitHub: - repository owners "protect" tags matching the pattern "v*", i.e. all release tags such as "v1.0.0", ... cannot be pushed by anyone but the owners/masters - other developers can still push other tags for other purposes What do you think about this proposal? [1] https://review.openstack.org/#/c/531161/ Thanks Regards ----- Gaetan -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Jan 9 10:23:02 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 9 Jan 2018 11:23:02 +0100 Subject: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG In-Reply-To: References: Message-ID: <466b3ca5-af1c-89aa-d040-e9ac8385a9ae@openstack.org> Zhipeng Huang wrote: > [...]
> With the maturing of resource provider/placement feature landing in > OpenStack in recent release, and also in light of Kubernetes community > increasing attention to the similar effort, I want to propose to form a > Resource Management SIG as a contact point for OpenStack community to > communicate with Kubernetes Resource Management WG[0] and other related > SIGs. > [...] When ready, please propose a change to the governance-sigs repository, adding the proposed SIG to the sigs.yaml file: https://git.openstack.org/cgit/openstack/governance-sigs/tree/sigs.yaml -- Thierry Carrez (ttx) From cdent+os at anticdent.org Tue Jan 9 10:30:36 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 9 Jan 2018 10:30:36 +0000 (GMT) Subject: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG In-Reply-To: <11ce8607-0a59-401d-0605-c36c2a901cf9@gmail.com> References: <11ce8607-0a59-401d-0605-c36c2a901cf9@gmail.com> Message-ID: On Mon, 8 Jan 2018, Jay Pipes wrote: > I think having a bi-weekly cross-project (or even cross-ecosystem if we're > talking about OpenStack+k8s) status email reporting any big events in the > resource tracking world would be useful. As far as regular meetings for a > resource management SIG, I'm +0 on that. I prefer to have targeted topical > meetings over regular meetings. I agree, would much prefer to see more email and less meetings. It would be fantastic if we can get some cross pollination disucssion happening. A status email, especially one that was cross-ecosystem, would be great. Unfortunately I can't commit to doing that myself (the existing 2 a week I do is plenty) but hope someone will take it up. -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From sriharsha.basavapatna at broadcom.com Tue Jan 9 11:00:42 2018 From: sriharsha.basavapatna at broadcom.com (Sriharsha Basavapatna) Date: Tue, 9 Jan 2018 16:30:42 +0530 Subject: [openstack-dev] [os-vif] In-Reply-To: References: <2a6a1a7b-6eef-11d3-15cf-711d8fe1cfd3@suse.com> Message-ID: On Tue, Jan 9, 2018 at 2:20 PM, Sriharsha Basavapatna wrote: > Hi Andreas, > > On Tue, Jan 9, 2018 at 12:04 PM, Andreas Jaeger wrote: >> On 2018-01-09 07:00, Sriharsha Basavapatna wrote: >>> Hi, >>> >>> I've uploaded a patch for review: >>> https://review.openstack.org/#/c/531674/ >>> >>> This is the first time I'm submitting a patch on openstack. I'd like >> >> Welcome to OpenStack, Harsha. > > Thank you. > >> Please read >> https://docs.openstack.org/infra/manual/developers.html if you haven't. > > Ok, i'll read it. >> >> I see that your change fails the basic tests, you can run these locally >> as follows to check that your fixes will pass: >> >> tox -e pep8 >> tox -e py27 > > I was wondering if there's a way to catch these errors without having > to submit it for gerrit review. I fixed the ones that were reported > in patch-set-1; looks like there's some new ones in the second > patch-set. I'll run the above commands to verify the fix locally. > > Thanks, > -Harsha I installed python-pip and tox. 
But when I run "tox -e pep8", I'm seeing some errors: building 'netifaces' extension gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DNETIFACES_VERSION=0.10.6 -DHAVE_GETIFADDRS=1 -DHAVE_GETNAMEINFO=1 -DHAVE_NETASH_ASH_H=1 -DHAVE_NETATALK_AT_H=1 -DHAVE_NETAX25_AX25_H=1 -DHAVE_NETECONET_EC_H=1 -DHAVE_NETIPX_IPX_H=1 -DHAVE_NETPACKET_PACKET_H=1 -DHAVE_LINUX_IRDA_H=1 -DHAVE_LINUX_ATM_H=1 -DHAVE_LINUX_LLC_H=1 -DHAVE_LINUX_TIPC_H=1 -DHAVE_LINUX_DN_H=1 -DHAVE_SOCKADDR_AT=1 -DHAVE_SOCKADDR_AX25=1 -DHAVE_SOCKADDR_IN=1 -DHAVE_SOCKADDR_IN6=1 -DHAVE_SOCKADDR_IPX=1 -DHAVE_SOCKADDR_UN=1 -DHAVE_SOCKADDR_ASH=1 -DHAVE_SOCKADDR_EC=1 -DHAVE_SOCKADDR_LL=1 -DHAVE_SOCKADDR_ATMPVC=1 -DHAVE_SOCKADDR_ATMSVC=1 -DHAVE_SOCKADDR_DN=1 -DHAVE_SOCKADDR_IRDA=1 -DHAVE_SOCKADDR_LLC=1 -DHAVE_PF_NETLINK=1 -I/usr/include/python2.7 -c netifaces.c -o build/temp.linux-x86_64-2.7/netifaces.o netifaces.c:1:20: fatal error: Python.h: No such file or directory #include ^ compilation terminated. error: command 'gcc' failed with exit status 1 ---------------------------------------- Command "/home/harshab/os-vif/.tox/pep8/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-OibnHO/netifaces/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-3Hu__1-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/harshab/os-vif/.tox/pep8/include/site/python2.7/netifaces" failed with error code 1 in /tmp/pip-build-OibnHO/netifaces/ ERROR: could not install deps [-r/home/harshab/os-vif/requirements.txt, -r/home/harshab/os-vif/test-requirements.txt]; v = InvocationError('/home/harshab/os-vif/.tox/pep8/bin/pip install -U -r/home/harshab/os-vif/requirements.txt -r/home/harshab/os-vif/test-requirements.txt (see /home/harshab/os-vif/.tox/pep8/log/pep8-1.log)', 1) ___________________________________ summary ____________________________________ ERROR: pep8: could not install deps [-r/home/harshab/os-vif/requirements.txt, -r/home/harshab/os-vif/test-requirements.txt]; v = InvocationError('/home/harshab/os-vif/.tox/pep8/bin/pip install -U -r/home/harshab/os-vif/requirements.txt -r/home/harshab/os-vif/test-requirements.txt (see /home/harshab/os-vif/.tox/pep8/log/pep8-1.log)', 1) Thanks, -Harsha > >> >> Andreas >> >>> to add code reviewers on this patch. I'd appreciate if you could point >>> me to any guidelines on how to pick reviewers for a given project >>> (os-vif library in this case). >>> >>> Thanks, >>> -Harsha >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> -- >> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi >> SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany >> GF: Felix Imendörffer, Jane Smithard, Graham Norton, >> HRB 21284 (AG Nürnberg) >> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 >> From strigazi at gmail.com Tue Jan 9 12:15:06 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Tue, 9 Jan 2018 13:15:06 +0100 Subject: [openstack-dev] [magnum] fedora atomic image with kubernetes with a CRI = frakti or clear containers In-Reply-To: References: Message-ID: Hi Greg, You can try to build an image with this process [1]. I haven't used for some time since we rely on the upstream image. Another option that I would like to investigate is to build a system container with frakti or clear container similar to these container images [2] [3] [4]. Then you can install that container on the atomic host. We could discuss this during the magnum meeting today at 16h00 UTC in #openstack-meeting-alt [5]. Cheers, Spyros [1] http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/image/fedora-atomic/README.rst [2] https://github.com/kubernetes-incubator/cri-o/tree/master/contrib/system_containers/fedora [3] https://github.com/projectatomic/atomic-system-containers/tree/master/docker-centos [4] https://gitlab.cern.ch/cloud/atomic-system-containers/tree/cern-qa/docker-centos [5] https://wiki.openstack.org/wiki/Meetings/Containers On 8 January 2018 at 16:42, Waines, Greg wrote: > Hey there, > > > > I am currently running magnum with the fedora-atomic image that is > installed as part of the devstack installation of magnum. > > This fedora-atomic image has kubernetes with a CRI of the standard docker > container. > > > > Where can i find (or how do i build) a fedora-atomic image with kubernetes > and either frakti or clear containers (runV) as the CRI ? > > > > Greg. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Jan 9 12:23:36 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 9 Jan 2018 13:23:36 +0100 Subject: [openstack-dev] [all] Longer development cycles - a temporary conclusion Message-ID: Hi everyone, Last month I started a thread to discuss our rhythm and the possibility of switching to one-year development cycles, starting with Rocky: http://lists.openstack.org/pipermail/openstack-dev/2017-December/125473.html The thread quickly exploded in various directions, objections and tangents, which I tried to summarize here: http://lists.openstack.org/pipermail/openstack-dev/2017-December/125688.html At the core, I think the thread exposed a fundamental tension between upstream and downstream OpenStack on that topic. On the upstream side, we automated most of the process so the cost of releasing is limited. The cycle boilerplate activities are a reasonable cost to pay to get releases out often enough to get tight feedback loops and maintain pace and quality. The downstream cost, however, is still significant: marketing the new release, packaging it, upgrading to it or organizing events around it is not getting simpler and in some cases resources are getting more limited. So while the concerns (especially downstream) are real, the proposed solution is not a straight and consensual answer to them. 
We need to look at alternative solutions to to reduce release downstream cost. We need to have a wider look at how OpenStack is (or should be) consumed, and discuss cycle length in relation to other efforts in this area (fast-forward upgrades, past-EOL maintenance, support for OpenStack deployments on mixed versions of components...). We can't really have those complex discussions and the TC make any change in time for the Rocky cycle, which will start in a couple of weeks. Those discussions will happen in Dublin (PTG) and continue in Vancouver (Forum at Summit). In the mean time, Rocky will be a 6-month development cycle, as proposed in https://review.openstack.org/#/c/528772/ As a sidenote, Jay pointed us to a similar discussion[1] around releasing less often, which is currently happening in the Kubernetes community. While some of the context is different (especially their releases "upstream" cost is still pretty high), a lot of their concerns overlap with ours, so it makes an interesting complement read: [1] https://groups.google.com/forum/#!msg/kubernetes-dev/nvEMOYKF8Kk/n3Rjd2bMCAAJ Looking forward to discussing this topic more with interested people in Dublin. -- Thierry Carrez (ttx) From tenobreg at redhat.com Tue Jan 9 12:30:27 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Tue, 09 Jan 2018 12:30:27 +0000 Subject: [openstack-dev] [sahara] Sahara Rocky PTG Message-ID: Hi Saharans and interested folks, We are about getting close to the Rocky PTG and the sooner we start putting our minds together to make the best of the week the better. I started an etherpad where we should add our topic ideas and I will later on organize a schedule based on what we have there. Lets get as many idea as possible asap so we have time to prepare to them all. Thanks in advance. -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Jan 9 13:29:17 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 9 Jan 2018 13:29:17 +0000 (GMT) Subject: [openstack-dev] [tc] [all] TC Report 18-02 Message-ID: (On the blog: https://anticdent.org/tc-report-18-02.html ) Welcome to the first normal TC Report of 2018. Last week I did a [review of much of 2017](https://anticdent.org/tc-report-2017-in-review.html) but there was little to report from the first two days of the year, so there is no TC Report 18-01. Since then there's been a bit of TC business, mostly with people working to get themselves resituated and organized for the upcoming year. ## Upgrades On Thursday there was some effort to recall the salient points from the giant [longer development cycle thread](http://lists.openstack.org/pipermail/openstack-dev/2017-December/thread.html#125473). The consensus is that there are some issues to discuss and resolve, but lengthening the cycle does not address them. One standout issue from the thread was managing upgrades. There was an effort [to discuss why they are hard](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-04.log.html#t2018-01-04T15:11:56), that, in typical fashion, branched broadly to "what are the reasons they matter". There's plenty of awareness that one size is not going to fit all, but _not_ plenty of awareness of what all the sizes are. 
In part this is because we were unable to stick to any single line of analysis before branching. We need more input and volunteers to help with the work that results from any conclusions. A few different people have made plans to provide summaries of the development cycle thread or to extract a list of issues so they are not forgotten. Thierry did a [TL;DR back in December](http://lists.openstack.org/pipermail/openstack-dev/2017-December/125688.html) and has written a [temporary conclusion](http://lists.openstack.org/pipermail/openstack-dev/2018-January/126080.html) today. ## Projects Publishing to PyPI On [Monday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-08.log.html#t2018-01-08T15:05:41) the topic of publishing the service projects to PyPI was brought back up. Previous discussion had happened [in email](http://lists.openstack.org/pipermail/openstack-dev/2017-November/124676.html). There's general agreement that this would be a good thing assuming two issues can be managed: * There's a naming conflict with an existing project on PyPI called [keystone](https://pypi.python.org/pypi/Keystone). It appears to be stalled but there's no easy process for removing a project. One option is to make the service distributions have names such as openstack-keystone, but they may have cascading effects downstream. * We assume (hope?) that there will be very few people doing `pip install nova` (or any other service) that expect a working system. The [release team is working on it](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-08.log.html#t2018-01-08T15:38:38). ## Rocky Goals The main topic from [this morning's office hour](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-09.log.html#t2018-01-09T09:18:33) was establishing some [OpenStack-wide goals](https://governance.openstack.org/tc/goals/index.html) for the next cycle, Rocky. Today there is only one proposed goal, [Migrating to Storyboard](https://review.openstack.org/#/c/513875/), but it has a few issues which leave it without the vast swell of support a good goal ought to have. The [community goals](https://etherpad.openstack.org/p/community-goals) etherpad has some suggestions, and based on the discussion in IRC a couple more were added. If you have ideas please propose them in gerrit, post some email, or bring them to `#openstack-tc` for discussion. ## Board Individual Directors Election This week hosts the election for the OpenStack Foundation Board Individual Directors. Monty has [a good tweet storm](https://twitter.com/e_monty/status/948911657715159040) on why voting (if you can) for these people matters. If you are eligible ("joined the OpenStack Foundation as an Individual Member by July 17, 2017") you should have received a ballot on Monday, the 8th of January. -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From mbooth at redhat.com Tue Jan 9 15:28:41 2018 From: mbooth at redhat.com (Matthew Booth) Date: Tue, 9 Jan 2018 15:28:41 +0000 Subject: [openstack-dev] [nova] Local disk serial numbers series for reviewers: update 9th Jan Message-ID: In summary, the patch series is here: https://review.openstack.org/#/q/status:open+project: openstack/nova+branch:master+topic:bp/local-disk-serial-numbers The bottom 3 patches, which add BDM.uuid have landed. The next 3 currently have a single +2. Since I last posted I have found and fixed a problem in swap_volume, which added 2 more patches to the series. 
There are currently 13 outstanding patches in the series. The following 6 patches are the 'crux' patches. The others in the series are related fixes/cleanups (mostly renaming things and fixing tests) which I've moved into separate patches to reduce noise. Add DriverLocalImageBlockDevice: https://review.openstack.org/#/c/526347/6 Add local_root to block_device_info: https://review.openstack.org/#/c/529029/6 Pass DriverBlockDevice to driver.attach_volume https://review.openstack.org/#/c/528363/ Expose volume host type and path independent of libvirt config https://review.openstack.org/#/c/530786/ Don't generate fake disk_info in swap_volume https://review.openstack.org/#/c/530787/ Local disk serial numbers for the libvirt driver https://review.openstack.org/#/c/529380/ Thanks, Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Tue Jan 9 15:29:07 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Tue, 9 Jan 2018 15:29:07 +0000 Subject: [openstack-dev] [vitrage] rules in vitrage_aggregated_state() In-Reply-To: References: Message-ID: Hi, I agree that the code is confusing… This is part of a change that was made in order to support default states for static entities. For example, in the static configuration yaml file you can add entities of types ‘switch’ and ‘br-ex’. In the past, in order to support states for these new types, you needed to add switch.yaml and br-ex.yaml under /etc/vitrage/datasources_values, which you would most likely copy&paste from another datasource. Now, we have under /etc/vitrage/datasources_values a default.yaml file that is used for all static entities. Back to the code, I believe this is the logic: · If the datasource is part of ‘types’ (as defined in vitrage.conf) and has states configuration – use it. This is the normal behavior. · If the datasource is not part of ‘types’, we understand that it was defined in a static configuration file. Use the default states configuration. I assume that it is somehow handled in the first part of the if statement (I’m not so familiar with that code) · If neither is true – it means that the datasource is “real” and not static, and was defined in vitrage.conf types. And it also means that its states configuration is missing, so the state is UNDEFINED. And to your questions: 1. the data source is not defined -> the default states should be used 2. data source defined but state config not exist -> UNDEFINED state 3. data source defined, state config exist but the state is not found. -> I believe that somewhere in the first part of the if statement you will get UNDEFINED Hope that’s more clear now. It might be a good idea to add some comments to that function… Best Regards, Ifat. From: "Yujun Zhang (ZTE)" Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Tuesday, 9 January 2018 at 8:34 To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [vitrage] rules in vitrage_aggregated_state() Forgot to paste the link to the related code: https://git.openstack.org/cgit/openstack/vitrage/tree/vitrage/entity_graph/mappings/datasource_info_mapper.py#n61 On Tue, Jan 9, 2018 at 2:34 PM Yujun Zhang (ZTE) > wrote: Hi root causers I have been inspecting the code about aggregated state recently and have a question regarding the rules. The "not" operator in the if clause confuses me. 
If it is not a configured data source, how do we apply the aggregation rules? It seems this is handled in else clause. if datasource_name in self.datasources_state_confs or \ datasource_name not in self.conf.datasources.types: ... else: self.category_normalizer[vitrage_category].set_aggregated_value( new_vertex, self.UNDEFINED_DATASOURCE) self.category_normalizer[vitrage_category].set_operational_value( new_vertex, self.UNDEFINED_DATASOURCE) There are some test case describing the expected behavior. But I couldn't understand the design philosophy behind it. What is expected when 1. the data source is not defined 2. data source defined but state config not exist 3. data source defined, state config exist but the state is not found. Could somebody shed some light on it? -- Yujun Zhang -- Yujun Zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Jan 9 16:11:10 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 09 Jan 2018 11:11:10 -0500 Subject: [openstack-dev] [oslo][oslo.log]Re: Error will be occurred if watch_log_file option is true In-Reply-To: <165d1214-d0af-b634-6a29-c3e3afe52797@po.ntt-tx.co.jp> References: <1515074711-sup-5593@lrrr.local> <165d1214-d0af-b634-6a29-c3e3afe52797@po.ntt-tx.co.jp> Message-ID: <1515514211-sup-4244@lrrr.local> Excerpts from Rikimaru Honjo's message of 2018-01-09 18:11:09 +0900: > Hello, > > On 2018/01/04 23:12, Doug Hellmann wrote: > > Excerpts from Rikimaru Honjo's message of 2018-01-04 18:22:26 +0900: > >> Hello, > >> > >> The below bug was reported in Masakari's Launchpad. > >> I think that this bug was caused by oslo.log. > >> (And, the root cause is a bug of pyinotify using by oslo.log. The detail is > >> written in the bug report.) > >> > >> * masakari-api failed to launch due to setting of watch_log_file and log_file > >> https://bugs.launchpad.net/masakari/+bug/1740111 > >> > >> There is a possibility that this bug will affects all openstack components using oslo.log. > >> (But, the processes working with uwsgi[1] wasn't affected when I tried to reproduce. > >> I haven't solved the reason of this yet...) > >> > >> Could you help us? > >> And, what should we do...? > >> > >> [1] > >> e.g. nova-api, cinder-api, keystone... > >> > >> Best regards, > > > > The bug is in pyinotify. According to the git repo [1] that project > > was last updated in June of 2015. I recommend we move off of > > pyinotify entirely, since it appears to be unmaintained. > > > > If there is another library to do the same thing we should switch > > to it (there seem to be lots of options [2]). If there is no viable > > replacement or fork, we should deprecate that log watching feature > > (and anything else for which we use pyinotify) and remove it ASAP. > > > > We'll need a volunteer to do the evaluation and update oslo.log. > > > > Doug > > > > [1] https://github.com/seb-m/pyinotify > > [2] https://pypi.python.org/pypi?%3Aaction=search&term=inotify&submit=search > Thank you for replying. > > I haven't deeply researched, but inotify looks good. > Because "weight" of inotify is the largest, and following text is described. > > https://pypi.python.org/pypi/inotify/0.2.9 > > This project is unrelated to the *PyInotify* project that existed prior to this one (this project began in 2015). That project is defunct and no longer available. > PyInotify is defunct and no longer available... > The inotify package seems like a good candidate to replace pyinotify. 
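For reference, a rough and untested sketch of a file-watch loop using that package (API names taken from its PyPI documentation; treat the details, including the byte-string paths its 0.2.x releases expect, as assumptions):

    import inotify.adapters

    def watch(path):
        watcher = inotify.adapters.Inotify()
        watcher.add_watch(path)
        for event in watcher.event_gen():
            if event is None:  # event_gen() periodically yields None
                continue
            header, type_names, watch_path, filename = event
            if 'IN_MODIFY' in type_names:
                # a log watcher would reopen/re-read the file here
                print('modified:', watch_path, filename)

    watch(b'/var/log/masakari')  # example path only
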
Have you looked at how hard it would be to change oslo.log? If so, does using the newer library eliminate the bug you had? Doug From ifat.afek at nokia.com Tue Jan 9 16:19:02 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Tue, 9 Jan 2018 16:19:02 +0000 Subject: [openstack-dev] [congress] generic push driver In-Reply-To: References: Message-ID: <3137FA98-08C9-4B84-ACB2-7E035FFC414D@nokia.com> From: Eric K Date: Tuesday, 9 January 2018 at 0:43 Hi Ifat, From: "Afek, Ifat (Nokia - IL/Kfar Sava)" > Date: Sunday, January 7, 2018 at 4:00 AM Hi Eric, I have two questions: 1. An alarm is usually raised on a resource, and in Vitrage we can send you the details of that resource. Is there a way in Congress for the alarm to reference a resource that exists in another table? And what if the resource does not exist in Congress? First, the columns I chose are just a minimal sample to illustrate the generic nature of the driver. In use with vitrage, we would probably also want to include columns such as `resource_id`. Does that address the need to reference a resource? That resource referenced by ID may or may not exist in another part of Congress. It would be the job of the policy to resolve references when taking appropriate actions. If referential integrity is needed, additional policy rules can be specified to catch breakage. [Ifat] Ok, sounds good. This brings up a related question I had about vitrage: Looking at the vertex properties listed here: https://github.com/openstack/vitrage/blob/master/vitrage/common/constants.py#L17 Where can I find more information about the type and content of data in each property? Example: - is the `resource` property an ID string or a Python object reference? [Ifat] Most of the properties are key-value strings on the vertex in the entity graph. The RESOURCE is a special property that is added on an alarm for the use of the notifier. It holds the entire resource object, so the notifier could use its properties when sending notifications. - what does the property `is_real_vitrage_id` represent? [Ifat] It represents old code that should be deleted ;-) please ignore it - what is the difference between `resource_id` and `vitrage_resource_id` ? [Ifat] resource_id is the id of the resource as retrieved by the datasource, e.g. the Nova instance id. vitrage_id is the id of the resource inside Vitrage. This is the id that Vitrage uses to identify its resources. For a Nova instance, vitrage_id will be different from its resource_id. vitrage_resource_id is used only on alarms, and holds the vitrage_id of the resource of the alarm. 2. Do you also plan to support updateRows? This can be useful for alarm state changes. Are you thinking about updating an entire row or updating a specific field of a row? That is, update row {"id":"1-1", "name":"name1", "state":"active", "severity":1} to become {"id":"1-1", "name":"name1", "state":"active", "severity":100} Vs Update the severity field of row with id "1-1" to severity 100. Both could be supported, but the second one is more complex to support efficiently. [Ifat] It's really up to you, I think both would satisfy the use case. The Congress notifier will be written based on your selected implementation. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zhipengh512 at gmail.com Tue Jan 9 16:51:43 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 10 Jan 2018 00:51:43 +0800 Subject: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG In-Reply-To: References: <11ce8607-0a59-401d-0605-c36c2a901cf9@gmail.com> Message-ID: i think I could do it, but I gotta rely on you guys to attend the Resource Management WG meeting since its time is really bad for us in APAC timezone :P On Tue, Jan 9, 2018 at 6:30 PM, Chris Dent wrote: > On Mon, 8 Jan 2018, Jay Pipes wrote: > > I think having a bi-weekly cross-project (or even cross-ecosystem if we're >> talking about OpenStack+k8s) status email reporting any big events in the >> resource tracking world would be useful. As far as regular meetings for a >> resource management SIG, I'm +0 on that. I prefer to have targeted topical >> meetings over regular meetings. >> > > I agree, would much prefer to see more email and less meetings. It > would be fantastic if we can get some cross pollination disucssion > happening. > > A status email, especially one that was cross-ecosystem, would be > great. Unfortunately I can't commit to doing that myself (the > existing 2 a week I do is plenty) but hope someone will take it up. > > -- > Chris Dent (⊙_⊙') https://anticdent.org/ > freenode: cdent tw: @anticdent > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfinucan at redhat.com Tue Jan 9 17:16:00 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 09 Jan 2018 17:16:00 +0000 Subject: [openstack-dev] [os-vif] In-Reply-To: References: <2a6a1a7b-6eef-11d3-15cf-711d8fe1cfd3@suse.com> Message-ID: <1515518160.27468.19.camel@redhat.com> On Tue, 2018-01-09 at 16:30 +0530, Sriharsha Basavapatna wrote: > On Tue, Jan 9, 2018 at 2:20 PM, Sriharsha Basavapatna > wrote: > > Hi Andreas, > > > > On Tue, Jan 9, 2018 at 12:04 PM, Andreas Jaeger > > wrote: > > > On 2018-01-09 07:00, Sriharsha Basavapatna wrote: > > > > Hi, > > > > > > > > I've uploaded a patch for review: > > > > https://review.openstack.org/#/c/531674/ > > > > > > > > This is the first time I'm submitting a patch on openstack. I'd > > > > like > > > > > > Welcome to OpenStack, Harsha. > > > > Thank you. > > > > > Please read > > > https://docs.openstack.org/infra/manual/developers.html if you > > > haven't. > > > > Ok, i'll read it. > > > > > > I see that your change fails the basic tests, you can run these > > > locally > > > as follows to check that your fixes will pass: > > > > > > tox -e pep8 > > > tox -e py27 > > > > I was wondering if there's a way to catch these errors without > > having > > to submit it for gerrit review. I fixed the ones that were > > reported > > in patch-set-1; looks like there's some new ones in the second > > patch-set. 
I'll run the above commands to verify the fix locally.
> >
> > Thanks,
> > -Harsha
>
> I installed python-pip and tox. But when I run "tox -e pep8", I'm
> seeing some errors:
>
> building 'netifaces' extension
> gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall
> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
> -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall
> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
> -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DNETIFACES_VERSION=0.10.6
> -DHAVE_GETIFADDRS=1 -DHAVE_GETNAMEINFO=1 -DHAVE_NETASH_ASH_H=1
> -DHAVE_NETATALK_AT_H=1 -DHAVE_NETAX25_AX25_H=1
> -DHAVE_NETECONET_EC_H=1
> -DHAVE_NETIPX_IPX_H=1 -DHAVE_NETPACKET_PACKET_H=1
> -DHAVE_LINUX_IRDA_H=1 -DHAVE_LINUX_ATM_H=1 -DHAVE_LINUX_LLC_H=1
> -DHAVE_LINUX_TIPC_H=1 -DHAVE_LINUX_DN_H=1 -DHAVE_SOCKADDR_AT=1
> -DHAVE_SOCKADDR_AX25=1 -DHAVE_SOCKADDR_IN=1 -DHAVE_SOCKADDR_IN6=1
> -DHAVE_SOCKADDR_IPX=1 -DHAVE_SOCKADDR_UN=1 -DHAVE_SOCKADDR_ASH=1
> -DHAVE_SOCKADDR_EC=1 -DHAVE_SOCKADDR_LL=1 -DHAVE_SOCKADDR_ATMPVC=1
> -DHAVE_SOCKADDR_ATMSVC=1 -DHAVE_SOCKADDR_DN=1 -DHAVE_SOCKADDR_IRDA=1
> -DHAVE_SOCKADDR_LLC=1 -DHAVE_PF_NETLINK=1 -I/usr/include/python2.7 -c
> netifaces.c -o build/temp.linux-x86_64-2.7/netifaces.o
> netifaces.c:1:20: fatal error: Python.h: No such file or directory
> #include <Python.h>
> ^
> compilation terminated.
> error: command 'gcc' failed with exit status 1
>
> ----------------------------------------
> Command "/home/harshab/os-vif/.tox/pep8/bin/python2 -u -c "import
> setuptools, tokenize;__file__='/tmp/pip-build-
> OibnHO/netifaces/setup.py';f=getattr(tokenize,
> 'open', open)(__file__);code=f.read().replace('\r\n',
> '\n');f.close();exec(compile(code, __file__, 'exec'))" install
> --record /tmp/pip-3Hu__1-record/install-record.txt
> --single-version-externally-managed --compile --install-headers
> /home/harshab/os-vif/.tox/pep8/include/site/python2.7/netifaces"
> failed with error code 1 in /tmp/pip-build-OibnHO/netifaces/
>
> ERROR: could not install deps
> [-r/home/harshab/os-vif/requirements.txt,
> -r/home/harshab/os-vif/test-requirements.txt]; v =
> InvocationError('/home/harshab/os-vif/.tox/pep8/bin/pip install -U
> -r/home/harshab/os-vif/requirements.txt
> -r/home/harshab/os-vif/test-requirements.txt (see
> /home/harshab/os-vif/.tox/pep8/log/pep8-1.log)', 1)
> ___________________________________ summary
> ____________________________________
> ERROR: pep8: could not install deps
> [-r/home/harshab/os-vif/requirements.txt,
> -r/home/harshab/os-vif/test-requirements.txt]; v =
> InvocationError('/home/harshab/os-vif/.tox/pep8/bin/pip install -U
> -r/home/harshab/os-vif/requirements.txt
> -r/home/harshab/os-vif/test-requirements.txt (see
> /home/harshab/os-vif/.tox/pep8/log/pep8-1.log)', 1)
>
> Thanks,
> -Harsha

That's happening because the 'pep8' target is installing all the requirements for the project in a virtualenv, and one of them needs the Python development headers. What Linux distro are you using? On Fedora you can fix this like so:

    sudo dnf install python-devel

On Ubuntu, I think it's something like this:

    sudo apt-get install python-dev
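One other thing worth knowing: tox caches the virtualenv it builds for each target, so depending on how far the failed run got you may need to force it to recreate the environment after installing the headers. The -r flag does that:

    tox -r -e pep8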
Stephen

From akapoor87 at gmail.com Tue Jan 9 17:57:25 2018
From: akapoor87 at gmail.com (Akshay Kapoor)
Date: Tue, 9 Jan 2018 23:27:25 +0530
Subject: [openstack-dev] Openstack CLI issues
Message-ID:

Hello Everyone !

I am facing some issues with the Openstack CLI.

Scenario:

I have a domain admin user account (say 'A')

I want to assign this user as an 'admin' to two projects X and Y in the same domain.

When I trigger the command 'openstack --insecure role add --user "$OS_USERNAME" --project "X" admin', I get the following error:

The request you have made requires authentication. (HTTP 401)

How can I add this admin user as an admin to two different tenants (where this admin account has no role previously)? Once this role assignment is done, I want to set up RBAC access between the two projects 'X' and 'Y'.

Any help would be really appreciated. Thanks

Best Regards,
Akshay
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From emilien at redhat.com Tue Jan 9 18:37:50 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 9 Jan 2018 10:37:50 -0800
Subject: [openstack-dev] [all] [tc] Community Goals for Rocky
In-Reply-To: References: Message-ID:

As promised, let's continue the discussion and move things forward. This morning Thierry brought the discussion during the TC office hour (that I couldn't attend due to timezone):
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2018-01-09T09:18:33

Some outputs:

- One goal has been proposed so far. Right now, we only have one goal proposal: Storyboard Migration. There are some concerns about the ability to achieve this goal in 6 months. At that point, we think it would be great to postpone the goal to the S cycle, continue the progress (kudos to Kendall) and find other goals for Rocky.

- We still have a good backlog of goals, we're just missing champions.
https://etherpad.openstack.org/p/community-goals
Chris brought up "pagination links in collection resources" in the api-wg guidelines theme. He said in the past this goal was more a "should" than a "must".
Thierry mentioned privsep migration (done in Nova and Zun). (action: ping mikal about it).
Thierry also brought up version discovery (proposed by Monty).
Flavio proposed mutable configuration, which might be very useful for operators.
He also mentioned that the IPv6 support goal shouldn't be that far from done, but we're currently lacking CI jobs that test IPv6 deployments (question for infra/QA: can we maybe document the gap so we can run some gate jobs on IPv6?) (personal note on that one: since TripleO & Puppet OpenStack CI already have IPv6 jobs, we can indeed be confident that it shouldn't be that hard to complete this goal in 6 months; I guess the work needs to happen in the projects' layouts).
Another interesting goal proposed by Thierry, also useful for operators, is to move more projects to the assert:supports-upgrade tag. Thierry said we are probably not that far from this goal, but the major gap is in testing.
Finally, another "simple" goal is to remove mox/mox3 (Flavio said most projects don't use it anymore already).
With that said, let's continue the discussion on these goals, see which ones can be actionable, and find champions.

- Flavio asked how it would be perceived if one cycle didn't have at least one community goal. Thierry said we could introduce multi-cycle goals (Storyboard might be a good candidate). Chris and Thierry thought that it would be a bad sign for our community to not have community goals during a cycle, a "loss of momentum" eventually.

Thanks for reading so far,

On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi wrote:
> On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi wrote:
> [...]
>> Suggestions are welcome: >> - on the mailing-list, in a new thread per goal [all] [tc] Proposing >> goal XYZ for Rocky >> - on Gerrit in openstack/governance like Kendall did. > > Just a fresh reminder about Rocky goals. > A few questions that we can ask ourselves: > > 1) What common challenges do we have? > > e.g. Some projects don't have mutable configuration or some projects > aren't tested against IPv6 clouds, etc. > > 2) Who is willing to drive a community goal (a.k.a. Champion)? > > note: a Champion is someone who volunteer to drive the goal, but > doesn't commit to write the code necessarily. The Champion will > communicate with projects PTLs about the goal, and make the liaison if > needed. > > The list of ideas for Community Goals is documented here: > https://etherpad.openstack.org/p/community-goals > > Please be involved and propose some ideas, I'm sure our community has > some common goals, right ? :-) > Thanks, and happy holidays. I'll follow-up in January of next year. > -- > Emilien Macchi -- Emilien Macchi From emilien at redhat.com Tue Jan 9 20:10:43 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 9 Jan 2018 12:10:43 -0800 Subject: [openstack-dev] The Weekly Owl - 4th Edition Message-ID: Note: this is the fourth edition of a weekly update of what happens in TripleO, with a little touch of fun. The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing. Any contributions and feedback are welcome. +---------------------------------+ | General announcements | +---------------------------------+ +--> Focus is on Queens-m3 (end in 2 weeks): stabilization. +--> New contributor: Sulaiman Radwan. Welcome here, have fun and let us know any question :-) +--> The team should start planning for Rocky, and prepare the specs / blueprints if needed. +------------------------------+ | Continuous Integration | +------------------------------+ +--> Rover is Gabriele and ruck is Arx. Please let them know any new CI issue. +--> Master promotion is 5 days, Pike is 3 days and Ocata is 4 days. +--> Sprint 6 is ongoing, major focus on TripleO CI data collection in grafana. +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://trello.com/b/U1ITy0cu/tripleo-ci-squad?menu=filter&filter=label:Sprint%206 +-------------+ | Upgrades | +-------------+ +--> Reviews are needed, please check the etherpads +--> Progress made on the Pike to Queens workflow. +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status and https://etherpad.openstack.org/p/tripleo-upgrade-squad-meeting +---------------+ | Containers | +---------------+ +--> Blog post from jistr "OpenShift Origin in TripleO" https://www.jistr.com/blog/2018-01-04-openshift-origin-in-tripleo. +--> Progress on containerized undercloud can be tracked here: https://etherpad.openstack.org/p/tripleo-queens-undercloud-containers +--> Ongoing work to containerize TripleO UI +--> Some work done on container-prepare-workflow pre-changes is ready for review +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +--------------+ | Integration | +--------------+ +--> Work that need review: Manila/CephNFS, Multiple Ceph clusters support and also some backports. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> Trying to finish roles management for TripleO-UI and for workflows +--> Cleaning up the TripleO-UI deps issue (npm 3 vs. 
npm 5)
+--> Beginning stages of bringing more testing/CI to UI
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---------------+
| Validations |
+---------------+
+--> Inventory code moved to tripleo-common
+--> Ansible lint script implemented and gating
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---------------+
| Networking |
+---------------+
+--> Progress on routed networks, Octavia, TLS everywhere support for ODL and NIC rendering config templates with Jinja2
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--------------+
| Workflows |
+--------------+
+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+-------------+
| Owl facts |
+-------------+
The Madagascar Red Owl is a medium-sized owl with no ear-tufts. It is also known as the Soumagnes Grass Owl or the Madagascar Grass Owl. Similar to a smallish Barn Owl, with an overall ochre-reddish to yellow-ochre colour. The upperparts have fine dark spots, which are larger towards the tail and on the wings. Underparts are similar with scattered, very fine dark spots. The facial disc is white, with a brownish tinge between the lower edge of the eyes and the base of the light grey bill. The rim of the facial disc is brown. The eyes are blackish. Feet are smoky-grey with greyish-brown claws.
(source: https://www.owlpages.com/owls/species.php?s=100)

Stay tuned!
--
Your fellow reporter, Emilien Macchi

From kennelson11 at gmail.com Tue Jan 9 20:30:06 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 09 Jan 2018 20:30:06 +0000
Subject: [openstack-dev] [First Contact] [SIG] Rocky PTG Planning
Message-ID:

Hello Everyone :)

I put us down for one day at the PTG and wanted to get a jump start on discussion planning. I created an etherpad[1] and wrote down some topics to get the ball rolling. Please feel free to expand on them if there are other details you feel we need to talk about or add new ones as you see fit.

Also, please add your name to the 'Planned Attendance' section if you are thinking of attending.

Thanks!

-Kendall (diablo_rojo)

[1] https://etherpad.openstack.org/p/FC_SIG_Rocky_PTG
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lbragstad at gmail.com Tue Jan 9 21:18:44 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Tue, 9 Jan 2018 15:18:44 -0600
Subject: [openstack-dev] Openstack CLI issues
In-Reply-To: References: Message-ID: <7b657012-6e22-22fa-bc6d-434d5d4258b8@gmail.com>

On 01/09/2018 11:57 AM, Akshay Kapoor wrote:
> Hello Everyone !
>
> I am facing some issues with the Openstack CLI
>
> Scenario:
>
> I have a domain admin user account (say 'A')
>
> I want to assign this user as an 'admin' to two projects X and Y in
> the same domain.
>
> When I trigger the command 'openstack --insecure role add --user
> "$OS_USERNAME" --project "X" admin' , I get the following error:
>
> The request you have made requires authentication. (HTTP 401)

Are you sure your credentials are right when authenticating? Based solely on the information provided there could be a couple of things happening. The first is that the credentials provided to make the call are incorrect. The second is that the user you're attempting to authenticate as to make the call doesn't have a role on the project keystoneauth is trying to get a scoped token for (which can be denoted using OS_PROJECT_NAME or the --os-project-name option). If you were able to get a token and use it to make the call, but didn't have the right permissions to assign roles to other users, you'd be seeing a 403 instead of a 401.

Those are just a couple of suggestions based on the information provided. If you have access to the keystone logs you should see warning or debug messages that might be more helpful (depending on the configuration). Keystone does provide an administrator account during the bootstrap process [0] which should have the proper role to do these operations according to the default policies.

[0] https://docs.openstack.org/keystone/latest/admin/identity-bootstrap.html
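For what it's worth, once you can authenticate as a user that already holds the admin role (the bootstrap admin, for example), the assignment and a quick sanity check would look roughly like this. Just a sketch, so adjust the domain and project names to your deployment:

    openstack role add --user A --user-domain mydomain \
        --project X --project-domain mydomain admin
    openstack role assignment list --user A --project X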
>
>
> How can I add this admin user as an admin to two different tenants
> (where this admin account has no role previously). Once this role
> assignment is done, I want to setup rbac access between two projects
> 'X' and 'Y'
>
> Any help would be really appreciated. Thanks
>
> Best Regards,
> Akshay
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From kumarmn at us.ibm.com Tue Jan 9 23:10:17 2018
From: kumarmn at us.ibm.com (Manoj Kumar)
Date: Tue, 9 Jan 2018 17:10:17 -0600
Subject: [openstack-dev] [trove] Changes to the Trove core team
In-Reply-To: References: Message-ID:

I would like to announce the following changes to the Trove core reviewers:

-amrith
+maciej.jozefczyk
+fanzhang

Amrith's stewardship of Trove and active contributions over the last several cycles would be missed dearly. Would like to welcome Fan and Maciej.

- Manoj
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From feilong at catalyst.net.nz Wed Jan 10 02:32:24 2018
From: feilong at catalyst.net.nz (Fei Long Wang)
Date: Wed, 10 Jan 2018 15:32:24 +1300
Subject: [openstack-dev] [trove] Changes to the Trove core team
In-Reply-To: References: Message-ID:

+1 and congrats!

On 10/01/18 12:10, Manoj Kumar wrote:
> I would like to announce the following changes to the Trove core
> reviewers:
>
> -amrith
> +maciej.jozefczyk
> +fanzhang
>
> Amrith's stewardship of Trove and active contributions over the last
> several cycles would be missed dearly.
> Would like to welcome Fan and Maciej.
> > - Manoj > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From sriharsha.basavapatna at broadcom.com Wed Jan 10 05:54:18 2018 From: sriharsha.basavapatna at broadcom.com (Sriharsha Basavapatna) Date: Wed, 10 Jan 2018 11:24:18 +0530 Subject: [openstack-dev] [os-vif] In-Reply-To: <1515518160.27468.19.camel@redhat.com> References: <2a6a1a7b-6eef-11d3-15cf-711d8fe1cfd3@suse.com> <1515518160.27468.19.camel@redhat.com> Message-ID: On Tue, Jan 9, 2018 at 10:46 PM, Stephen Finucane wrote: > On Tue, 2018-01-09 at 16:30 +0530, Sriharsha Basavapatna wrote: >> On Tue, Jan 9, 2018 at 2:20 PM, Sriharsha Basavapatna >> wrote: >> > Hi Andreas, >> > >> > On Tue, Jan 9, 2018 at 12:04 PM, Andreas Jaeger >> > wrote: >> > > On 2018-01-09 07:00, Sriharsha Basavapatna wrote: >> > > > Hi, >> > > > >> > > > I've uploaded a patch for review: >> > > > https://review.openstack.org/#/c/531674/ >> > > > >> > > > This is the first time I'm submitting a patch on openstack. I'd >> > > > like >> > > >> > > Welcome to OpenStack, Harsha. >> > >> > Thank you. >> > >> > > Please read >> > > https://docs.openstack.org/infra/manual/developers.html if you >> > > haven't. >> > >> > Ok, i'll read it. >> > > >> > > I see that your change fails the basic tests, you can run these >> > > locally >> > > as follows to check that your fixes will pass: >> > > >> > > tox -e pep8 >> > > tox -e py27 >> > >> > I was wondering if there's a way to catch these errors without >> > having >> > to submit it for gerrit review. I fixed the ones that were >> > reported >> > in patch-set-1; looks like there's some new ones in the second >> > patch-set. I'll run the above commands to verify the fix locally. >> > >> > Thanks, >> > -Harsha >> >> I installed python-pip and tox. 
But when I run "tox -e pep8", I'm >> seeing some errors: >> >> building 'netifaces' extension >> gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall >> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong >> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic >> -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall >> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong >> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic >> -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DNETIFACES_VERSION=0.10.6 >> -DHAVE_GETIFADDRS=1 -DHAVE_GETNAMEINFO=1 -DHAVE_NETASH_ASH_H=1 >> -DHAVE_NETATALK_AT_H=1 -DHAVE_NETAX25_AX25_H=1 >> -DHAVE_NETECONET_EC_H=1 >> -DHAVE_NETIPX_IPX_H=1 -DHAVE_NETPACKET_PACKET_H=1 >> -DHAVE_LINUX_IRDA_H=1 -DHAVE_LINUX_ATM_H=1 -DHAVE_LINUX_LLC_H=1 >> -DHAVE_LINUX_TIPC_H=1 -DHAVE_LINUX_DN_H=1 -DHAVE_SOCKADDR_AT=1 >> -DHAVE_SOCKADDR_AX25=1 -DHAVE_SOCKADDR_IN=1 -DHAVE_SOCKADDR_IN6=1 >> -DHAVE_SOCKADDR_IPX=1 -DHAVE_SOCKADDR_UN=1 -DHAVE_SOCKADDR_ASH=1 >> -DHAVE_SOCKADDR_EC=1 -DHAVE_SOCKADDR_LL=1 -DHAVE_SOCKADDR_ATMPVC=1 >> -DHAVE_SOCKADDR_ATMSVC=1 -DHAVE_SOCKADDR_DN=1 -DHAVE_SOCKADDR_IRDA=1 >> -DHAVE_SOCKADDR_LLC=1 -DHAVE_PF_NETLINK=1 -I/usr/include/python2.7 -c >> netifaces.c -o build/temp.linux-x86_64-2.7/netifaces.o >> netifaces.c:1:20: fatal error: Python.h: No such file or >> directory >> #include >> ^ >> compilation terminated. >> error: command 'gcc' failed with exit status 1 >> >> ---------------------------------------- >> Command "/home/harshab/os-vif/.tox/pep8/bin/python2 -u -c "import >> setuptools, tokenize;__file__='/tmp/pip-build- >> OibnHO/netifaces/setup.py';f=getattr(tokenize, >> 'open', open)(__file__);code=f.read().replace('\r\n', >> '\n');f.close();exec(compile(code, __file__, 'exec'))" install >> --record /tmp/pip-3Hu__1-record/install-record.txt >> --single-version-externally-managed --compile --install-headers >> /home/harshab/os-vif/.tox/pep8/include/site/python2.7/netifaces" >> failed with error code 1 in /tmp/pip-build-OibnHO/netifaces/ >> >> ERROR: could not install deps >> [-r/home/harshab/os-vif/requirements.txt, >> -r/home/harshab/os-vif/test-requirements.txt]; v = >> InvocationError('/home/harshab/os-vif/.tox/pep8/bin/pip install -U >> -r/home/harshab/os-vif/requirements.txt >> -r/home/harshab/os-vif/test-requirements.txt (see >> /home/harshab/os-vif/.tox/pep8/log/pep8-1.log)', 1) >> ___________________________________ summary >> ____________________________________ >> ERROR: pep8: could not install deps >> [-r/home/harshab/os-vif/requirements.txt, >> -r/home/harshab/os-vif/test-requirements.txt]; v = >> InvocationError('/home/harshab/os-vif/.tox/pep8/bin/pip install -U >> -r/home/harshab/os-vif/requirements.txt >> -r/home/harshab/os-vif/test-requirements.txt (see >> /home/harshab/os-vif/.tox/pep8/log/pep8-1.log)', 1) >> >> Thanks, >> -Harsha > > That's happening because the 'pep8' target is installing all the > requirements for the project in a virtualenv, and one of them needs > Python development headers. What Linux distro are you using? On Fedora > you can fix this like so: > > sudo dnf install python-devel Thanks Stephen, I'm using RHEL and 'yum install python-devel' resolved it. 
-Harsha > > On Ubuntu, I think it's something like this: > > sudo apt-get install python-dev > > Stephen > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From anlin.kong at gmail.com Wed Jan 10 06:40:00 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 10 Jan 2018 19:40:00 +1300 Subject: [openstack-dev] [kubernetes-python-client] Failed to get service list Message-ID: I submitted an issue in github[1] the other day but didn't get any response, try my luck to attract attention here in case someone else has the same problem or already has a solution I didn't know, or hopefully, I missed something. The problem is when I want to get service list(the result should be an empty list), but I met with the following exception: 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server File >> "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", >> line 12951, in list_namespaced_service > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server (data) = >> self.list_namespaced_service_with_http_info(namespace, **kwargs) > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server File >> "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", >> line 13054, in list_namespaced_service_with_http_info > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server >> collection_formats=collection_formats) > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server File >> "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", >> line 321, in call_api > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server >> _return_http_data_only, collection_formats, _preload_content, >> _request_timeout) > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server File >> "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", >> line 163, in __call_api > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server >> return_data = self.deserialize(response_data, response_type) > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server File >> "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", >> line 236, in deserialize > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server return >> self.__deserialize(data, response_type) > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server File >> "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", >> line 276, in __deserialize > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server return >> self.__deserialize_model(data, klass) > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server File >> "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", >> line 622, in __deserialize_model > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server instance >> = klass(**kwargs) > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server File >> "/usr/local/lib/python2.7/dist-packages/kubernetes/client/models/v1_service_list.py", >> line 60, in __init__ > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server >> self.items = items > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server File >> "/usr/local/lib/python2.7/dist-packages/kubernetes/client/models/v1_service_list.py", >> line 
110, in items > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server raise >> ValueError("Invalid value for `items`, must not be `None`") > > 2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server ValueError: >> Invalid value for `items`, must not be `None` > > [1]: https://github.com/kubernetes-incubator/client-python/issues/424 Cheers, Lingxian Kong (Larry) -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Wed Jan 10 08:33:09 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 10 Jan 2018 16:33:09 +0800 Subject: [openstack-dev] [acceleration]Cyborg Team Weekly Meeting 2018.01.10 Message-ID: Hi Team, We will have our regular team meeting today starting UTC1500 at #openstack-cyborg as usual. The main agenda is to go over action items from last week's meeting and see what we could close now and what remains challenging. -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkovar at redhat.com Wed Jan 10 11:05:22 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 10 Jan 2018 12:05:22 +0100 Subject: [openstack-dev] [docs] Documentation meeting today Message-ID: <20180110120522.0ebb310b091537c7015534c6@redhat.com> Hi all, The docs meeting will continue today at 16:00 UTC in #openstack-doc, as scheduled. For more details, see the meeting page: https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting Cheers, pk From lijie at unitedstack.com Wed Jan 10 12:16:03 2018 From: lijie at unitedstack.com (=?utf-8?B?5p2O5p2w?=) Date: Wed, 10 Jan 2018 20:16:03 +0800 Subject: [openstack-dev] [nova] about rebuild and rescue instance booted from volume Message-ID: Hi,all This is the spec about rebuild and rescue a instance booted from volume, anyone who is interested in booted from volume can help to review this. Any suggestion is welcome. The link is here. Re:the rebuild spec:https://review.openstack.org/#/c/532407/ the rescue spec:https://review.openstack.org/#/c/532410/ Best Regards Lijie -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Jan 10 13:00:39 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 10 Jan 2018 07:00:39 -0600 Subject: [openstack-dev] Retiring Evoque Projects Message-ID: <20180110130039.GA25100@sm-xps> While doing some cleanup, I noticed the Evoque projects (openstack/evoque and openstack/evoque-ui) have not had any activity since the beginning of 2016, with the team's last official meeting held in December of 2015. I contacted the core team for this project, and luckily they were very responsive in letting me know that this project is indeed no longer active and should probably be retired. Unless I hear otherwise soon, I will be starting the process to retire this project and clean up things like its zuul and gerrit configuration. 
Thanks,
Sean McGinnis (smcginnis)

From thierry at openstack.org Wed Jan 10 13:48:34 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 10 Jan 2018 14:48:34 +0100
Subject: [openstack-dev] [ptg] Post-lunch presentation(s)
Message-ID:

Hi everyone,

One complaint that was expressed in past PTG feedback session(s) was the lack of situational awareness and a missed opportunity for "global" communication at the event. To address that, in Dublin we'd like to use the end of the lunch break for general communications that could be interesting to OpenStack upstream developers and project team members. So far we only used that time for communicating housekeeping items; the idea would be to take it a step further.

The idea is not necessarily to find a presentation for every day -- but if we find content that is generally useful and can be consumed while people start their digestion process, then we can use one of those slots for that. Interesting topics include general guidance to make the most of the PTG week (good Monday content), development tricks, code review etiquette, new library features you should adopt, lightning talks (good Friday content)... We'd likely keep the slot under 20min.

If you have ideas, please fill in https://etherpad.openstack.org/p/dublin-PTG-postlunch -- in a few weeks the TC will review suggestions there and pick things that fit the bill.

Cheers,

--
Thierry Carrez (ttx)

From aj at suse.com Wed Jan 10 14:02:17 2018
From: aj at suse.com (Andreas Jaeger)
Date: Wed, 10 Jan 2018 15:02:17 +0100
Subject: [openstack-dev] Retiring Evoque Projects
In-Reply-To: <20180110130039.GA25100@sm-xps> References: <20180110130039.GA25100@sm-xps> Message-ID: <8b39d544-2d9c-1fef-00b9-863248786192@suse.com>

On 2018-01-10 14:00, Sean McGinnis wrote:
> While doing some cleanup, I noticed the Evoque projects (openstack/evoque and
> openstack/evoque-ui) have not had any activity since the beginning of 2016,
> with the team's last official meeting held in December of 2015.
>
> I contacted the core team for this project, and luckily they were very
> responsive in letting me know that this project is indeed no longer active and
> should probably be retired.
>
> Unless I hear otherwise soon, I will be starting the process to retire this
> project and clean up things like its zuul and gerrit configuration.

Thanks, see also http://lists.openstack.org/pipermail/openstack-dev/2017-December/125352.html,

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From jungleboyj at gmail.com Wed Jan 10 15:36:25 2018
From: jungleboyj at gmail.com (Jay S Bryant)
Date: Wed, 10 Jan 2018 10:36:25 -0500
Subject: [openstack-dev] [cinder] Rocky PTG Planning Etherpad ...
Message-ID: <0f654a73-34bb-228e-921e-05ffed539cd2@gmail.com>

Team,

Hard to believe that Queens is wrapping up already and that we need to be thinking about the PTG in Dublin ... but here it is.

I have started an etherpad [1] to record your planned attendance and any topics you want to cover at the PTG.  Just get them listed at the top and I will organize them across the days once we know what all there is to discuss.

Hope you all are able to join us!  I am looking forward to another productive PTG!
Jay (jungleboyj) [1] https://etherpad.openstack.org/p/cinder-ptg-rocky From mrostecki at suse.com Wed Jan 10 14:58:36 2018 From: mrostecki at suse.com (Michal Rostecki) Date: Wed, 10 Jan 2018 15:58:36 +0100 Subject: [openstack-dev] [kubernetes-python-client] Failed to get service list In-Reply-To: References: Message-ID: On 01/10/2018 07:40 AM, Lingxian Kong wrote: > I submitted an issue in github[1] the other day but didn't get any > response, try my luck to attract attention here in case someone else has > the same problem or already has a solution I didn't know, or hopefully, I > missed something. > This is not the correct mailing list to talk about that project. Kubernetes-incubator is a part of Kubernetes community, not OpenStack. If you have a problem with reaching developers of python-client on github, I recommend to use Kubernetes Slack. Cheers, Michal From sean.mcginnis at gmx.com Wed Jan 10 16:56:12 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 10 Jan 2018 10:56:12 -0600 Subject: [openstack-dev] Retiring Evoque Projects In-Reply-To: <8b39d544-2d9c-1fef-00b9-863248786192@suse.com> References: <20180110130039.GA25100@sm-xps> <8b39d544-2d9c-1fef-00b9-863248786192@suse.com> Message-ID: <20180110165611.GA3008@sm-xps> > > > > Unless I hear otherwise soon, I will be starting the process to retire this > > project and clean up things like its zuul and gerrit configuration. > > Thanks, see also > http://lists.openstack.org/pipermail/openstack-dev/2017-December/125352.html, > > Andreas Oh, sorry Andreas, I completely missed or ignored that one. I guess this is confirmation that we should go ahead with the retirement. I have the first patch started and have the next set queued up for once that lands. While going through my clean up I did notice some other projects that appear to be dormant. I am hoping to continue this through and clean up some more soon. Sean From pkovar at redhat.com Wed Jan 10 17:00:34 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 10 Jan 2018 18:00:34 +0100 Subject: [openstack-dev] [docs] Documentation meeting minutes for 2018-01-10 In-Reply-To: <20180110120522.0ebb310b091537c7015534c6@redhat.com> References: <20180110120522.0ebb310b091537c7015534c6@redhat.com> Message-ID: <20180110180034.9dc6f9470c6e4ff11f49f1be@redhat.com> ======================= #openstack-doc: docteam ======================= Meeting started by pkovar at 16:00:32 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/docteam/2018/docteam.2018-01-10-16.00.log.html . Meeting summary --------------- * roll call (pkovar, 16:00:49) * PDF builds (pkovar, 16:04:06) * LINK: https://review.openstack.org/#/c/509297/ (pkovar, 16:04:15) * Docs retention policy changes (pkovar, 16:08:59) * LINK: http://specs.openstack.org/openstack/docs-specs/specs/queens/retention-policy.html (pkovar, 16:09:07) * we have a pending review for adding deprecation badges (pkovar, 16:10:11) * LINK: https://review.openstack.org/#/c/530142/ (pkovar, 16:10:23) * Rocky PTG (pkovar, 16:12:12) * LINK: https://www.openstack.org/ptg/ (pkovar, 16:12:29) * Planning etherpad for docs+i18n created (pkovar, 16:12:38) * LINK: https://etherpad.openstack.org/p/docs-i18n-ptg-rocky (pkovar, 16:12:53) * Sign up and tell us your preference wrt parcel time into small chunks or have full-day focus on one team agenda? 
(pkovar, 16:13:02)
* Bug Triage Team (pkovar, 16:14:50)
* you can still sign up (pkovar, 16:15:09)
* LINK: https://wiki.openstack.org/wiki/Documentation/SpecialityTeams (pkovar, 16:15:11)
* PDF builds (pkovar, 16:16:26)
* LINK: http://lists.openstack.org/pipermail/openstack-dev/2017-November/124863.html (ianychoi, 16:17:14)
* updating PTI will be needed in order to unblock the pdf spec (pkovar, 16:42:32)
* let's discuss again before the rocky ptg and then try to work together on the update at the ptg (pkovar, 16:43:08)
* LINK: https://github.com/openstack/openstackdocstheme/commit/9219e0b38838bb8c788849b792180e4502e2b3a6 (pkovar, 16:44:55)
* LINK: https://review.openstack.org/#/c/532163/ (stephenfin, 16:48:49)
* openstackdocstheme 1.18.0 on the way but blocked by https://review.openstack.org/#/c/532163/ (pkovar, 16:49:06)
* Open discussion (pkovar, 16:53:38)

Meeting ended at 16:58:55 UTC.

People present (lines said)
---------------------------
* pkovar (73)
* ianychoi (33)
* stephenfin (14)
* tosky (3)
* openstack (3)
* d0ugal (1)

Generated by `MeetBot`_ 0.1.4

From inc007 at gmail.com Wed Jan 10 17:03:12 2018
From: inc007 at gmail.com (Michał Jastrzębski)
Date: Wed, 10 Jan 2018 09:03:12 -0800
Subject: [openstack-dev] [kolla] PTL non candidacy
Message-ID:

Hello,

A bit earlier than usual, but I'd like to say that I won't be running for PTL reelection for the Rocky cycle. I had the privilege of being PTL of Kolla for the last 3 cycles and I would like to thank the Kolla community for this opportunity and trust. I'm very proud of what we've accomplished over the last 3 releases and I'm sure we will accomplish even greater things in the future!

It's good for a project to change leadership every now and then. I would encourage everyone in the community to consider running; I can promise that this job is ... very interesting ;) and extremely rewarding!

Thank you all for your support and please support the new PTL as much as you supported me.

Regards,
Michal

From cboylan at sapwetik.org Wed Jan 10 18:11:31 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Wed, 10 Jan 2018 10:11:31 -0800
Subject: [openstack-dev] [All][Infra] Meltdown Patching
Message-ID: <1515607891.560347.1230895440.2E5B4A18@webmail.messagingengine.com>

Hello everyone,

As a general heads up, Ubuntu has published new kernels which enable kernel page table isolation to address the Meltdown vulnerability that made the news last week. The infra team is currently working through patching our Ubuntu servers to pick up these fixes. Unfortunately patching does require reboots, so you may notice some service outages as we roll through and update things.

As a side note, all of our CentOS servers were patched last week when CentOS published new kernels. We managed to do these with no service outages, but won't be so lucky with one-off services running on Ubuntu.

Thank you for your patience and feel free to ask if you have any questions related to this or anything else really.
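If you're curious whether a given host actually picked up the fix after its reboot, the patched kernels report the page table isolation state at boot time, so something along these lines should show it (the exact message text may vary between kernel versions):

    dmesg | grep -i 'page table'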
Clark From anteaya at anteaya.info Wed Jan 10 18:21:19 2018 From: anteaya at anteaya.info (Anita Kuno) Date: Wed, 10 Jan 2018 13:21:19 -0500 Subject: [openstack-dev] [All][Infra] Meltdown Patching In-Reply-To: <1515607891.560347.1230895440.2E5B4A18@webmail.messagingengine.com> References: <1515607891.560347.1230895440.2E5B4A18@webmail.messagingengine.com> Message-ID: <334baab4-0565-657c-13d5-0fcb3fce0200@anteaya.info> On 2018-01-10 01:11 PM, Clark Boylan wrote: > Hello everyone, > > As a general heads up Ubuntu has published new kernels which enable kernel page table isolation to address the meltdown vulnerability that made the news last week. The infra team is currently working through patching our Ubuntu servers to pick up these fixes. Unfortunately patching does require reboots so you may notice some service outages as we roll through and update things. > > As a side note all of our CentOS servers were patched last week when CentOS published new kernels. We managed to do these with no service outages, but won't be so lucky with one off services running on Ubuntu. > > Thank you for your patience and feel free to ask if you have any questions related to this or anything else really. > > Clark > Thank you Clark, Anita. From gr at ham.ie Wed Jan 10 19:10:30 2018 From: gr at ham.ie (Graham Hayes) Date: Wed, 10 Jan 2018 19:10:30 +0000 Subject: [openstack-dev] [designate] state of pdns4 backend In-Reply-To: <5A53F878020000900001E0E0@prv-mh.provo.novell.com> References: <5A53F878020000900001E0E0@prv-mh.provo.novell.com> Message-ID: Hi Ritesh, see in line: On 08/01/18 23:02, Ritesh Anand wrote: > Hi Stackers, > > I see that we moved from PowerDNS Backend to PDNS4 backend, I have a few > questions in that regard: > > 1. Should powerdns 3.4 (with PowerDNS backend) continue to work fine on > Pike OpenStack? Yes - as long as powerdns 3.x does not change the DB schema, it should work fine. > 2. Why did we change the default backend to BIND9? There is a mix of pdns packages available across distros - e.g. trusty had 3.x and xenial had 4.x The only common backend was bind9 - so we swapped the devstack default to bind9. > 3. How feasible is moving from one backend to other? Say if we move from > PowerDNS to BIND9 backend, if I generate BIND zone files from PowerDNS > mysql backed, and make necessary designate config changes, is that > sufficient? Moving from one to another is not that simple unfortunately. We currently do not have a way, other than adding a new target with the new driver type, forcing things to sync on to it (may require some additional tooling right now), and removing the old target. > Thanks again for your help! > Any other questions, please ask, here or in #openstack-dns :) Thanks, Graham > Best, > Ritesh > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From mriedemos at gmail.com Wed Jan 10 19:34:33 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 10 Jan 2018 13:34:33 -0600 Subject: [openstack-dev] [All][Infra] Meltdown Patching In-Reply-To: <1515607891.560347.1230895440.2E5B4A18@webmail.messagingengine.com> References: <1515607891.560347.1230895440.2E5B4A18@webmail.messagingengine.com> Message-ID: <615d76b9-d963-a3e6-683f-d5d558a92ca0@gmail.com> On 1/10/2018 12:11 PM, Clark Boylan wrote: > or anything else really. Clark, where do babies come from? -- Thanks, Matt From jimmy at openstack.org Wed Jan 10 19:41:57 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 10 Jan 2018 13:41:57 -0600 Subject: [openstack-dev] [All][Infra] Meltdown Patching In-Reply-To: <615d76b9-d963-a3e6-683f-d5d558a92ca0@gmail.com> References: <1515607891.560347.1230895440.2E5B4A18@webmail.messagingengine.com> <615d76b9-d963-a3e6-683f-d5d558a92ca0@gmail.com> Message-ID: <5A566C85.908@openstack.org> When a patch loves a bug very, very much.... Matt Riedemann wrote: > On 1/10/2018 12:11 PM, Clark Boylan wrote: >> or anything else really. > > Clark, where do babies come from? > From melwittt at gmail.com Wed Jan 10 20:34:50 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 10 Jan 2018 12:34:50 -0800 Subject: [openstack-dev] zuulv3 log structure and format grumblings In-Reply-To: <1515177839.586364.1225592056.79BA9B04@webmail.messagingengine.com> References: <40cc7e65-bfc1-c30c-edb2-dbb09b4a3523@gmail.com> <1515177839.586364.1225592056.79BA9B04@webmail.messagingengine.com> Message-ID: <49d0ae85-e872-13da-6394-d027e99639d3@gmail.com> On Fri, 05 Jan 2018 10:43:59 -0800, Clark Boylan wrote: > To expand a bit more on that what we are attempting to do is port the log handling code in devstack-gate [0] to zuul v3 jobs living in tempest [1]. The new job in tempest itself relies on the ansible process-test-results role which can be found here [2]. Chances are something in [1] and/or [2] will have to be updated to match the behavior in [0]. > > [0]https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/functions.sh#n524 > [1]https://git.openstack.org/cgit/openstack/tempest/tree/playbooks/post-tempest.yaml#n8 > [2]http://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/process-test-results Thanks for the pointer. So far, I can't tell what's going wrong but I noticed that none of the items in post-tempest.yaml are making it to controller/logs/. The tempest.conf is missing under controller/logs/etc, the tempest.log is missing, accounts.yaml is missing, along with testr_results.html. The job-output shows that the post-tempest.yaml is being executed [1] but the results aren't making it to logs/. [1] http://logs.openstack.org/95/523395/14/gate/tempest-full/ea04d53/job-output.txt.gz#_2018-01-10_19_15_27_228060 From anlin.kong at gmail.com Wed Jan 10 20:44:10 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Thu, 11 Jan 2018 09:44:10 +1300 Subject: [openstack-dev] [kubernetes-python-client] Failed to get service list In-Reply-To: References: Message-ID: Thanks for the reminder, Michal. I sent the email here because the client library is dependency of several openstack projects, the issue I found may cause potential problems to them, and I also want to get some hints if they already solved that. 
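In the meantime, the workaround I'm experimenting with is to bypass the generated model deserialization (which is what raises the ValueError) and parse the raw response myself. Roughly like this - just a sketch, I haven't fully verified it yet:

    import json

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # With _preload_content=False the generated code returns the raw
    # urllib3 response instead of building a V1ServiceList, so the
    # "items must not be None" check is never triggered.
    resp = v1.list_namespaced_service('default', _preload_content=False)
    body = json.loads(resp.data)
    services = body.get('items') or []  # empty list when nothing is deployed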
Cheers, Lingxian Kong (Larry) On Thu, Jan 11, 2018 at 3:58 AM, Michal Rostecki wrote: > On 01/10/2018 07:40 AM, Lingxian Kong wrote: > > I submitted an issue in github[1] the other day but didn't get any > > response, try my luck to attract attention here in case someone else has > > the same problem or already has a solution I didn't know, or hopefully, I > > missed something. > > > > This is not the correct mailing list to talk about that project. > Kubernetes-incubator is a part of Kubernetes community, not OpenStack. > If you have a problem with reaching developers of python-client on github, > I recommend to use Kubernetes Slack. > > Cheers, > Michal > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Jan 10 21:40:01 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 10 Jan 2018 16:40:01 -0500 Subject: [openstack-dev] [Release-job-failures][release][nova][infra] Release of openstack/nova failed In-Reply-To: References: Message-ID: <1515620295-sup-8508@lrrr.local> Excerpts from zuul's message of 2018-01-10 01:38:45 +0000: > Build failed. > > - release-openstack-python-without-pypi http://logs.openstack.org/d6/d6ce901fe280004edbe2f27d8af374ff905161d6/release/release-openstack-python-without-pypi/10069b4/ : FAILURE in 4m 23s > - announce-release announce-release : SKIPPED > The failure from [1] is during the step where the "sibling" packages are installed and says "pip python module is required". Do we need to update a bindep file somewhere? [1] http://logs.openstack.org/d6/d6ce901fe280004edbe2f27d8af374ff905161d6/release/release-openstack-python-without-pypi/10069b4/ara/result/eead3240-2a6f-40ba-8f59-9d80bc408603/ From fungi at yuggoth.org Wed Jan 10 21:54:47 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Jan 2018 21:54:47 +0000 Subject: [openstack-dev] [Release-job-failures][release][nova][infra] Release of openstack/nova failed In-Reply-To: <1515620295-sup-8508@lrrr.local> References: <1515620295-sup-8508@lrrr.local> Message-ID: <20180110215447.n2pfddr3u36zli35@yuggoth.org> On 2018-01-10 16:40:01 -0500 (-0500), Doug Hellmann wrote: [...] > The failure from [1] is during the step where the "sibling" > packages are installed and says "pip python module is required". [...] There were some "bad" images in production earlier which were missing pip. When that was caught they were rolled back to the previous working images so this should now succeed if rerun. I'll catch up with you in #openstack-release about reenqueuing the tag for it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jschluet at redhat.com Wed Jan 10 22:10:13 2018 From: jschluet at redhat.com (Jon Schlueter) Date: Wed, 10 Jan 2018 17:10:13 -0500 Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient In-Reply-To: References: Message-ID: On Thu, Nov 23, 2017 at 4:12 PM, gordon chung wrote: > > > > On 2017-11-22 04:18 AM, Julien Danjou wrote: > > Hi, > > > > Now that the Ceilometer API is gone, we really don't need > > ceilometerclient anymore. 
I've proposed a set of patches to retire it: > > > > https://review.openstack.org/#/c/522183/ > > So my question here is are we missing a process check for retiring a project that is still in the requirements of several other OpenStack projects? I went poking around and found that rally [4], heat [1], aodh [3] and mistral [2] still had references to ceilometerclient in the RPM packaging in RDO Queens, and on digging a bit more they were still in the requirements for at least those 4 projects. I would think that a discussion around retiring a project should also include at least enumerating which projects are currently consuming it [5]. That way a little bit of pressure on those consumers can be exerted to evaluate their usage of an about to be retired project. It shouldn't stop the discussions around retiring a project just a data point for decision making. Thanks Jon Schlueter [1] https://review.openstack.org/532617 - heat [2] https://review.openstack.org/532610 - mistral [3] https://review.openstack.org/526246 - aodh [4] https://github.com/openstack/rally/blob/master/requirements.txt#L34 [5] http://codesearch.openstack.org/?q=python-ceilometerclient&i=nope&files=requirements.txt From andrea.frittoli at gmail.com Wed Jan 10 22:19:13 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Wed, 10 Jan 2018 22:19:13 +0000 Subject: [openstack-dev] zuulv3 log structure and format grumblings In-Reply-To: <49d0ae85-e872-13da-6394-d027e99639d3@gmail.com> References: <40cc7e65-bfc1-c30c-edb2-dbb09b4a3523@gmail.com> <1515177839.586364.1225592056.79BA9B04@webmail.messagingengine.com> <49d0ae85-e872-13da-6394-d027e99639d3@gmail.com> Message-ID: On Wed, Jan 10, 2018 at 8:35 PM melanie witt wrote: > On Fri, 05 Jan 2018 10:43:59 -0800, Clark Boylan wrote: > > To expand a bit more on that what we are attempting to do is port the > log handling code in devstack-gate [0] to zuul v3 jobs living in tempest > [1]. The new job in tempest itself relies on the ansible > process-test-results role which can be found here [2]. Chances are > something in [1] and/or [2] will have to be updated to match the behavior > in [0]. > > > > [0] > https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/functions.sh#n524 > > [1] > https://git.openstack.org/cgit/openstack/tempest/tree/playbooks/post-tempest.yaml#n8 > > [2] > http://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/process-test-results > > Thanks for the pointer. So far, I can't tell what's going wrong but I > noticed that none of the items in post-tempest.yaml are making it to > controller/logs/. The tempest.conf is missing under controller/logs/etc, > the tempest.log is missing, accounts.yaml is missing, along with > testr_results.html. The job-output shows that the post-tempest.yaml is > being executed [1] but the results aren't making it to logs/. > Thanks for looking into this. The issue with the missing tempest log and config is related to a change in stage-dir on devstack side. The ansible user dir is the correct stage dir to be used, but tempest was still setting /opt/stack in its post. The fix for that is here [1]. In fact we should be able to stop invoking stage-output in tempest post and only extend the zuul_copy_output variable in the devstack-tempest job definition, I'll look into that in the near future. 
Andrea Frittoli (andreaf)

[1] https://review.openstack.org/532649

> > [1]
> > http://logs.openstack.org/95/523395/14/gate/tempest-full/ea04d53/job-output.txt.gz#_2018-01-10_19_15_27_228060
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sean.mcginnis at gmx.com Wed Jan 10 22:20:13 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 10 Jan 2018 16:20:13 -0600
Subject: [openstack-dev] Retirement of astara repos?
Message-ID: <572FF9CF-9AB5-4CBA-A4C8-26E7A012309E@gmx.com>

While going through various repos looking at things to be cleaned up, I noticed the last commit for openstack/astara was well over a year ago. Based on this and the little bit I have followed with this project, it’s my understanding that there is no further work planned with Astara.

Should these repos be retired at this point? Or is there a reason to keep things around?

Thanks,
Sean McGinnis (smcginnis)

From gord at live.ca Wed Jan 10 23:28:05 2018
From: gord at live.ca (gordon chung)
Date: Wed, 10 Jan 2018 23:28:05 +0000
Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient
In-Reply-To: References: Message-ID:

On 2018-01-10 05:10 PM, Jon Schlueter wrote:
> I would think that a discussion around retiring a project should also
> include at least enumerating
> which projects are currently consuming it [5]. That way a little bit
> of pressure on those consumers
> can be exerted to evaluate their usage of an about to be retired
> project. It shouldn't stop the
> discussions around retiring a project just a data point for decision making.

this is a very valid point. this is something overlooked on my part.

out of curiosity, what's the effect of 'retiring' something in openstack while services still reference ceilometerclient? is it that it will not get packaged in centos/ubuntu and therefore will be missing requirements when installed? or can you not build the package at all?

cheers,
So if there's a bug in the library, there's no convenient way for anyone to fix it. Doug From mordred at inaugust.com Wed Jan 10 23:40:28 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 10 Jan 2018 17:40:28 -0600 Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient In-Reply-To: References: Message-ID: On 01/10/2018 04:10 PM, Jon Schlueter wrote: > On Thu, Nov 23, 2017 at 4:12 PM, gordon chung wrote: >> >> >> >> On 2017-11-22 04:18 AM, Julien Danjou wrote: >>> Hi, >>> >>> Now that the Ceilometer API is gone, we really don't need >>> ceilometerclient anymore. I've proposed a set of patches to retire it: >>> >>> https://review.openstack.org/#/c/522183/ >>> > > > So my question here is are we missing a process check for retiring a > project that is still in > the requirements of several other OpenStack projects? > > I went poking around and found that rally [4], heat [1], aodh [3] and > mistral [2] still had references to > ceilometerclient in the RPM packaging in RDO Queens, and on digging a > bit more they > were still in the requirements for at least those 4 projects. > > I would think that a discussion around retiring a project should also > include at least enumerating > which projects are currently consuming it [5]. That way a little bit > of pressure on those consumers > can be exerted to evaluate their usage of an about to be retired > project. It shouldn't stop the > discussions around retiring a project just a data point for decision making. It's worth pointing out that openstacksdk has ceilometer REST API support in it, although it is special-cased since ceilometer was retired before we even made the service-types-authority: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/openstack/connection.py#n234 We can either keep it there indefinitely (there is no cost to keeping it, other than that one "self._load('metric')" line) - or we could take this opportunity to purge it from sdk as well. BUT - if we're going to remove it from SDK I'd rather we do it in the very-near-future because we're getting closer to a 1.0 for SDK and once that happens if ceilometer is still there ceilometer support will remain until the end of recorded history. We could keep it and migrate the heat/mistral/rally/aodh ceilometerclient uses to be SDK uses (although heaven knows how we test that without a ceilometer in devstack) I honestly do not have a strong opinion in either direction and welcome input on what people would like to see done. Monty From doug at doughellmann.com Wed Jan 10 23:44:01 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 10 Jan 2018 18:44:01 -0500 Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient In-Reply-To: References: Message-ID: <1515627800-sup-7550@lrrr.local> Excerpts from Monty Taylor's message of 2018-01-10 17:40:28 -0600: > On 01/10/2018 04:10 PM, Jon Schlueter wrote: > > On Thu, Nov 23, 2017 at 4:12 PM, gordon chung wrote: > >> > >> > >> > >> On 2017-11-22 04:18 AM, Julien Danjou wrote: > >>> Hi, > >>> > >>> Now that the Ceilometer API is gone, we really don't need > >>> ceilometerclient anymore. I've proposed a set of patches to retire it: > >>> > >>> https://review.openstack.org/#/c/522183/ > >>> > > > > > > So my question here is are we missing a process check for retiring a > > project that is still in > > the requirements of several other OpenStack projects? 
> > > > I went poking around and found that rally [4], heat [1], aodh [3] and
> > mistral [2] still had references to
> > ceilometerclient in the RPM packaging in RDO Queens, and on digging a
> > bit more they
> > were still in the requirements for at least those 4 projects.
> >
> > I would think that a discussion around retiring a project should also
> > include at least enumerating
> > which projects are currently consuming it [5]. That way a little bit
> > of pressure on those consumers
> > can be exerted to evaluate their usage of an about to be retired
> > project. It shouldn't stop the
> > discussions around retiring a project, just a data point for decision making.
>
> It's worth pointing out that openstacksdk has ceilometer REST API
> support in it, although it is special-cased since ceilometer was retired
> before we even made the service-types-authority:
>
> http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/openstack/connection.py#n234
>
> We can either keep it there indefinitely (there is no cost to keeping
> it, other than that one "self._load('metric')" line) - or we could take
> this opportunity to purge it from sdk as well.
>
> BUT - if we're going to remove it from SDK I'd rather we do it in the
> very-near-future because we're getting closer to a 1.0 for SDK and once
> that happens if ceilometer is still there ceilometer support will remain
> until the end of recorded history.
>
> We could keep it and migrate the heat/mistral/rally/aodh
> ceilometerclient uses to be SDK uses (although heaven knows how we test
> that without a ceilometer in devstack)
>
> I honestly do not have a strong opinion in either direction and welcome
> input on what people would like to see done.
>
> Monty
>

If ceilometer itself is deprecated, do we need to maintain support in any
of our tools?

Doug

From victoria at vmartinezdelacruz.com  Wed Jan 10 23:49:49 2018
From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=)
Date: Wed, 10 Jan 2018 20:49:49 -0300
Subject: [openstack-dev] [NEEDACTION] Call for mentors for Google Summer of Code 2018
Message-ID:

Hi all,

Google Summer of Code (GSoC) is a program that matches mentoring
organizations with college and university student developers who are paid
to write open source code. It has been around since 2005, and we have been
accepted as a mentor organization on only one occasion (2014), with a
great outcome both for the interns and for our community.

We expect to be able to join this year again, but for that, we will need
your help.

Mentors

We need to submit our application as a mentoring organization, but for
that, we need to have a clear outline of what different projects we have
for interns to work on.

*** The deadline for mentoring organization applications is 23/01/2018. ***

If you are interested in mentoring but you have doubts about it, please
feel free to reach us here or on #openstack-gsoc. We will be happy to
answer any doubts you may have about mentoring for this internship. Also,
you can check out this guide [0].

If you are already convinced that you want to join us as a mentor for this
round, add your name to the OpenStack Google Summer of Code 2018 wiki page
[1] and add your project ideas in [2]. Make sure you leave your contact
information in the OpenStack GSoC 2018 wiki and that you add all the
important details about the project idea. Also reach us if there is
something you are not certain about.

Looking forward to seeing GSoC happening again in our community!
Thanks, Victoria [0] http://en.flossmanuals.net/gsocmentoring/ [1] https://wiki.openstack.org/wiki/GSoC2018 [2] https://wiki.openstack.org/wiki/Internship_ideas -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Thu Jan 11 00:08:52 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 10 Jan 2018 18:08:52 -0600 Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient In-Reply-To: <1515627800-sup-7550@lrrr.local> References: <1515627800-sup-7550@lrrr.local> Message-ID: On 01/10/2018 05:44 PM, Doug Hellmann wrote: > Excerpts from Monty Taylor's message of 2018-01-10 17:40:28 -0600: >> On 01/10/2018 04:10 PM, Jon Schlueter wrote: >>> On Thu, Nov 23, 2017 at 4:12 PM, gordon chung wrote: >>>> >>>> >>>> >>>> On 2017-11-22 04:18 AM, Julien Danjou wrote: >>>>> Hi, >>>>> >>>>> Now that the Ceilometer API is gone, we really don't need >>>>> ceilometerclient anymore. I've proposed a set of patches to retire it: >>>>> >>>>> https://review.openstack.org/#/c/522183/ >>>>> >>> >>> >>> So my question here is are we missing a process check for retiring a >>> project that is still in >>> the requirements of several other OpenStack projects? >>> >>> I went poking around and found that rally [4], heat [1], aodh [3] and >>> mistral [2] still had references to >>> ceilometerclient in the RPM packaging in RDO Queens, and on digging a >>> bit more they >>> were still in the requirements for at least those 4 projects. >>> >>> I would think that a discussion around retiring a project should also >>> include at least enumerating >>> which projects are currently consuming it [5]. That way a little bit >>> of pressure on those consumers >>> can be exerted to evaluate their usage of an about to be retired >>> project. It shouldn't stop the >>> discussions around retiring a project just a data point for decision making. >> >> It's worth pointing out that openstacksdk has ceilometer REST API >> support in it, although it is special-cased since ceilometer was retired >> before we even made the service-types-authority: >> >> http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/openstack/connection.py#n234 Whoops, that's not ceilometer - that's gnocchi I think? ceilometer support *does* have a service-types-authority reference so *isn't* special-cased and is here: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/openstack/meter >> We can either keep it there indefinitely (there is no cost to keeping >> it, other than that one "self._load('metric')" line) - or we could take >> this opportunity to purge it from sdk as well. >> >> BUT - if we're going to remove it from SDK I'd rather we do it in the >> very-near-future because we're getting closer to a 1.0 for SDK and once >> that happens if ceilometer is still there ceilometer support will remain >> until the end of recorded history. >> >> We could keep it and migrate the heat/mistral/rally/aodh >> ceilometerclient uses to be SDK uses (although heaven knows how we test >> that without a ceilometer in devstack) >> >> I honestly do not have a strong opinion in either direction and welcome >> input on what people would like to see done. >> >> Monty >> > > If ceilometer itself is deprecated, do we need to maintain support > in any of our tools? We do not - although if we had had ceilometer support in shade I would be very adamant that we continue to support it to the best of our ability for forever, since you never know who out there is running on an old cloud that still has it. 
This is why I could go either way personally from an SDK perspective - we
don't have a 1.0 release of SDK yet, so if we do think it's best to just
clean house, now's the time.

From mriedemos at gmail.com  Thu Jan 11 00:13:33 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 10 Jan 2018 18:13:33 -0600
Subject: [openstack-dev] [nova][cinder] nova support for volume multiattach
Message-ID: <6b28b91d-5004-e37d-cfa9-04a5eff537dc@gmail.com>

Hi everyone,

I wanted to point out that the nova API patch for volume multiattach
support is available for review:

https://review.openstack.org/#/c/271047/

It's actually a series of changes, but that is the last one that enables
the feature in nova.

It relies on the 2.59 compute API microversion to be able to create a
server from a multiattach volume or to attach a multiattach volume to a
server. We do not allow attaching a multiattach volume to a shelved
offloaded server, to be consistent with the 2.49 microversion for tagged
attach.

When creating a server from a multiattach volume, the compute API will
check to see that all nova-compute services in the deployment have been
upgraded to the service version that supports the multiattach code in the
libvirt driver. Similarly, when attaching a multiattach volume to an
existing server instance, the compute API will check that the compute
hosting the instance is new enough to support multiattach volumes (has
been upgraded) and it's using a virt driver that supports the capability
(currently only the libvirt driver).

There are more details in the release note but I wanted to point out those
restrictions.

There is also a set of tempest integration tests here:

https://review.openstack.org/#/c/266605/

Those will be tested in the nova-multiattach CI job:

https://review.openstack.org/#/c/532689/

Due to restrictions with libvirt, multiattach support is only available if
qemu<2.10 or libvirt>=3.10. The test environment takes this into account
for upstream testing.

Nova will rely on Cinder microversion >=3.44, which was added in Queens,
for safe detach of a multiattach volume.

There is a design spec for Cinder which describes how volume multiattach
will be supported in Cinder and how operators will be able to configure
volume types and Cinder policy rules for multiattach support:

https://specs.openstack.org/openstack/cinder-specs/specs/queens/enable-multiattach.html

Several people from various companies have been pushing this hard in the
Queens release and we're two weeks away from feature freeze. I'm on
vacation next week also, but I have a feeling that this will get done
finally in Queens.

--

Thanks,

Matt

From gord at live.ca  Thu Jan 11 00:18:23 2018
From: gord at live.ca (gordon chung)
Date: Thu, 11 Jan 2018 00:18:23 +0000
Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient
In-Reply-To: <1515627800-sup-7550@lrrr.local>
References: <1515627800-sup-7550@lrrr.local>
Message-ID:

On 2018-01-10 06:44 PM, Doug Hellmann wrote:
>> It's worth pointing out that openstacksdk has ceilometer REST API
>> support in it, although it is special-cased since ceilometer was retired
>> before we even made the service-types-authority:

so ceilometer's REST API does not exist anymore. i don't believe it was
even packaged in Pike (at least i don't have an rpm for it in my
environment).
>> >> http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/openstack/connection.py#n234 >> >> We can either keep it there indefinitely (there is no cost to keeping >> it, other than that one "self._load('metric')" line) - or we could take >> this opportunity to purge it from sdk as well. >> >> BUT - if we're going to remove it from SDK I'd rather we do it in the >> very-near-future because we're getting closer to a 1.0 for SDK and once >> that happens if ceilometer is still there ceilometer support will remain >> until the end of recorded history. if it was removed from SDK, does it affect installations from pre-Pike? technically the API code exists prior to Pike (but we've been telling people for a year+ prior to that, to stop using it). if it only affects Queens onwards, i'm an easy yes to removing for openstacksdk 1.0. >> >> We could keep it and migrate the heat/mistral/rally/aodh >> ceilometerclient uses to be SDK uses (although heaven knows how we test >> that without a ceilometer in devstack) >> i'm guessing it's not tested anywhere as we've removed the API code for a few months now and have not heard anyone complain about a broken gate. > If ceilometer itself is deprecated, do we need to maintain support > in any of our tools? just to clarify, ceilometer itself is **not** deprecated. it just doesn't have an API as there is currently nothing to query/interact with remotely. jd had an idea how to manage/monitor existing agents but that is unrealised currently. the workflow remains as it has been: - ceilometer agents generate/normalise data relating to openstack resources - ceilometer data is pushed to a configurable target for consumption - gnocchi, panko, whatever you want - you interact with the data according to the specific targets. cheers, -- gord From doug at doughellmann.com Thu Jan 11 00:22:51 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 10 Jan 2018 19:22:51 -0500 Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient In-Reply-To: References: <1515627800-sup-7550@lrrr.local> Message-ID: <1515630117-sup-6180@lrrr.local> Excerpts from Monty Taylor's message of 2018-01-10 18:08:52 -0600: > On 01/10/2018 05:44 PM, Doug Hellmann wrote: > > Excerpts from Monty Taylor's message of 2018-01-10 17:40:28 -0600: > >> On 01/10/2018 04:10 PM, Jon Schlueter wrote: > >>> On Thu, Nov 23, 2017 at 4:12 PM, gordon chung wrote: > >>>> > >>>> > >>>> > >>>> On 2017-11-22 04:18 AM, Julien Danjou wrote: > >>>>> Hi, > >>>>> > >>>>> Now that the Ceilometer API is gone, we really don't need > >>>>> ceilometerclient anymore. I've proposed a set of patches to retire it: > >>>>> > >>>>> https://review.openstack.org/#/c/522183/ > >>>>> > >>> > >>> > >>> So my question here is are we missing a process check for retiring a > >>> project that is still in > >>> the requirements of several other OpenStack projects? > >>> > >>> I went poking around and found that rally [4], heat [1], aodh [3] and > >>> mistral [2] still had references to > >>> ceilometerclient in the RPM packaging in RDO Queens, and on digging a > >>> bit more they > >>> were still in the requirements for at least those 4 projects. > >>> > >>> I would think that a discussion around retiring a project should also > >>> include at least enumerating > >>> which projects are currently consuming it [5]. That way a little bit > >>> of pressure on those consumers > >>> can be exerted to evaluate their usage of an about to be retired > >>> project. 
It shouldn't stop the
> >>> discussions around retiring a project, just a data point for decision making.
> >>
> >> It's worth pointing out that openstacksdk has ceilometer REST API
> >> support in it, although it is special-cased since ceilometer was retired
> >> before we even made the service-types-authority:
> >>
> >> http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/openstack/connection.py#n234
>
> Whoops, that's not ceilometer - that's gnocchi I think?
>
> ceilometer support *does* have a service-types-authority reference so
> *isn't* special-cased and is here:
>
> http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/openstack/meter
>
> >> We can either keep it there indefinitely (there is no cost to keeping
> >> it, other than that one "self._load('metric')" line) - or we could take
> >> this opportunity to purge it from sdk as well.
> >>
> >> BUT - if we're going to remove it from SDK I'd rather we do it in the
> >> very-near-future because we're getting closer to a 1.0 for SDK and once
> >> that happens if ceilometer is still there ceilometer support will remain
> >> until the end of recorded history.
> >>
> >> We could keep it and migrate the heat/mistral/rally/aodh
> >> ceilometerclient uses to be SDK uses (although heaven knows how we test
> >> that without a ceilometer in devstack)
> >>
> >> I honestly do not have a strong opinion in either direction and welcome
> >> input on what people would like to see done.
> >>
> >> Monty
> >>
> >
> > If ceilometer itself is deprecated, do we need to maintain support
> > in any of our tools?
>
> We do not - although if we had had ceilometer support in shade I would
> be very adamant that we continue to support it to the best of our
> ability for forever, since you never know who out there is running on an
> old cloud that still has it.
>
> This is why I could go either way personally from an SDK perspective -
> we don't have a 1.0 release of SDK yet, so if we do think it's best to
> just clean house, now's the time.
>

I favor dropping support in the SDK. I'm not sure what that means
for the service projects that seem to be using it, though. Do they
actually need it?

Doug

From emilien at redhat.com  Thu Jan 11 00:39:41 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Wed, 10 Jan 2018 16:39:41 -0800
Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient
In-Reply-To: <1515630117-sup-6180@lrrr.local>
References: <1515627800-sup-7550@lrrr.local> <1515630117-sup-6180@lrrr.local>
Message-ID:

I'm in favor of dropping Ceilometer API support in the SDK if we claim 1.0
will support Queens and beyond.

If the SDK has to support previous versions (Pike, Ocata, etc), then we
should warn the SDK users that the Ceilometer API has been deprecated and
removed, so depending on your cloud provider the SDK might not work anymore.

Also, I'm in favor of supporting the Gnocchi API in the SDK if that is
something that makes sense for the Telemetry team.

On Wed, Jan 10, 2018 at 4:22 PM, Doug Hellmann wrote:
> Excerpts from Monty Taylor's message of 2018-01-10 18:08:52 -0600:
>> On 01/10/2018 05:44 PM, Doug Hellmann wrote:
>> > Excerpts from Monty Taylor's message of 2018-01-10 17:40:28 -0600:
>> >> On 01/10/2018 04:10 PM, Jon Schlueter wrote:
>> >>> On Thu, Nov 23, 2017 at 4:12 PM, gordon chung wrote:
>> >>>>
>> >>>>
>> >>>>
>> >>>> On 2017-11-22 04:18 AM, Julien Danjou wrote:
>> >>>>> Hi,
>> >>>>>
>> >>>>> Now that the Ceilometer API is gone, we really don't need
>> >>>>> ceilometerclient anymore.
I've proposed a set of patches to retire it: >> >>>>> >> >>>>> https://review.openstack.org/#/c/522183/ >> >>>>> >> >>> >> >>> >> >>> So my question here is are we missing a process check for retiring a >> >>> project that is still in >> >>> the requirements of several other OpenStack projects? >> >>> >> >>> I went poking around and found that rally [4], heat [1], aodh [3] and >> >>> mistral [2] still had references to >> >>> ceilometerclient in the RPM packaging in RDO Queens, and on digging a >> >>> bit more they >> >>> were still in the requirements for at least those 4 projects. >> >>> >> >>> I would think that a discussion around retiring a project should also >> >>> include at least enumerating >> >>> which projects are currently consuming it [5]. That way a little bit >> >>> of pressure on those consumers >> >>> can be exerted to evaluate their usage of an about to be retired >> >>> project. It shouldn't stop the >> >>> discussions around retiring a project just a data point for decision making. >> >> >> >> It's worth pointing out that openstacksdk has ceilometer REST API >> >> support in it, although it is special-cased since ceilometer was retired >> >> before we even made the service-types-authority: >> >> >> >> http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/openstack/connection.py#n234 >> >> Whoops, that's not ceilometer - that's gnocchi I think? >> >> ceilometer support *does* have a service-types-authority reference so >> *isn't* special-cased and is here: >> >> http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/openstack/meter >> >> >> We can either keep it there indefinitely (there is no cost to keeping >> >> it, other than that one "self._load('metric')" line) - or we could take >> >> this opportunity to purge it from sdk as well. >> >> >> >> BUT - if we're going to remove it from SDK I'd rather we do it in the >> >> very-near-future because we're getting closer to a 1.0 for SDK and once >> >> that happens if ceilometer is still there ceilometer support will remain >> >> until the end of recorded history. >> >> >> >> We could keep it and migrate the heat/mistral/rally/aodh >> >> ceilometerclient uses to be SDK uses (although heaven knows how we test >> >> that without a ceilometer in devstack) >> >> >> >> I honestly do not have a strong opinion in either direction and welcome >> >> input on what people would like to see done. >> >> >> >> Monty >> >> >> > >> > If ceilometer itself is deprecated, do we need to maintain support >> > in any of our tools? >> >> We do not - although if we had had ceilometer support in shade I would >> be very adamant that we continue to support it to the best of our >> ability for forever, since you never know who out there is running on an >> old cloud that still has it. >> >> This is why I could go either way personally from an SDK perspective - >> we don't have a 1.0 release of SDK yet, so if we do think it's best to >> just clean house, now's the time. >> > > I favor dropping support in the SDK. I'm not sure what that means > for the service projects that seem to be using it, though. Do they > actually need it? 
> > Doug
> >
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Emilien Macchi

From ghanshyammann at gmail.com  Thu Jan 11 02:25:18 2018
From: ghanshyammann at gmail.com (Ghanshyam Mann)
Date: Thu, 11 Jan 2018 07:55:18 +0530
Subject: [openstack-dev] [QA] Meeting Thursday Jan 11th at 8:00 UTC
Message-ID:

Hello everyone,

Hope everyone is back from vacation. The QA team is resuming its regular
weekly meeting from today.

The OpenStack QA team IRC meeting will be Thursday, Jan 11th at 8:00 UTC
in the #openstack-meeting channel.

The agenda for the meeting can be found here:

https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_Jan_11th_2018_.280800_UTC.29

Anyone is welcome to add an item to the agenda.

-gmann

From hyangii at gmail.com  Thu Jan 11 04:05:32 2018
From: hyangii at gmail.com (Jae Sang Lee)
Date: Thu, 11 Jan 2018 13:05:32 +0900
Subject: [openstack-dev] [nova] enable vm video, sound card
Message-ID:

Hi stackers,

I want to enable the video and sound drivers of a VM, but there is no code
for this in nova. I found a blueprint about this
(https://blueprints.launchpad.net/nova/+spec/libvirt-spice-video-sound-driver);
it looks almost complete but was never merged. And this is a custom patch
to add video and sound settings to a VM (http://textuploader.com/aij31).

Do you know about this topic, or are you interested in it? I would like to
get some feedback on developing it.

Thanks,
Jaesang

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From coolsvap at gmail.com  Thu Jan 11 05:55:52 2018
From: coolsvap at gmail.com (Swapnil Kulkarni)
Date: Thu, 11 Jan 2018 11:25:52 +0530
Subject: [openstack-dev] Retirement of astara repos?
In-Reply-To: <572FF9CF-9AB5-4CBA-A4C8-26E7A012309E@gmx.com>
References: <572FF9CF-9AB5-4CBA-A4C8-26E7A012309E@gmx.com>
Message-ID:

On Thu, Jan 11, 2018 at 3:50 AM, Sean McGinnis wrote:
> While going through various repos looking at things to be cleaned up, I
> noticed the last commit for openstack/astara was well over a year ago.
> Based on this and the little bit I have followed with this project, it’s
> my understanding that there is no further work planned with Astara.
>
> Should these repos be retired at this point? Or is there a reason to
> keep things around?
>
> Thanks,
>
> Sean McGinnis (smcginnis)
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Sean,

There have been a set of e-mails from Andreas in Dec for the inactive
projects [1] [2] [3] [4] with little or no response. Just FYI.
[1] Astara:
http://lists.openstack.org/pipermail/openstack-dev/2017-December/125350.html
[2] Cerberus:
http://lists.openstack.org/pipermail/openstack-dev/2017-December/125351.html
[3] Evoque:
http://lists.openstack.org/pipermail/openstack-dev/2017-December/125352.html
[4] puppet-apps-site:
http://lists.openstack.org/pipermail/openstack-dev/2017-December/125360.html

~coolsvap

From tbechtold at suse.com  Thu Jan 11 06:48:44 2018
From: tbechtold at suse.com (Thomas Bechtold)
Date: Thu, 11 Jan 2018 07:48:44 +0100
Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient
In-Reply-To: References: <1515627800-sup-7550@lrrr.local>
Message-ID: <76837ea5-884b-8d70-1f5f-ba0941eee185@suse.com>

Hi,

On 11.01.2018 01:18, gordon chung wrote:
>
>
> On 2018-01-10 06:44 PM, Doug Hellmann wrote:
>>> It's worth pointing out that openstacksdk has ceilometer REST API
>>> support in it, although it is special-cased since ceilometer was retired
>>> before we even made the service-types-authority:
>
> so ceilometer's REST API does not exist anymore. i don't believe it was
> even packaged in Pike (at least i don't have an rpm for it in my
> environment).

It was, at least for openSUSE:
https://build.opensuse.org/package/show/Cloud:OpenStack:Pike/openstack-ceilometer

Tom

From zhipengh512 at gmail.com  Thu Jan 11 07:31:05 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Thu, 11 Jan 2018 15:31:05 +0800
Subject: [openstack-dev] [publiccloud-wg]Public Cloud Feature List Hackathon Day 2
Message-ID:

Hi Folks,

Today we are gonna continue to comb through the public cloud feature
list[0] as we did yesterday. Please join the discussion at
#openstack-publiccloud starting from UTC1400.

[0] https://docs.google.com/spreadsheets/d/1Mf8OAyTzZxCKzYHMgBl-QK_2-XSycSkOjqCyMTIedkA/edit?usp=sharing

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aj at suse.com  Thu Jan 11 07:41:54 2018
From: aj at suse.com (Andreas Jaeger)
Date: Thu, 11 Jan 2018 08:41:54 +0100
Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient
In-Reply-To: <76837ea5-884b-8d70-1f5f-ba0941eee185@suse.com>
References: <1515627800-sup-7550@lrrr.local> <76837ea5-884b-8d70-1f5f-ba0941eee185@suse.com>
Message-ID: <67770bc0-32d8-a447-dff9-e9b4f48ec324@suse.com>

On 2018-01-11 07:48, Thomas Bechtold wrote:
> Hi,
>
> On 11.01.2018 01:18, gordon chung wrote:
>>
>>
>> On 2018-01-10 06:44 PM, Doug Hellmann wrote:
>>>> It's worth pointing out that openstacksdk has ceilometer REST API
>>>> support in it, although it is special-cased since ceilometer was
>>>> retired
>>>> before we even made the service-types-authority:
>>
>> so ceilometer's REST API does not exist anymore. i don't believe it was
>> even packaged in Pike (at least i don't have an rpm for it in my
>> environment).
>
> It was, at least for openSUSE:
> https://build.opensuse.org/package/show/Cloud:OpenStack:Pike/openstack-ceilometer

Wrong package - statement still true :-)

https://build.opensuse.org/package/show/Cloud:OpenStack:Pike/python-ceilometerclient

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr.
5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From niu.zglinux at gmail.com  Thu Jan 11 08:33:03 2018
From: niu.zglinux at gmail.com (Zhenguo Niu)
Date: Thu, 11 Jan 2018 16:33:03 +0800
Subject: [openstack-dev] [mogan] Transitioning PTL role to Li Tao
Message-ID:

Hi team,

Because my job responsibilities changed a while ago, I will transition the
PTL role to Li Tao. If anyone has any concerns with this, please reach out
to me.

--
Best Regards,
Zhenguo Niu

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhipengh512 at gmail.com  Thu Jan 11 08:38:34 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Thu, 11 Jan 2018 16:38:34 +0800
Subject: [openstack-dev] [publiccloud-wg]Rocky PTG Planning Etherpad
Message-ID:

Hi Team,

I drafted an initial framework of the etherpad we could use for the Rocky
PTG in Dublin. You are more than welcome to provide input:

https://etherpad.openstack.org/p/publiccloud-wg-ptg-rocky

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhangyujun+zte at gmail.com  Thu Jan 11 08:52:05 2018
From: zhangyujun+zte at gmail.com (Yujun Zhang (ZTE))
Date: Thu, 11 Jan 2018 08:52:05 +0000
Subject: [openstack-dev] [vitrage] rules in vitrage_aggregated_state()
In-Reply-To: References:
Message-ID:

I have almost understood it thanks to your explanation.

The confusion is mainly caused by the naming. I guess the main reason is
that the scope evolved but the naming was not updated with it. For example:

1. `vitrage_aggregated_state` actually applies to both resource state and
alarm severity as defined in `value_properties`. So
`vitrage_aggregated_values` could be a better name.

2. For a data source in static configuration, we may use `static.yaml` as
a fallback. The name `default.yaml` will mislead users into thinking that
it should be applied to a data source configured in "types" but without a
values configuration.

3. The UNDEFINED value is named UNDEFINED_DATASOURCE = "undefined
datasource", which is not consistent with the severity and state
enumerations.

4. The behavior for a data source defined in static without a values
configuration and a data source defined in "types" without a values
configuration is inconsistent. The former will fall back to `default.yaml`
but the latter will lead to an undefined value.

I know it is there for historical reasons and current developers may
already be used to it, but it gives new contributors too many surprises.

What do you think? Shall we amend them?

On Tue, Jan 9, 2018 at 11:29 PM Afek, Ifat (Nokia - IL/Kfar Sava) <
ifat.afek at nokia.com> wrote:

> Hi,
>
> I agree that the code is confusing…
>
> This is part of a change that was made in order to support default states
> for static entities. For example, in the static configuration yaml file you
> can add entities of types ‘switch’ and ‘br-ex’.
In the past, in order to > support states for these new types, you needed to add switch.yaml and > br-ex.yaml under /etc/vitrage/datasources_values, which you would most > likely copy&paste from another datasource. Now, we have under > /etc/vitrage/datasources_values a default.yaml file that is used for all > static entities. > > > > Back to the code, I believe this is the logic: > > > > · If the datasource is part of ‘types’ (as defined in > vitrage.conf) and has states configuration – use it. This is the normal > behavior. > > · If the datasource is not part of ‘types’, we understand that it > was defined in a static configuration file. Use the default states > configuration. I assume that it is somehow handled in the first part of the > if statement (I’m not so familiar with that code) > > · If neither is true – it means that the datasource is “real” and > not static, and was defined in vitrage.conf types. And it also means that > its states configuration is missing, so the state is UNDEFINED. > > > > And to your questions: > > > > 1. the data source is not defined -> the default states should be used > 2. data source defined but state config not exist -> UNDEFINED state > 3. data source defined, state config exist but the state is not found. > -> I believe that somewhere in the first part of the if statement you will > get UNDEFINED > > > > > > Hope that’s more clear now. It might be a good idea to add some comments > to that function… > > > > Best Regards, > > Ifat. > > > > > > *From: *"Yujun Zhang (ZTE)" > *Reply-To: *"OpenStack Development Mailing List (not for usage > questions)" > *Date: *Tuesday, 9 January 2018 at 8:34 > *To: *"OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > *Subject: *Re: [openstack-dev] [vitrage] rules in > vitrage_aggregated_state() > > > > Forgot to paste the link to the related code: > > > > > https://git.openstack.org/cgit/openstack/vitrage/tree/vitrage/entity_graph/mappings/datasource_info_mapper.py#n61 > > > > > > > > On Tue, Jan 9, 2018 at 2:34 PM Yujun Zhang (ZTE) > wrote: > > Hi root causers > > > > I have been inspecting the code about aggregated state recently and have a > question regarding the rules. > > > > The "not" operator in the if clause confuses me. If it is not a configured > data source, how do we apply the aggregation rules? It seems this is > handled in else clause. > > > > if datasource_name in self.datasources_state_confs or \ > > datasource_name *not* in self.conf.datasources.types: ... > > else: > > self.category_normalizer[vitrage_category].set_aggregated_value( > > new_vertex, self.UNDEFINED_DATASOURCE) > > self.category_normalizer[vitrage_category].set_operational_value( > > new_vertex, self.UNDEFINED_DATASOURCE) > > > There are some test case describing the expected behavior. But I couldn't understand the design philosophy behind it. What is expected when > > 1. the data source is not defined > > 2. data source defined but state config not exist > > 3. data source defined, state config exist but the state is not found. > > Could somebody shed some light on it? 
> > > > > > -- > > Yujun Zhang > > > > -- > > Yujun Zhang > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Yujun Zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Thu Jan 11 09:13:09 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Thu, 11 Jan 2018 09:13:09 +0000 Subject: [openstack-dev] [vitrage] rules in vitrage_aggregated_state() In-Reply-To: References: Message-ID: <88E071AB-DE7D-4E02-9255-6796E721ADEF@nokia.com> From: "Yujun Zhang (ZTE)" Date: Thursday, 11 January 2018 at 10:52 I have almost understood it thanks to your explanation. [Ifat] I liked the “almost” ;-) The confusion is mainly caused by the naming. I guess the main reason is that the scope evolves but the naming is not updated with it. For example 1. `vitrage_aggregated_state` actually applies for both resource state and alarm severity as defined in `value_properties`. So `vitrage_aggregated_values` could be a better name. [Ifat] For alarms we use ‘vitrage_aggregated_severity’ 2. For data source in static configuration, we may use `static.yaml` as a fallback. The name `default.yaml` will mislead user that it should be applied to data source configured in "types" but without a values configuration. [Ifat] We should decide whether we want the default values to apply also to “real” datasources. I think the risk is that people who write a new datasource will forget to add the values yaml file, and will believe that everything works fine with the default. Then, upon a specific failure (that doesn’t happen often) they will get UNDEFINED status. On the other hand, if they always get UNDEFINED, they will remember to add the correct yaml file. 3. The UNDEFINED value is named UNDEFINED_DATASOURCE = "undefined datasource", it is not a consistent type of severity and state enumeration. [Ifat] I didn’t understand this comment. 4. The behavior for data source defined in static without values configuration and data source defined in "types" without values configuration are inconsistent. The former will fallback to `default.yaml` but the latter will lead to undefined value. [Ifat] See my answer to #2. I know it is there for historical reasons and current developers may already get used to it, but it gives new contributors too many surprises. What do you think? Shall we amend them? On Tue, Jan 9, 2018 at 11:29 PM Afek, Ifat (Nokia - IL/Kfar Sava) > wrote: Hi, I agree that the code is confusing… This is part of a change that was made in order to support default states for static entities. For example, in the static configuration yaml file you can add entities of types ‘switch’ and ‘br-ex’. In the past, in order to support states for these new types, you needed to add switch.yaml and br-ex.yaml under /etc/vitrage/datasources_values, which you would most likely copy&paste from another datasource. Now, we have under /etc/vitrage/datasources_values a default.yaml file that is used for all static entities. Back to the code, I believe this is the logic: • If the datasource is part of ‘types’ (as defined in vitrage.conf) and has states configuration – use it. This is the normal behavior. • If the datasource is not part of ‘types’, we understand that it was defined in a static configuration file. 
Use the default states configuration. I assume that it is somehow handled in the first part of the if statement (I’m not so familiar with that code) • If neither is true – it means that the datasource is “real” and not static, and was defined in vitrage.conf types. And it also means that its states configuration is missing, so the state is UNDEFINED. And to your questions: 1. the data source is not defined -> the default states should be used 2. data source defined but state config not exist -> UNDEFINED state 3. data source defined, state config exist but the state is not found. -> I believe that somewhere in the first part of the if statement you will get UNDEFINED Hope that’s more clear now. It might be a good idea to add some comments to that function… Best Regards, Ifat. From: "Yujun Zhang (ZTE)" > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Tuesday, 9 January 2018 at 8:34 To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] [vitrage] rules in vitrage_aggregated_state() Forgot to paste the link to the related code: https://git.openstack.org/cgit/openstack/vitrage/tree/vitrage/entity_graph/mappings/datasource_info_mapper.py#n61 On Tue, Jan 9, 2018 at 2:34 PM Yujun Zhang (ZTE) > wrote: Hi root causers I have been inspecting the code about aggregated state recently and have a question regarding the rules. The "not" operator in the if clause confuses me. If it is not a configured data source, how do we apply the aggregation rules? It seems this is handled in else clause. if datasource_name in self.datasources_state_confs or \ datasource_name not in self.conf.datasources.types: ... else: self.category_normalizer[vitrage_category].set_aggregated_value( new_vertex, self.UNDEFINED_DATASOURCE) self.category_normalizer[vitrage_category].set_operational_value( new_vertex, self.UNDEFINED_DATASOURCE) There are some test case describing the expected behavior. But I couldn't understand the design philosophy behind it. What is expected when 1. the data source is not defined 2. data source defined but state config not exist 3. data source defined, state config exist but the state is not found. Could somebody shed some light on it? -- Yujun Zhang -- Yujun Zhang __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Yujun Zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Thu Jan 11 10:28:07 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 11 Jan 2018 10:28:07 +0000 (GMT) Subject: [openstack-dev] [qa] [api] [all] use gabbi and tempest with just YAML In-Reply-To: References: Message-ID: On Thu, 4 Jan 2018, Chris Dent wrote: > The gabbi-tempest plugin is responsible for getting authentication > and service catalog information (using standard tempest calls) from > keystone and creating a suite of environment variables (such as > PLACEMENT_SERVICE and COMPUTE_BASE). > > I have a sample file[5] that confirms resource provider and > allocation handling across the process of booting a single server. > It demonstrates some of the potential. Don't be too scared by the > noisy YAML anchors at the top, that's just an experiment to see what > can be done to manage URLs without having to know URLs. 
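To give a flavour of the format, a minimal suite might look something like
the sketch below. This is illustrative only: PLACEMENT_SERVICE is one of
the variables the plugin creates, but the SERVICE_TOKEN name and the exact
set of exported variables are assumptions here, so check the plugin for
the real names.

    defaults:
        request_headers:
            # the auth token gathered via the standard tempest calls,
            # exposed to gabbi through the environment
            x-auth-token: $ENVIRON['SERVICE_TOKEN']

    tests:

    - name: placement root responds
      GET: $ENVIRON['PLACEMENT_SERVICE']/
      status: 200

    - name: list resource providers
      GET: $ENVIRON['PLACEMENT_SERVICE']/resource_providers
      status: 200

Each test is one HTTP request plus assertions about the response; gabbi
does the YAML loading and the request plumbing, and the tempest plugin
supplies the credentials and service URLs through the environment.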
I did a bit more work on this and wrote it up:

https://anticdent.org/gabbi-tempest-experiment-1.html

--
Chris Dent                      (⊙_⊙')         https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From gord at live.ca  Thu Jan 11 14:05:20 2018
From: gord at live.ca (gordon chung)
Date: Thu, 11 Jan 2018 14:05:20 +0000
Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient
In-Reply-To: <76837ea5-884b-8d70-1f5f-ba0941eee185@suse.com>
References: <1515627800-sup-7550@lrrr.local> <76837ea5-884b-8d70-1f5f-ba0941eee185@suse.com>
Message-ID:

On 2018-01-11 01:48 AM, Thomas Bechtold wrote:
>
> It was, at least for openSUSE:
> https://build.opensuse.org/package/show/Cloud:OpenStack:Pike/openstack-ceilometer

ah, maybe just centos then... or i'm not searching the correct place. :)

--
gord

From thierry at openstack.org  Thu Jan 11 14:20:00 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Thu, 11 Jan 2018 15:20:00 +0100
Subject: [openstack-dev] [all] [tc] Community Goals for Rocky -- privsep
In-Reply-To: References:
Message-ID:

Emilien Macchi wrote:
> [...]
> Thierry mentioned privsep migration (done in Nova and Zun). (action,
> ping mikal about it).

It's not "done" in Nova: Mikal planned to migrate all of nova-compute
(arguably the largest service using rootwrap) to privsep during Queens,
but AFAICT it's still work in progress. Other projects like cinder and
neutron are using it.

If support in Nova is almost there, it would make a great Rocky goal to
get rid of the last rootwrap leftovers and deprecate it.

Mikal: could you give us a quick update of where you are?

Anyone interested in championing that as a goal?

--
Thierry Carrez (ttx)

From saverio.proto at switch.ch  Thu Jan 11 14:23:46 2018
From: saverio.proto at switch.ch (Saverio Proto)
Date: Thu, 11 Jan 2018 15:23:46 +0100
Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID
Message-ID: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch>

Hello,

we recently enabled JSON logging to feed a Kibana dashboard and look at
the logs with modern tooling.

However, it looks like some information is missing from the JSON logs in
our OpenStack Newton deployment.

The most important missing bit is the request-id, which we use to track an
event across multiple log files on multiple hosts.

Looking at the code, it really looks like the request ID is there for the
context formatter and not for the JSON formatter:

https://github.com/openstack/oslo.log/blob/master/oslo_log/formatters.py#L208

https://github.com/openstack/oslo.log/blob/master/oslo_log/formatters.py#L460

I am an operator and a very bad python developer, so can anyone confirm
that it is really missing in the code, and that it is not me configuring
stuff wrongly?

If the request-id is really missing from the JSON log formatter, should I
open a bug about this?

thank you

Saverio

--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.proto at switch.ch, http://www.switch.ch

http://www.switch.ch/stories

From lhinds at redhat.com  Thu Jan 11 15:33:36 2018
From: lhinds at redhat.com (Luke Hinds)
Date: Thu, 11 Jan 2018 15:33:36 +0000
Subject: [openstack-dev] [security] Security PTG Planning, x-project request for topics.
Message-ID:

Hello All,

I am seeking topics for the PTG from all projects, as this will be where
we try out our new form of being a SIG.
For this PTG, we hope to facilitate more cross project collaboration
topics now that we are a SIG, so if your project has a security need /
problem / proposal then please do use the security SIG room, where a
larger audience may be present to help solve problems and gain x-project
consensus.

Please see our PTG planning pad [0] where I encourage you to add to the
topics.

[0] https://etherpad.openstack.org/p/security-ptg-rocky

--
Luke Hinds
Security Project PTL

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From colleen at gazlene.net  Thu Jan 11 16:36:24 2018
From: colleen at gazlene.net (Colleen Murphy)
Date: Thu, 11 Jan 2018 17:36:24 +0100
Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs
Message-ID:

Hi everyone,

We have a governance review under debate[1] that we need the community's
help on. The debate is over what recommendation the TC should make to the
Interop team on where the tests it uses for the OpenStack trademark
program should be located, specifically those for the new add-on program
being introduced. Let me badly summarize:

A couple of years ago we issued a resolution[2] officially recommending
that the Interop team use solely tempest as its source of tests for
capability verification. The Interop team has always had the view that
the developers, being the people closest to the project they're creating,
are the best people to write tests verifying correct functionality, and
so the Interop team doesn't maintain its own test suite, instead
selecting tests from those written in coordination between the QA team
and the other project teams. These tests are used to validate clouds
applying for the OpenStack Powered tag, and since all of the projects
included in the OpenStack Powered program already had tests in tempest,
this was a natural fit.

When we consider adding new trademark programs comprising other projects,
the test source is less obvious. Two examples are designate, which has
never had tests in the tempest repo, and heat, which recently had its
tests removed from the tempest repo.

So far the patch proposes three options:

1) All trademark-related tests should go in the tempest repo, in
accordance with the original resolution. This would mean that even
projects that have never had tests in tempest would now have to add at
least some of their black-box tests to tempest.

The value of this option is that it centralizes the tests used for the
Interop program in a location where interop-minded folks from the QA team
can control them. The downside is that projects that so far have avoided
having a dependency on tempest will now lose some control over the
black-box tests that they use for functional and integration testing that
would now also be used for trademark certification. There's also concern
for the review bandwidth of the QA team - we can't expect the QA team to
be continually responsible for an ever-growing list of projects and their
trademark tests.

2) All trademark-related tests for *add-on projects* should be sourced
from plugins external to tempest.

The value of this option is it allows project teams to retain control
over these tests. The potential problem with it is that individual
project teams are not necessarily reviewing test changes with an eye for
interop concerns and so could inadvertently change the behavior of the
trademark-verification tools.

3) All trademark-related tests should go in a single separate tempest
plugin.
This has the value of giving the QA and Interop teams control over interop-related tests while also making clear the distinction between tests used for trademark verification and tests used for CI. Matt's argument against this is that there actually is very little distinction between those two cases, and that a given test could have many different applications. Other ideas that have been thrown around are: * Maintaining a branch in the tempest repo that Interop tests are pulled from. * Tagging Interop-related tests with decorators to make it clear that they need to be handled carefully. At the heart of the issue is the perception that projects that keep their integration tests within the tempest tree are somehow blessed, maybe by the QA team or by the TC. It would be nice to try to clarify what technical and political reasons we have for why different projects have tests in different places - review bandwidth of the QA team, ownership/control by the project teams, technical interdependency between certain projects, or otherwise. Ultimately, as Jeremy said in the comments on the resolution patch, the recommendation should be one that works best for the QA and Interop teams. So far we've heard from Matt and Mark expressing moderate support for option 2. We'd like to hear more from those teams about how they see this working, especially with regard to concerns about the quality and stability standards that out-of-tree tests may be held to. We additionally need input from the whole community on how maintaining trademark-related tests in tempest will affect you if you don't already have your tests there. We'd especially like to address any perceptions of favoritism or exclusionism that stem from these issues. And to quickly clear up one detail before it makes it onto this thread, the Queens Community Goal about splitting tempest plugins out of the main project's tree[3] is entirely about addressing technical problems related to packaging for existing tempest plugins, it's not a decree about what should live within the tempest repository nor does it have anything to do with the Interop program. As I'm not deeply steeped in the history of either the Interop or QA teams I am sure I've misrepresented some details here, I'm sorry about that. But we'd like to get this resolution moving forward and we're currently stuck, so this thread is intended to gather enough community input to get unstuck and avoid letting this proposal become stale. Please respond to this thread or comment on the resolution proposal[1] if you have any thoughts. Colleen [1] https://review.openstack.org/#/c/521602 [2] https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html [3] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html From sean.mcginnis at gmx.com Thu Jan 11 16:52:43 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 11 Jan 2018 10:52:43 -0600 Subject: [openstack-dev] [release] Release countdown for week R-6, January 13 - 19 Message-ID: <20180111165242.GA10326@sm-xps> Development Focus ----------------- Teams should be focused on implementing planned work. Work should be wrapping up on non-client libraries to meet the lib deadline Thursday, the 18th. General Information ------------------- We are now getting close to the end of the cycle. The non-client library (typically any lib other than the "python-$PROJECTclient" deliverables) deadline is 18 January, followed quickly the next Thursday with the final client library release. 
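As a rough reminder of what such a request looks like, a deliverable file
in the releases repo with the stable branch included would be something
along these lines (a sketch only - the deliverable name, version, and
hash below are placeholders, not a real release):

    launchpad: example-lib
    release-model: cycle-with-intermediary
    type: library
    releases:
      - version: 1.2.0
        projects:
          - repo: openstack/example-lib
            hash: 0123456789abcdef0123456789abcdef01234567
    branches:
      - name: stable/queens
        location: 1.2.0

Including the branches section with the release review saves a second
round trip through the release team's queue.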
Releases for critical fixes will be allowed after this point, but we will be much more restrictive about what is allowed if there are more lib release requests after this point. Please keep this in mind. When requesting these library releases, you should also include the stable branching request with the review (as an example, see the "branches" section here: http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike/os-brick.yaml#n2) Upcoming Deadlines & Dates -------------------------- Final non-client library release deadline: January 18 Final client library release deadline: January 25 Queens-3 Milestone: January 25 Start of Rocky PTL nominations: January 29 Start of Rocky PTL election: February 7 Rocky PTG in Dublin: Week of February 26, 2018 -- Sean McGinnis (smcginnis) From cdent+os at anticdent.org Thu Jan 11 17:29:26 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 11 Jan 2018 17:29:26 +0000 (GMT) Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Wide-ranging conversations in the API-SIG meeting today [7]. We started out discussing techniques for driving the adoption of the guidelines published by the group. Two ideas received most of the attention: * Provide guidance on the tools available to make it easier to align new API projects with the guidelines, from the outset. The idea of an example or cookbook project was mooted, but the number of options, variables, and opinions that such an effort would face killed this idea. Better, perhaps, are recommend tools for addressing common problems. * Making use of the OpenStack-wide goals [8] process to encourage working towards consistency. Monty has made a proposal [9] based on the rfc5988 link guidance newly listed below. Then we talked about work in progress to establish a common health check system across the OpenStack services. There's an oslo-spec [10] in progress. The API-SIG's involvement is to help make sure the HTTP side of things is handled well. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None this week. # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. * Expand note about rfc5988 link header https://review.openstack.org/#/c/531914/ # Guidelines Currently Under Review [3] * Add guideline on exposing microversions in SDKs https://review.openstack.org/#/c/532814/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 * WIP: Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. 
In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] http://eavesdrop.openstack.org/meetings/api_sig/2018/api_sig.2018-01-11-15.59.html [8] https://governance.openstack.org/tc/goals/index.html [9] https://review.openstack.org/#/c/532627/ [10] https://review.openstack.org/#/c/531456/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Thu Jan 11 18:54:52 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 11 Jan 2018 13:54:52 -0500 Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> Message-ID: <1515696336-sup-7054@lrrr.local> Excerpts from Saverio Proto's message of 2018-01-11 15:23:46 +0100: > Hello, > > we recently enabled the JSON logging to feed a Kibana dashboard and look > at the logs with modern tooling. > > however it looks like in our Openstack Newton deployment that some > information in the JSON files is missing. > > most important missing bit is the request-id, that we use to track an > event across multiple log files on multiple hosts. > > Looking at the code it really looks like the request ID is there for the > context formatter and not for the json formatter. > > https://github.com/openstack/oslo.log/blob/master/oslo_log/formatters.py#L208 > > https://github.com/openstack/oslo.log/blob/master/oslo_log/formatters.py#L460 > > I am an operator and a very bad python developer, so can anyone confirm > that is really missing in the code, and it is not me configuring stuff > wrongly ? > > If it is really missing the request-id in the json log formatter, should > I open a bug about this ? > > thank you > > Saverio > The newton version of the JSONFormatter adds all of the values from the context to the log record: http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/formatters.py?h=newton-eol#n142 That should include the request_id. Which service's logs are missing the request_id? Chaining request_id values from one service to the next was a separate piece of work, and I don't remember off the top of my head when that was added. Perhaps someone else can. I think Sean Dague drove a lot of that work. Doug From emilien at redhat.com Thu Jan 11 20:57:01 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 11 Jan 2018 12:57:01 -0800 Subject: [openstack-dev] [qa] [tc] Need champion for "cold upgrades capabilities" goal Message-ID: Some projects are still not testing cold upgrades and therefore don't have the "supports-upgrade" tag. 
Doug

From emilien at redhat.com  Thu Jan 11 20:57:01 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 11 Jan 2018 12:57:01 -0800
Subject: [openstack-dev] [qa] [tc] Need champion for "cold upgrades capabilities" goal
Message-ID:

Some projects are still not testing cold upgrades and therefore don't
have the "supports-upgrade" tag.

https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html

This goal would mostly benefit the operators community as we would
continue to ensure OpenStack can be upgraded and it's something that
we actually test in the gate.
In terms of actions, we would need to run grenade / upgrade jobs for
the projects which don't have this tag yet, so it's mostly QA work
(devstack, grenade, zuul layout).
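For a project starting from scratch, the zuul layout piece is usually just
adding a grenade job to the project's pipelines. As a sketch (the job name
here is made up, each project would use its own):

    - project:
        check:
          jobs:
            - myproject-grenade
        gate:
          jobs:
            - myproject-grenade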
We're now looking for someone willing to lead this effort. Someone with a
little bit of experience on QA and upgrades would work. However, our
community is strong and we always help each other, so it's no big deal if
someone volunteers without all the knowledge.
A Champion is someone who coordinates the work to make a goal happen, and
is not supposed to do all the work. The Champion gets support from the
whole community at any time.

Please step up if you're willing to take this role!

Thanks,
--
Emilien Macchi

From seanroberts66 at gmail.com  Thu Jan 11 21:21:37 2018
From: seanroberts66 at gmail.com (sean roberts)
Date: Thu, 11 Jan 2018 21:21:37 +0000
Subject: [openstack-dev] Retirement of astara repos?
In-Reply-To:
References: <572FF9CF-9AB5-4CBA-A4C8-26E7A012309E@gmx.com>
Message-ID:

It's my understanding that the development has been moved back into
DreamHost. I stopped working on it over two years ago.

I'll ping them to double check that retiring the repos is what they want.

Thanks for asking first!

On Wed, Jan 10, 2018 at 21:56 Swapnil Kulkarni wrote:

> On Thu, Jan 11, 2018 at 3:50 AM, Sean McGinnis
> wrote:
> > While going through various repos looking at things to be cleaned up, I
> noticed the last commit for openstack/astara
> > was well over a year ago. Based on this and the little bit I have
> followed with this project, it's my understanding that
> > there is no further work planned with Astara.
> >
> > Should these repos be retired at this point? Or is there a reason to
> keep things around?
> >
> > Thanks,
> >
> > Sean McGinnis (smcginnis)
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> Sean,
>
> There have been a set of e-mails from Andreas in Dec for the inactive
> projects [1] [2] [3] [4] with little or no response. Just FYI.
>
> [1] Astara:
> http://lists.openstack.org/pipermail/openstack-dev/2017-December/125350.html
> [2] Cerberor:
> http://lists.openstack.org/pipermail/openstack-dev/2017-December/125351.html
> [3] Evoque:
> http://lists.openstack.org/pipermail/openstack-dev/2017-December/125352.html
> [4] puppet-apps-site:
> http://lists.openstack.org/pipermail/openstack-dev/2017-December/125360.html
>
> ~coolsvap
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
--
~sean
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From edmondsw at us.ibm.com  Thu Jan 11 21:33:42 2018
From: edmondsw at us.ibm.com (William M Edmonds)
Date: Thu, 11 Jan 2018 16:33:42 -0500
Subject: [openstack-dev] [nova] Working toward Queens feature freeze and RC1
In-Reply-To:
References:
Message-ID:

> From: William M Edmonds/Raleigh/IBM
> To: "OpenStack Development Mailing List \(not for usage questions\)"
>
> Date: 01/08/2018 03:11 PM
> Subject: Re: [openstack-dev] [nova] Working toward Queens feature
> freeze and RC1
>
> > From: Matt Riedemann
> > To: "OpenStack Development Mailing List (not for usage questions)"
> >
> > Date: 01/03/2018 07:03 PM
> > Subject: [openstack-dev] [nova] Working toward Queens feature freeze and RC1
> >
... snip ...
> > The rest of the blueprints are tracked here:
> >
> > https://etherpad.openstack.org/p/nova-queens-blueprint-status
>
> I updated that etherpad with the latest status for the powervm
> blueprint. Should have 2 of the 3 remaining patches ready for review
> in the next day or two, and the last later in the week.

All of the powervm patches are ready for core reviews, and the etherpad
has been updated accordingly. Thanks in advance!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sean.mcginnis at gmx.com  Thu Jan 11 21:43:50 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Thu, 11 Jan 2018 15:43:50 -0600
Subject: [openstack-dev] [Release-job-failures] release-post job for openstack/releases failed
In-Reply-To:
References:
Message-ID: <20180111214350.GA24283@sm-xps>

On Thu, Jan 11, 2018 at 09:08:38PM +0000, zuul at openstack.org wrote:
> Build failed.
>
> - tag-releases http://logs.openstack.org/a5/a52aa0b2ad06a52e50be8879f9256576ceceb91c/release-post/tag-releases/fecf776/ : SUCCESS in 3m 53s
> - publish-static http://logs.openstack.org/a5/a52aa0b2ad06a52e50be8879f9256576ceceb91c/release-post/publish-static/cee3a5f/ : POST_FAILURE in 5m 06s
>

Already discussed a little with fungi and team, but to make sure it's
captured here, it failed on an rsync error. A test of ssh'ing into the
server succeeded, so this may be a transient failure.

From Louie.Kwan at windriver.com  Thu Jan 11 21:52:48 2018
From: Louie.Kwan at windriver.com (Kwan, Louie)
Date: Thu, 11 Jan 2018 21:52:48 +0000
Subject: [openstack-dev] [all] How to use libxml2 with tox
Message-ID: <47EFB32CD8770A4D9590812EE28C977E961D7E8E@ALA-MBC.corp.ad.wrs.com>

I would like to use libxml2 but am having issues with tox.

What needs to be included in the requirements.txt file, etc.?

Any tip is much appreciated.

Thanks.
Louie

From mark at mcclain.xyz  Thu Jan 11 21:55:03 2018
From: mark at mcclain.xyz (Mark McClain)
Date: Thu, 11 Jan 2018 16:55:03 -0500
Subject: [openstack-dev] Retirement of astara repos?
In-Reply-To: <572FF9CF-9AB5-4CBA-A4C8-26E7A012309E@gmx.com>
References: <572FF9CF-9AB5-4CBA-A4C8-26E7A012309E@gmx.com>
Message-ID: <0DE3CB09-5CA1-4557-9158-C40F0FC37E6E@mcclain.xyz>

Sean, Andreas-

Sorry I missed Andreas' message earlier in December about retiring astara.
Everyone is correct that development stopped a good while ago. We
attempted in Barcelona to find others in the community to take over the
day-to-day management of the project. Unfortunately, nothing sustained
resulted from that session.

I've intentionally delayed archiving the repos because background
conversations around restarting active development for some pieces bubble
up from time-to-time. I'll contact those I know were interested and try
for a resolution to propose before the PTG.
mark > On Jan 10, 2018, at 5:20 PM, Sean McGinnis wrote: > > While going through various repos looking at things to be cleaned up, I noticed the last commit for openstack/astara > was well over a year ago. Based on this and the little bit I have followed with this project, it’s my understanding that > there is no further work planned with Astara. > > Should these repos be retired at this point? Or is there a reason to keep things around? > > Thanks, > > Sean McGinnis (smcginnis) From sean.mcginnis at gmx.com Thu Jan 11 21:57:01 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 11 Jan 2018 15:57:01 -0600 Subject: [openstack-dev] Retirement of astara repos? In-Reply-To: <0DE3CB09-5CA1-4557-9158-C40F0FC37E6E@mcclain.xyz> References: <572FF9CF-9AB5-4CBA-A4C8-26E7A012309E@gmx.com> <0DE3CB09-5CA1-4557-9158-C40F0FC37E6E@mcclain.xyz> Message-ID: <20180111215701.GA30356@sm-xps> On Thu, Jan 11, 2018 at 04:55:03PM -0500, Mark McClain wrote: > Sean, Andreas- > > Sorry I missed Andres’ message earlier in December about retiring astara. Everyone is correct that development stopped a good while ago. We attempted in Barcelona to find others in the community to take over the day-to-day management of the project. Unfortunately, nothing sustained resulted from that session. > > I’ve intentionally delayed archiving the repos because of background conversations around restarting active development for some pieces bubble up from time-to-time. I’ll contact those I know were interested and try for a resolution to propose before the PTG. > > mark Great - yeah, no rush on this. I was just looking at things that could be cleaned up, but it's just been general housekeeping. We can wait and see how things go. Thanks! Sean From mriedemos at gmail.com Thu Jan 11 22:52:00 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 11 Jan 2018 16:52:00 -0600 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: Message-ID: On 1/11/2018 10:36 AM, Colleen Murphy wrote: > 1) All trademark-related tests should go in the tempest repo, in accordance > with the original resolution. This would mean that even projects that have > never had tests in tempest would now have to add at least some of their > black-box tests to tempest. > > The value of this option is that centralizes tests used for the Interop program > in a location where interop-minded folks from the QA team can control them. The > downside is that projects that so far have avoided having a dependency on > tempest will now lose some control over the black-box tests that they use for > functional and integration that would now also be used for trademark > certification. > There's also concern for the review bandwidth of the QA team - we can't expect > the QA team to be continually responsible for an ever-growing list of projects > and their trademark tests. How many tests are we talking about for designate and heat? Half a dozen? A dozen? More? If it's just a couple of tests per project it doesn't seem terrible to have them live in Tempest so you get the "interop eye" on reviews, as noted in your email. If it's a considerable amount, then option 2 seems the best for the majority of parties. 
--

Thanks,

Matt

From dan.dyer00 at gmail.com  Thu Jan 11 23:05:11 2018
From: dan.dyer00 at gmail.com (Daniel Dyer)
Date: Thu, 11 Jan 2018 16:05:11 -0700
Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient
In-Reply-To:
References: <1515627800-sup-7550@lrrr.local> <76837ea5-884b-8d70-1f5f-ba0941eee185@suse.com>
Message-ID: <0849E80C-8F4D-46E0-9BFE-F1A3ABFACF4A@gmail.com>

Gordon,

My understanding was that the API is not officially deprecated until
queens. Is this not the case?

Dan

> On Jan 11, 2018, at 7:05 AM, gordon chung wrote:
>
> On 2018-01-11 01:48 AM, Thomas Bechtold wrote:
>>
>> It was at least for openSUSE:
>> https://build.opensuse.org/package/show/Cloud:OpenStack:Pike/openstack-ceilometer
>
> ah, maybe just centos then... or i'm not searching the correct place. :)
>
> --
> gord
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From cboylan at sapwetik.org  Thu Jan 11 23:09:34 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Thu, 11 Jan 2018 15:09:34 -0800
Subject: [openstack-dev] [all] How to use libxml2 with tox
In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E961D7E8E@ALA-MBC.corp.ad.wrs.com>
References: <47EFB32CD8770A4D9590812EE28C977E961D7E8E@ALA-MBC.corp.ad.wrs.com>
Message-ID: <1515712174.1616913.1232526480.2B6A8680@webmail.messagingengine.com>

On Thu, Jan 11, 2018, at 1:52 PM, Kwan, Louie wrote:
> I would like to use libxml2 but am having issues with tox.
>
> What needs to be included in the requirements.txt file, etc.?
>
> Any tip is much appreciated.

You likely need to make sure that libxml2 header packages are installed so
that the python package can link against libxml2. On Debian and Ubuntu I
think the package is libxml2-dev, and libxml2-devel on SUSE.

This isn't something that you would add to your requirements.txt as it
would be a system dependency. To get it installed on our test nodes you
can add it to the project's bindep.txt file.
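As a rough sketch (double check the exact package names for your targets),
the bindep.txt entries would look something like:

    libxml2-dev [platform:dpkg]
    libxml2-devel [platform:rpm]

The bracketed selectors limit each line to the matching package manager,
so each test node only tries to install the name its distro actually has.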
Hope this helps,
Clark

From iwienand at redhat.com  Fri Jan 12 01:42:49 2018
From: iwienand at redhat.com (Ian Wienand)
Date: Fri, 12 Jan 2018 12:42:49 +1100
Subject: [openstack-dev] [qa][requirements] CentOS libvirt versus newton/ocata libvirt-python
Message-ID: <73416143-54ef-5920-0b39-bd49faded1e1@redhat.com>

Hi,

So I guess since CentOS included libvirt 3.2 (7-1708, or around RHEL 7.4),
it's been incompatible with libvirt-python requirements of 2.1.0 in newton
[1] and 2.5.0 in ocata [2] (pike, at 3.5.0, works).

Do we want to do anything about this? I can think of several options

* bump the libvirt-python versions on older branches

* Create an older centos image (can't imagine we have the person bandwidth
  to maintain this)

* Hack something in devstack (seems rather pointless to test something so
  far outside deployments).

* Turn off CentOS testing for old devstack branches

None are particularly appealing...

(I'm sorry if this has been discussed, I have great déjà vu about it,
maybe we were talking about it at summit or something).

-i

[1] http://logs.openstack.org/48/531248/2/check/legacy-tempest-dsvm-neutron-full-centos-7/80fa903/logs/devstacklog.txt.gz#_2018-01-09_05_14_40_960
[2] http://logs.openstack.org/50/531250/2/check/legacy-tempest-dsvm-neutron-full-centos-7/1c711f5/logs/devstacklog.txt.gz#_2018-01-09_20_43_08_833

From masayuki.igawa at gmail.com  Fri Jan 12 03:13:42 2018
From: masayuki.igawa at gmail.com (Masayuki Igawa)
Date: Fri, 12 Jan 2018 12:13:42 +0900
Subject: [openstack-dev] [qa] [tc] Need champion for "cold upgrades capabilities" goal
In-Reply-To:
References:
Message-ID: <20180112031342.jubnjsx5jazbkmvf@fastmail.com>

Hi Emilien,

I'd love to take this role! I have some tempest experience but not QA work
which you mentioned (devstack, grenade, zuul layout) that much. However, I
think it will be a very good opportunity to step up.

So, for me, the next step might be to know grenade / upgrade jobs / plugin
mechanism deeply. And I suppose we need some documentation about "how to
support upgrade" based on the requirements for the other projects (and me
:).

Any suggestion and/or comments are welcome!

-- Masayuki

On 01/11, Emilien Macchi wrote:
> Some projects are still not testing cold upgrades and therefore don't
> have the "supports-upgrade" tag.
>
> https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html
>
> This goal would mostly benefit the operators community as we would
> continue to ensure OpenStack can be upgraded and it's something that
> we actually test in the gate.
> In terms of actions, we would need to run grenade / upgrade jobs for
> the projects which don't have this tag yet, so it's mostly QA work
> (devstack, grenade, zuul layout).
>
> We're now looking for someone willing to lead this effort. Someone
> with a little bit of experience on QA and upgrades would work.
> However, our community is strong and we always help each other, so it's
> no big deal if someone volunteers without all the knowledge.
> A Champion is someone who coordinates the work to make a goal happen,
> and is not supposed to do all the work. The Champion gets support from
> the whole community at any time.
>
> Please step up if you're willing to take this role!
>
> Thanks,
> --
> Emilien Macchi
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL:

From prometheanfire at gentoo.org  Fri Jan 12 03:53:29 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Thu, 11 Jan 2018 21:53:29 -0600
Subject: [openstack-dev] [qa][requirements] CentOS libvirt versus newton/ocata libvirt-python
In-Reply-To: <73416143-54ef-5920-0b39-bd49faded1e1@redhat.com>
References: <73416143-54ef-5920-0b39-bd49faded1e1@redhat.com>
Message-ID: <20180112035329.nj2fsvtjanhlw4nz@gentoo.org>

On 18-01-12 12:42:49, Ian Wienand wrote:
> Hi,
>
> So I guess since CentOS included libvirt 3.2 (7-1708, or around RHEL
> 7.4), it's been incompatible with libvirt-python requirements of 2.1.0
> in newton [1] and 2.5.0 in ocata [2] (pike, at 3.5.0, works).
>
> Do we want to do anything about this?
I can think of several options > > * bump the libvirt-python versions on older branches > > * Create an older centos image (can't imagine we have the person > bandwidth to maintain this) > > * Hack something in devstack (seems rather pointless to test > something so far outside deployments). > > * Turn off CentOS testing for old devstack branches > > None are particularly appealing... > > (I'm sorry if this has been discussed, I have great déjà vu about it, > maybe we were talking about it at summit or something). > I thought I remembered something about it, but couldn't find it in the archives. First, about newton, it's dead (2017-10-11). Next, about ocata, it looks like it can support newer libvirt, but just because a distro updated a library doesn't mean we have to update. IIRC, for ubuntu they use cloud-archives to get the right version of libvirt, does something like that exist for centos/redhat? -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From iwienand at redhat.com Fri Jan 12 04:18:02 2018 From: iwienand at redhat.com (Ian Wienand) Date: Fri, 12 Jan 2018 15:18:02 +1100 Subject: [openstack-dev] [qa][requirements] CentOS libvirt versus newton/ocata libvirt-python In-Reply-To: <20180112035329.nj2fsvtjanhlw4nz@gentoo.org> References: <73416143-54ef-5920-0b39-bd49faded1e1@redhat.com> <20180112035329.nj2fsvtjanhlw4nz@gentoo.org> Message-ID: <89067ca5-8678-f3c0-98f0-d478c739e51a@redhat.com> On 01/12/2018 02:53 PM, Matthew Thode wrote: > First, about newton, it's dead (2017-10-11). Yeah, there were a few opt-outs, which is why I think devstack still runs it. Not worth a lot of effort. > Next, about ocata, it looks like it can support newer libvirt, but > just because a distro updated a library doesn't mean we have to > update. IIRC, for ubuntu they use cloud-archives to get the right > version of libvirt, does something like that exist for > centos/redhat? Well cloud-archives is ports of more recent things backwards, whereas I think we're in a situation of having too recent libraries in the base platform. The CentOS 7.3 v 7.4 situation is a little more subtle than Trusty v Xenial, say, but fundamentally the same I guess. The answer may be "Ocata not supported on 7.4". p.s. I hope I'm understanding the python-libvirt compat story correctly. AIUI any newer python-binding release will build against older versions of libvirt. But an old version of python-libvirt may not build against a newer release of the C libraries? -i From emilien at redhat.com Fri Jan 12 05:29:31 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 11 Jan 2018 21:29:31 -0800 Subject: [openstack-dev] [qa] [tc] Need champion for "cold upgrades capabilities" goal In-Reply-To: <20180112031342.jubnjsx5jazbkmvf@fastmail.com> References: <20180112031342.jubnjsx5jazbkmvf@fastmail.com> Message-ID: On Thu, Jan 11, 2018 at 7:13 PM, Masayuki Igawa wrote: > Hi Emilien, > > I'd love to take this role! Wow, this is AWESOME. > I have some tempest experience but not QA work which you mentioned > (devstack, grenade, zuul layout) that much. However, I think it will > be a very good opportunity to step-up. > > So, for me, the next step might be to know grenade / upgrade jobs / > plugin mechanizm deeply. And I suppose we need some documentation > about "how to support upgrade" based on the requirements for the other > projects (and me :). 
> > Any suggestion and/or comments are welcome! OK so the steps are documented here: https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html#requirements So first of all, it only affects projects that are "services" (e.g. Tacker). The main requirement is to have grenade scripts in-tree for the projects which miss the tag now. Look at Ironic which already has the tag: https://github.com/openstack/ironic/tree/master/devstack/upgrade At the same time, we need to change the zuul layout for the projects which don't have the tag yet. Still example with Ironic: https://github.com/openstack/ironic/blob/master/zuul.d/project.yaml#L6-L7 Once a project has a grenade job (voting) that runs Grenade with the specs described here: https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html#requirements - the project can apply to the tag and the goal is complete once the tag is approved. I think the first step is to make sure each project which doesn't have the tag has to understand what the tag means, in term of requirements. If they say: "Yeah, our project can already do that" (because they did some manual testing etc): then the work is purely grenade/zuul-layout. "No, we don't support upgrades at all": then the goal might take one or two cycles, depending on the amount of work. This is the list of service projects that don't have the tag yet: aodh blazar cloudkitty congress dragonflow ec2api freezer karbor kuryr masakari mistral monasca murano octavia panko searchlight senlin tacker trove vitrage watcher zaqar zun (I might have miss some, please fix it). That's it for now, I hope it helped, please let us know if more questions. Thanks again for stepping up and you have all our support at any time. [...] -- Emilien Macchi From emilien at redhat.com Fri Jan 12 05:31:17 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 11 Jan 2018 21:31:17 -0800 Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient In-Reply-To: <0849E80C-8F4D-46E0-9BFE-F1A3ABFACF4A@gmail.com> References: <1515627800-sup-7550@lrrr.local> <76837ea5-884b-8d70-1f5f-ba0941eee185@suse.com> <0849E80C-8F4D-46E0-9BFE-F1A3ABFACF4A@gmail.com> Message-ID: On Thu, Jan 11, 2018 at 3:05 PM, Daniel Dyer wrote: > My understanding was that the API is not officially deprecated until queens. Is this not the case? https://docs.openstack.org/releasenotes/ceilometer/ocata.html#deprecation-notes https://docs.openstack.org/releasenotes/ceilometer/unreleased.html#upgrade-notes (queens) Ceilometer API was deprecated in Ocata and removed in Queens. -- Emilien Macchi From gord at live.ca Fri Jan 12 05:32:19 2018 From: gord at live.ca (gordon chung) Date: Fri, 12 Jan 2018 05:32:19 +0000 Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient In-Reply-To: <0849E80C-8F4D-46E0-9BFE-F1A3ABFACF4A@gmail.com> References: <1515627800-sup-7550@lrrr.local> <76837ea5-884b-8d70-1f5f-ba0941eee185@suse.com> <0849E80C-8F4D-46E0-9BFE-F1A3ABFACF4A@gmail.com> Message-ID: hey Dan, On 2018-01-11 06:05 PM, Daniel Dyer wrote: > My understanding was that the API is not officially deprecated until queens. Is this not the case? not quite. we removed the API permanently in queens. it was actually deprecated back in 2016[1] officially. we've unofficially/transparently been telling people to switch to Gnocchi (or whatever target you want to put into publisher) for longer than that as the legacy api/storage hasn't been touched meaningfully since 2015. 
At the same time, we need to change the zuul layout for the projects which
don't have the tag yet.
Again, an example from Ironic:
https://github.com/openstack/ironic/blob/master/zuul.d/project.yaml#L6-L7

Once a project has a grenade job (voting) that runs Grenade with the specs
described here:
https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html#requirements
- the project can apply for the tag and the goal is complete once the tag
is approved.

I think the first step is to make sure each project which doesn't have
the tag understands what the tag means, in terms of requirements.

If they say:

"Yeah, our project can already do that" (because they did some manual
testing etc): then the work is purely grenade/zuul-layout.

"No, we don't support upgrades at all": then the goal might take one or
two cycles, depending on the amount of work.

This is the list of service projects that don't have the tag yet:
aodh
blazar
cloudkitty
congress
dragonflow
ec2api
freezer
karbor
kuryr
masakari
mistral
monasca
murano
octavia
panko
searchlight
senlin
tacker
trove
vitrage
watcher
zaqar
zun

(I might have missed some, please fix it).

That's it for now, I hope it helped; please let us know if you have more
questions.
Thanks again for stepping up and you have all our support at any time.

[...]
--
Emilien Macchi

From emilien at redhat.com  Fri Jan 12 05:31:17 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 11 Jan 2018 21:31:17 -0800
Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient
In-Reply-To: <0849E80C-8F4D-46E0-9BFE-F1A3ABFACF4A@gmail.com>
References: <1515627800-sup-7550@lrrr.local> <76837ea5-884b-8d70-1f5f-ba0941eee185@suse.com> <0849E80C-8F4D-46E0-9BFE-F1A3ABFACF4A@gmail.com>
Message-ID:

On Thu, Jan 11, 2018 at 3:05 PM, Daniel Dyer wrote:
> My understanding was that the API is not officially deprecated until queens. Is this not the case?

https://docs.openstack.org/releasenotes/ceilometer/ocata.html#deprecation-notes
https://docs.openstack.org/releasenotes/ceilometer/unreleased.html#upgrade-notes (queens)

Ceilometer API was deprecated in Ocata and removed in Queens.
--
Emilien Macchi

From gord at live.ca  Fri Jan 12 05:32:19 2018
From: gord at live.ca (gordon chung)
Date: Fri, 12 Jan 2018 05:32:19 +0000
Subject: [openstack-dev] [ceilometer] Retiring ceilometerclient
In-Reply-To: <0849E80C-8F4D-46E0-9BFE-F1A3ABFACF4A@gmail.com>
References: <1515627800-sup-7550@lrrr.local> <76837ea5-884b-8d70-1f5f-ba0941eee185@suse.com> <0849E80C-8F4D-46E0-9BFE-F1A3ABFACF4A@gmail.com>
Message-ID:

hey Dan,

On 2018-01-11 06:05 PM, Daniel Dyer wrote:
> My understanding was that the API is not officially deprecated until queens. Is this not the case?

not quite. we removed the API permanently in queens. it was actually
deprecated back in 2016[1] officially. we've unofficially/transparently
been telling people to switch to Gnocchi (or whatever target you want to
put into publisher) for longer than that, as the legacy api/storage hasn't
been touched meaningfully since 2015.

given the realities of the project and OpenStack, our project was just
more proactive in culling stale and/or unusable code. as it stands,
ceilometer solely generates/normalises data about openstack resources and
publishes data to consumers. other services are required to leverage and
add value to the data.

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-October/105042.html

cheers,
--
gord

From emilien at redhat.com  Fri Jan 12 05:34:21 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 11 Jan 2018 21:34:21 -0800
Subject: [openstack-dev] [qa] [tc] Need champion for "cold upgrades capabilities" goal
In-Reply-To:
References: <20180112031342.jubnjsx5jazbkmvf@fastmail.com>
Message-ID:

Sorry I forgot an important action: propose the goal for Rocky :-)

An example with https://review.openstack.org/#/c/532361/ (from Sean).

Thanks,

On Thu, Jan 11, 2018 at 9:29 PM, Emilien Macchi wrote:
> On Thu, Jan 11, 2018 at 7:13 PM, Masayuki Igawa
> wrote:
>> Hi Emilien,
>>
>> I'd love to take this role!
>
> Wow, this is AWESOME.
> -- > Emilien Macchi -- Emilien Macchi From glongwave at gmail.com Fri Jan 12 05:59:40 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Fri, 12 Jan 2018 13:59:40 +0800 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky In-Reply-To: References: Message-ID: I would like to help the goal - enable mutable configuration, would like to post a patch for that later. 2018-01-10 2:37 GMT+08:00 Emilien Macchi : > As promised, let's continue the discussion and move things forward. > > This morning Thierry brought the discussion during the TC office hour > (that I couldn't attend due to timezone): > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ > latest.log.html#t2018-01-09T09:18:33 > > Some outputs: > > - One goal has been proposed so far. > > Right now, we only have one goal proposal: Storyboard Migration. There > are some concerns about the ability to achieve this goal in 6 months. > At that point, we think it would be great to postpone the goal to S > cycle, continue the progress (kudos to Kendall) and fine other goals > for Rocky. > > > - We still have a good backlog of goals, we're just missing champions. > > https://etherpad.openstack.org/p/community-goals > > Chris brought up "pagination links in collection resources" in api-wg > guidelines theme. He said in the past this goal was more a "should" > than a "must". > Thierry mentioned privsep migration (done in Nova and Zun). (action, > ping mikal about it). > Thierry also brought up the version discovery (proposed by Monty). > Flavio proposed mutable configuration, which might be very useful for > operators. > He also mentioned that IPv6 support goal shouldn't be that far from > done, but we're currently lacking in CI jobs that test IPv6 > deployments (question for infra/QA, can we maybe document the gap so > we can run some gate jobs on ipv6 ?) > (personal note on that one, since TripleO & Puppet OpenStack CI > already have IPv6 jobs, we can indeed be confident that it shouldn't > be that hard to complete this goal in 6 months, I guess the work needs > to happen in the projects layouts). > Another interesting goal proposed by Thierry, also useful for > operators, is to move more projects to assert:supports-upgrade tag. > Thierry said we are probably not that far from this goal, but the > major lack is in testing. > Finally, another "simple" goal is to remove mox/mox3 (Flavio said most > of projects don't use it anymore already). > > With that said, let's continue the discussion on these goals, see > which ones can be actionable and find champions. > > - Flavio asked how would it be perceived if one cycle wouldn't have at > least one community goal. > > Thierry said we could introduce multi-cycle goals (Storyboard might be > a good candidate). > Chris and Thierry thought that it would be a bad sign for our > community to not have community goals during a cycle, "loss of > momentum" eventually. > > > Thanks for reading so far, > > On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi > wrote: > > On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi > wrote: > > [...] > >> Suggestions are welcome: > >> - on the mailing-list, in a new thread per goal [all] [tc] Proposing > >> goal XYZ for Rocky > >> - on Gerrit in openstack/governance like Kendall did. > > > > Just a fresh reminder about Rocky goals. > > A few questions that we can ask ourselves: > > > > 1) What common challenges do we have? > > > > e.g. Some projects don't have mutable configuration or some projects > > aren't tested against IPv6 clouds, etc. 
> > > > 2) Who is willing to drive a community goal (a.k.a. Champion)? > > > > note: a Champion is someone who volunteer to drive the goal, but > > doesn't commit to write the code necessarily. The Champion will > > communicate with projects PTLs about the goal, and make the liaison if > > needed. > > > > The list of ideas for Community Goals is documented here: > > https://etherpad.openstack.org/p/community-goals > > > > Please be involved and propose some ideas, I'm sure our community has > > some common goals, right ? :-) > > Thanks, and happy holidays. I'll follow-up in January of next year. > > -- > > Emilien Macchi > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Fri Jan 12 06:22:32 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 11 Jan 2018 22:22:32 -0800 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky In-Reply-To: References: Message-ID: On Thu, Jan 11, 2018 at 9:59 PM, ChangBo Guo wrote: > I would like to help the goal - enable mutable configuration, would like to > post a patch for that later. are you interested to be the "Champion" for this goal? > > 2018-01-10 2:37 GMT+08:00 Emilien Macchi : >> >> As promised, let's continue the discussion and move things forward. >> >> This morning Thierry brought the discussion during the TC office hour >> (that I couldn't attend due to timezone): >> >> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2018-01-09T09:18:33 >> >> Some outputs: >> >> - One goal has been proposed so far. >> >> Right now, we only have one goal proposal: Storyboard Migration. There >> are some concerns about the ability to achieve this goal in 6 months. >> At that point, we think it would be great to postpone the goal to S >> cycle, continue the progress (kudos to Kendall) and fine other goals >> for Rocky. >> >> >> - We still have a good backlog of goals, we're just missing champions. >> >> https://etherpad.openstack.org/p/community-goals >> >> Chris brought up "pagination links in collection resources" in api-wg >> guidelines theme. He said in the past this goal was more a "should" >> than a "must". >> Thierry mentioned privsep migration (done in Nova and Zun). (action, >> ping mikal about it). >> Thierry also brought up the version discovery (proposed by Monty). >> Flavio proposed mutable configuration, which might be very useful for >> operators. >> He also mentioned that IPv6 support goal shouldn't be that far from >> done, but we're currently lacking in CI jobs that test IPv6 >> deployments (question for infra/QA, can we maybe document the gap so >> we can run some gate jobs on ipv6 ?) >> (personal note on that one, since TripleO & Puppet OpenStack CI >> already have IPv6 jobs, we can indeed be confident that it shouldn't >> be that hard to complete this goal in 6 months, I guess the work needs >> to happen in the projects layouts). >> Another interesting goal proposed by Thierry, also useful for >> operators, is to move more projects to assert:supports-upgrade tag. >> Thierry said we are probably not that far from this goal, but the >> major lack is in testing. 
>> Finally, another "simple" goal is to remove mox/mox3 (Flavio said most >> of projects don't use it anymore already). >> >> With that said, let's continue the discussion on these goals, see >> which ones can be actionable and find champions. >> >> - Flavio asked how would it be perceived if one cycle wouldn't have at >> least one community goal. >> >> Thierry said we could introduce multi-cycle goals (Storyboard might be >> a good candidate). >> Chris and Thierry thought that it would be a bad sign for our >> community to not have community goals during a cycle, "loss of >> momentum" eventually. >> >> >> Thanks for reading so far, >> >> On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi >> wrote: >> > On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi >> > wrote: >> > [...] >> >> Suggestions are welcome: >> >> - on the mailing-list, in a new thread per goal [all] [tc] Proposing >> >> goal XYZ for Rocky >> >> - on Gerrit in openstack/governance like Kendall did. >> > >> > Just a fresh reminder about Rocky goals. >> > A few questions that we can ask ourselves: >> > >> > 1) What common challenges do we have? >> > >> > e.g. Some projects don't have mutable configuration or some projects >> > aren't tested against IPv6 clouds, etc. >> > >> > 2) Who is willing to drive a community goal (a.k.a. Champion)? >> > >> > note: a Champion is someone who volunteer to drive the goal, but >> > doesn't commit to write the code necessarily. The Champion will >> > communicate with projects PTLs about the goal, and make the liaison if >> > needed. >> > >> > The list of ideas for Community Goals is documented here: >> > https://etherpad.openstack.org/p/community-goals >> > >> > Please be involved and propose some ideas, I'm sure our community has >> > some common goals, right ? :-) >> > Thanks, and happy holidays. I'll follow-up in January of next year. >> > -- >> > Emilien Macchi >> >> >> >> -- >> Emilien Macchi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > ChangBo Guo(gcb) > Community Director @EasyStack > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi From masayuki.igawa at gmail.com Fri Jan 12 07:26:25 2018 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Fri, 12 Jan 2018 16:26:25 +0900 Subject: [openstack-dev] [qa] [tc] Need champion for "cold upgrades capabilities" goal In-Reply-To: References: <20180112031342.jubnjsx5jazbkmvf@fastmail.com> Message-ID: <20180112072624.o6ex65ttxhbchdkl@fastmail.com> Hi, Thank you so much for your warm comments/links/etc,etc! I'll check them and start to write a proposal for the goal for Rocky early next week. -- Masayuki On 01/11, Emilien Macchi wrote: > Sorry I forgot an important action: propose the goal for Rocky :-) > > An example with https://review.openstack.org/#/c/532361/ (from Sean). > > Thanks, > > On Thu, Jan 11, 2018 at 9:29 PM, Emilien Macchi wrote: > > On Thu, Jan 11, 2018 at 7:13 PM, Masayuki Igawa > > wrote: > >> Hi Emilien, > >> > >> I'd love to take this role! > > > > Wow, this is AWESOME. 
> > > >> I have some tempest experience but not QA work which you mentioned > >> (devstack, grenade, zuul layout) that much. However, I think it will > >> be a very good opportunity to step-up. > >> > >> So, for me, the next step might be to know grenade / upgrade jobs / > >> plugin mechanizm deeply. And I suppose we need some documentation > >> about "how to support upgrade" based on the requirements for the other > >> projects (and me :). > >> > >> Any suggestion and/or comments are welcome! > > > > OK so the steps are documented here: > > https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html#requirements > > > > So first of all, it only affects projects that are "services" (e.g. Tacker). > > The main requirement is to have grenade scripts in-tree for the > > projects which miss the tag now. > > Look at Ironic which already has the tag: > > https://github.com/openstack/ironic/tree/master/devstack/upgrade > > > > At the same time, we need to change the zuul layout for the projects > > which don't have the tag yet. > > Still example with Ironic: > > https://github.com/openstack/ironic/blob/master/zuul.d/project.yaml#L6-L7 > > > > Once a project has a grenade job (voting) that runs Grenade with the > > specs described here: > > https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html#requirements > > - the project can apply to the tag and the goal is complete once the > > tag is approved. > > > > I think the first step is to make sure each project which doesn't have > > the tag has to understand what the tag means, in term of requirements. > > > > If they say: > > > > "Yeah, our project can already do that" (because they did some manual > > testing etc): then the work is purely grenade/zuul-layout. > > > > "No, we don't support upgrades at all": then the goal might take one > > or two cycles, depending on the amount of work. > > > > This is the list of service projects that don't have the tag yet: > > aodh > > blazar > > cloudkitty > > congress > > dragonflow > > ec2api > > freezer > > karbor > > kuryr > > masakari > > mistral > > monasca > > murano > > octavia > > panko > > searchlight > > senlin > > tacker > > trove > > vitrage > > watcher > > zaqar > > zun > > > > (I might have miss some, please fix it). > > > > That's it for now, I hope it helped, please let us know if more questions. > > Thanks again for stepping up and you have all our support at any time. > > > > [...] > > -- > > Emilien Macchi > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From saverio.proto at switch.ch Fri Jan 12 08:17:55 2018 From: saverio.proto at switch.ch (Saverio Proto) Date: Fri, 12 Jan 2018 09:17:55 +0100 Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: <1515696336-sup-7054@lrrr.local> References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> Message-ID: <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> > Which service's logs are missing the request_id? 
>

If I look at neutron-server logs with my current setup I get two files:

neutron-server.log
neutron-server.json

The standard log file has in all neutron.wsgi lines information like:

neutron.wsgi [req-4fda8017-50c7-40eb-9e7b-710e7fba0d01 97d349b9499b4bd29c5e167c65ca1fb3 d447c836b6934dfab41a03f1ff96d879 - - -]

where req-UUID is the request ID, and the other two values are the user
UUID and the keystone project UUID.

When I look at the same line in the JSON output this information is
missing.

I am starting neutron-server with the command line option
--log-config-append=/etc/neutron/logging.neutron-server.conf
where the conf file looks like

[loggers]
keys = root, neutron

[handlers]
keys = logfile, jsonfile, null

[formatters]
keys = context, json, default

[logger_root]
level = WARNING
handlers = null

[logger_neutron]
level = INFO
handlers = logfile, jsonfile
qualname = neutron

[handler_logfile]
class = handlers.WatchedFileHandler
args = ('/var/log/neutron/neutron-server.log',)
formatter = context

[handler_jsonfile]
level = INFO
class = handlers.WatchedFileHandler
args = ('/var/log/neutron/neutron-server.json',)
formatter = json

[handler_null]
class = logging.NullHandler
formatter = default
args = ()

[formatter_context]
class = oslo_log.formatters.ContextFormatter

[formatter_json]
class = oslo_log.formatters.JSONFormatter

[formatter_default]
format = %(message)s

I had a look at nova-api and I have the same problem. Anyway, in my Kibana
I never see a req-UUID whatsoever, so this looks like a problem with all
the openstack services.

Is it a problem with my logging configuration ?

thank you

Saverio

--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.proto at switch.ch, http://www.switch.ch
http://www.switch.ch/stories

From zhang.lei.fly at gmail.com  Fri Jan 12 08:18:40 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Fri, 12 Jan 2018 16:18:40 +0800
Subject: [openstack-dev] [kolla] Re: About maridb 10.1 on kolla
In-Reply-To: <76f5e411-d614-8c83-da98-00ce833e12a4@linaro.org>
References: <76f5e411-d614-8c83-da98-00ce833e12a4@linaro.org>
Message-ID:

After using RDO mariadb, I found galera falls back from galera-25.3.20 to
25.3.16. I am not sure what's the exact difference between these two
versions. But what I found is:

> 'SAFE TO BOOTSTRAP' PROTECTION [1]
> Starting with provider version 3.19, Galera has an additional protection
> against attempting to bootstrap the cluster using a node that may not
> have been the last node remaining in the cluster prior to cluster shutdown.

So the question is: is it safe to fall back the galera version?

[1] http://galeracluster.com/documentation-webpages/restartingcluster.html#safe-to-bootstrap-protection

On Fri, Jan 5, 2018 at 1:15 AM, Marcin Juszkiewicz <
marcin.juszkiewicz at linaro.org> wrote:

> W dniu 29.12.2017 o 07:58, Jeffrey Zhang pisze:
> > recently, a series patches about mariadb is pushed. Current issue is
> >
> > - using different mariadb binary from different repo ( from percona,
> > MariaDB official, linux distro )
> > - using different version number of mariadb ( 10.0 and 10.1 )
> >
> > To make life easier, some patches are pushed to unify all of these.
Here > > is my thought about this > > > > - try to bump to 10.1, which is released long time ago > > - use mariadb binary provided by linux disto as much as possible > > > > So here is plan > > > > - trying to upgrade to mariadb 10.1 [0][1] > > - use mariadb 10.1 provided by RDO on redhat family distro [2] > > - use mariadb 10.0 provided by UCA on ubuntu > > - it is told that, it not work as excepted [3] > > - if this does not work. we can upgrade to mariadb 10.1 provides by > > mariadb official on ubuntu. > > - use mariadb 10.1 provided by os repo on Debian. > > How we are with testing/merging? > > For Debian to be deployable we need 529199 in images as rest of changes > are kolla-ansible and can be cherry-picked before deployment. > > > > [0] https://review.openstack.org/#/c/529505/ - fix kolla-ansible for > > mariadb 10.1 > > merged > > > [1] https://review.openstack.org/#/c/529199/ - Fix MariaDB bootstrap > for 10.1 version > > [2] https://review.openstack.org/#/c/468632/ - Consume RDO packaged > mariadb version > > > [3] https://review.openstack.org/#/c/426953/ - Revert "Removed percona > > from ubuntu repos" > > merged > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From yamamoto at midokura.com Fri Jan 12 08:26:24 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Fri, 12 Jan 2018 17:26:24 +0900 Subject: [openstack-dev] [neutron] Stepping down from core In-Reply-To: References: Message-ID: On Sat, Dec 16, 2017 at 4:01 AM, Armando M. wrote: > Hi neutrinos, > > To some of you this email may not come as a surprise. it was a surprise to me. > > During the past few months my upstream community engagements have been more > and more sporadic. While I tried hard to stay committed and fulfill my core > responsibilities I feel like I failed to retain the level of quality and > consistency that I would have liked ever since I stepped down from being the > Neutron PTL back at the end of Ocata. > > I stated many times when talking to other core developers that being core is > a duty rather than a privilege, and I personally feel like it's way overdue > for me to recognize on the mailing list that it's the time that I state > officially my intention to step down due to other commitments. it seems you have a very high standard. i suspect many of us should resign if we all follow your standard. :-) > > This does not mean that I will disappear tomorrow. 
I'll continue to be on
> neutron IRC channels, support the neutron team, being the release liaison
> for Queens, participate at meetings, and be open to providing feedback to
> anyone who thinks my opinion is still valuable, especially when dealing with
> the neutron quirks for which I might be (git) blamed :)
>
> Cheers,
> Armando
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From chao.xu at timanetworks.com  Fri Jan 12 08:47:47 2018
From: chao.xu at timanetworks.com (chao.xu)
Date: Fri, 12 Jan 2018 16:47:47 +0800
Subject: [openstack-dev] The OpenStack Magnum service whether to supports Kubernetes Cluster/Bay auto scaling
Message-ID: <2018011216474681470520@timanetworks.com>

Hi, all

In the Pike version, does the OpenStack Magnum container service support
auto scaling of Kubernetes Cluster/Bay nodes? At present, the backend
engine Swarm supports Cluster/Bay node auto scaling.

Best Regards

chao.xu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From thierry at openstack.org  Fri Jan 12 09:07:46 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 12 Jan 2018 10:07:46 +0100
Subject: [openstack-dev] [qa] [tc] Need champion for "cold upgrades capabilities" goal
In-Reply-To:
References: <20180112031342.jubnjsx5jazbkmvf@fastmail.com>
Message-ID: <22178021-632a-c4cc-6f5e-251da9d79727@openstack.org>

Emilien Macchi wrote:
> [...]
> This is the list of service projects that don't have the tag yet:
> aodh
> blazar
> cloudkitty
> congress
> dragonflow
> ec2api
> freezer
> karbor
> kuryr
> masakari
> mistral
> monasca
> murano
> octavia
> panko
> searchlight
> senlin
> tacker
> trove
> vitrage
> watcher
> zaqar
> zun
>
> (I might have missed some, please fix it).

I think that misses magnum, solum (+ rally, cyborg, fuxi and maybe
tricircle). I'm not 100% sure dragonflow would count as a "service" (it
runs within neutron AFAIK).

One way to keep the goal simpler is to focus for this cycle on the main
"openstack" bucket from the OpenStack map[1]. This is where there would
be the most value to operators and users (user-facing services) and it
would represent a great first step. That would narrow the list down to:

aodh
blazar
ec2api
freezer
karbor
magnum
masakari
mistral
murano
octavia
searchlight
senlin
solum
trove
zaqar
zun
+dragonflow?

[1] https://www.openstack.org/openstack-map

--
Thierry Carrez (ttx)

From glongwave at gmail.com  Fri Jan 12 09:27:51 2018
From: glongwave at gmail.com (ChangBo Guo)
Date: Fri, 12 Jan 2018 17:27:51 +0800
Subject: [openstack-dev] [all] [tc] Community Goals for Rocky
In-Reply-To:
References:
Message-ID:
> >> > >> This morning Thierry brought the discussion during the TC office hour > >> (that I couldn't attend due to timezone): > >> > >> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ > latest.log.html#t2018-01-09T09:18:33 > >> > >> Some outputs: > >> > >> - One goal has been proposed so far. > >> > >> Right now, we only have one goal proposal: Storyboard Migration. There > >> are some concerns about the ability to achieve this goal in 6 months. > >> At that point, we think it would be great to postpone the goal to S > >> cycle, continue the progress (kudos to Kendall) and fine other goals > >> for Rocky. > >> > >> > >> - We still have a good backlog of goals, we're just missing champions. > >> > >> https://etherpad.openstack.org/p/community-goals > >> > >> Chris brought up "pagination links in collection resources" in api-wg > >> guidelines theme. He said in the past this goal was more a "should" > >> than a "must". > >> Thierry mentioned privsep migration (done in Nova and Zun). (action, > >> ping mikal about it). > >> Thierry also brought up the version discovery (proposed by Monty). > >> Flavio proposed mutable configuration, which might be very useful for > >> operators. > >> He also mentioned that IPv6 support goal shouldn't be that far from > >> done, but we're currently lacking in CI jobs that test IPv6 > >> deployments (question for infra/QA, can we maybe document the gap so > >> we can run some gate jobs on ipv6 ?) > >> (personal note on that one, since TripleO & Puppet OpenStack CI > >> already have IPv6 jobs, we can indeed be confident that it shouldn't > >> be that hard to complete this goal in 6 months, I guess the work needs > >> to happen in the projects layouts). > >> Another interesting goal proposed by Thierry, also useful for > >> operators, is to move more projects to assert:supports-upgrade tag. > >> Thierry said we are probably not that far from this goal, but the > >> major lack is in testing. > >> Finally, another "simple" goal is to remove mox/mox3 (Flavio said most > >> of projects don't use it anymore already). > >> > >> With that said, let's continue the discussion on these goals, see > >> which ones can be actionable and find champions. > >> > >> - Flavio asked how would it be perceived if one cycle wouldn't have at > >> least one community goal. > >> > >> Thierry said we could introduce multi-cycle goals (Storyboard might be > >> a good candidate). > >> Chris and Thierry thought that it would be a bad sign for our > >> community to not have community goals during a cycle, "loss of > >> momentum" eventually. > >> > >> > >> Thanks for reading so far, > >> > >> On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi > >> wrote: > >> > On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi > >> > wrote: > >> > [...] > >> >> Suggestions are welcome: > >> >> - on the mailing-list, in a new thread per goal [all] [tc] Proposing > >> >> goal XYZ for Rocky > >> >> - on Gerrit in openstack/governance like Kendall did. > >> > > >> > Just a fresh reminder about Rocky goals. > >> > A few questions that we can ask ourselves: > >> > > >> > 1) What common challenges do we have? > >> > > >> > e.g. Some projects don't have mutable configuration or some projects > >> > aren't tested against IPv6 clouds, etc. > >> > > >> > 2) Who is willing to drive a community goal (a.k.a. Champion)? > >> > > >> > note: a Champion is someone who volunteer to drive the goal, but > >> > doesn't commit to write the code necessarily. 
The Champion will > >> > communicate with projects PTLs about the goal, and make the liaison if > >> > needed. > >> > > >> > The list of ideas for Community Goals is documented here: > >> > https://etherpad.openstack.org/p/community-goals > >> > > >> > Please be involved and propose some ideas, I'm sure our community has > >> > some common goals, right ? :-) > >> > Thanks, and happy holidays. I'll follow-up in January of next year. > >> > -- > >> > Emilien Macchi > >> > >> > >> > >> -- > >> Emilien Macchi > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > -- > > ChangBo Guo(gcb) > > Community Director @EasyStack > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Fri Jan 12 09:49:55 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Fri, 12 Jan 2018 10:49:55 +0100 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: Message-ID: <13089710.Tc0ZGb15F4@whitebase.usersys.redhat.com> On Thursday, 11 January 2018 23:52:00 CET Matt Riedemann wrote: > On 1/11/2018 10:36 AM, Colleen Murphy wrote: > > 1) All trademark-related tests should go in the tempest repo, in > > accordance > > > > with the original resolution. This would mean that even projects that > > have > > never had tests in tempest would now have to add at least some of > > their > > black-box tests to tempest. > > > > The value of this option is that centralizes tests used for the Interop > > program in a location where interop-minded folks from the QA team can > > control them. The downside is that projects that so far have avoided > > having a dependency on tempest will now lose some control over the > > black-box tests that they use for functional and integration that would > > now also be used for trademark certification. > > There's also concern for the review bandwidth of the QA team - we can't > > expect the QA team to be continually responsible for an ever-growing list > > of projects and their trademark tests. > > How many tests are we talking about for designate and heat? Half a > dozen? A dozen? More? > > If it's just a couple of tests per project it doesn't seem terrible to > have them live in Tempest so you get the "interop eye" on reviews, as > noted in your email. If it's a considerable amount, then option 2 seems > the best for the majority of parties. I would argue that it does not scale; what if some test is taken out from the interoperability, and others are added? It would mean moving tests from one repository to another, with change of paths. 
I think that solution 2, where the repository a test lives in and the
function of that test are not linked, is better.

Ciao
--
Luigi

From thierry at openstack.org  Fri Jan 12 09:53:14 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 12 Jan 2018 10:53:14 +0100
Subject: [openstack-dev] [tc] Technical Committee Status update, January 12th
Message-ID: 

Hi!

This is the weekly summary of Technical Committee initiatives. You can
find the full list of all open topics (updated twice a week) at:

https://wiki.openstack.org/wiki/Technical_Committee_Tracker

If you are working on something (or plan to work on something)
governance-related that is not reflected on the tracker yet, please
feel free to add to it!

== Recently-approved changes ==

* Upgrade assertion tags only apply to services [1][2]
* Retired repo: ceilometerclient
* New repo: charm-interface-designate
* Goal updates: magnum, manila

[1] https://review.openstack.org/#/c/528745/
[2] https://review.openstack.org/#/c/531395/

The only significant change this week, beyond repository housekeeping and
goal completion updates, is the merging of the new wording that limits
upgrade assertion tags to OpenStack "services". This is now reflected in
the corresponding tags:

* https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html
* https://governance.openstack.org/tc/reference/tags/assert_supports-accessible-upgrade.html
* https://governance.openstack.org/tc/reference/tags/assert_supports-rolling-upgrade.html
* https://governance.openstack.org/tc/reference/tags/assert_supports-zero-downtime-upgrade.html

== Rocky goals ==

Goal proposal season is in full swing in preparation for the start of the
Rocky cycle. Two new goals have already been proposed:

* Remove mox: https://review.openstack.org/532361
* Ensure pagination links: https://review.openstack.org/532627

Emilien started a new thread detailing other candidates:

http://lists.openstack.org/pipermail/openstack-dev/2018-January/126090.html

There are good candidates there, but we need someone willing to champion
them before we can consider them for approval.

== Under discussion ==

The discussion started by Graham Hayes to clarify how the testing of
interoperability programs should be organized in the age of add-on
trademark programs is still going on. The TC is interested in more input
from the Interop WG and the QA team, to select an option that would work
for both of those groups. Colleen posted a write-up that is a great
introduction to the topic if you're interested in chiming in:

https://review.openstack.org/521602
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html

Matt Treinish proposed an update to the Python PTI to make the definition
of tests specific and explicit. Wider community input is still needed on
that topic. Please review at:

https://review.openstack.org/519751

== TC member actions for the coming week(s) ==

We are looking for a volunteer to drive the proposal to update the Python
PTI for tests to closure. This likely involves posting a new thread on the
ML to gather wider community input on the topic.

We'd also like to get to a set of proposed goals to choose from, as the
Rocky cycle will start when master branches a couple of weeks from now.
Finally, we should be thinking about topics that would make good
post-lunch presentations at the PTG in Dublin:

http://lists.openstack.org/pipermail/openstack-dev/2018-January/126102.html

== Office hours ==

To be more inclusive of all timezones and more mindful of people for whom
English is not the primary language, the Technical Committee dropped its
dependency on weekly meetings. So that you can still get hold of TC
members on IRC, we instituted a series of office hours on #openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

For the coming week, I expect continued discussions on the stuck changes,
as well as Rocky goals.

Cheers,

--
Thierry Carrez (ttx)

From cdent+os at anticdent.org  Fri Jan 12 12:42:48 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Fri, 12 Jan 2018 12:42:48 +0000 (GMT)
Subject: [openstack-dev] [nova] [placement] resource providers update 18-02
Message-ID: 

Resource provider and placement 18-02. Getting a bit more warmed up here,
so there should be more stuff from more places.

# Most Important

Completing alternate hosts and exposing the basic nested resource
providers functionality is what matters. We've reached that stage in the
cycle where at least some interesting ideas, inspired by current work,
need to be pushed off to Rocky.

Speaking of Rocky, the etherpad for PTG topics is underway at

https://etherpad.openstack.org/p/nova-ptg-rocky

In typical fashion there's plenty of stuff on there related to placement
already, but there's likely plenty more to talk about. If you have
something, even if it is tentative, add it. The list will get more
structured closer to the PTG.

As we approach the end of the cycle, finding and fixing bugs ought to
become the focus.

# What's Changed

Eric gave a nice summary of this week's scheduler meeting in yesterday's
Nova team meeting. It's worth reading:

http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-01-11-14.01.log.html#l-74

# Help Wanted

There are a fair few unstarted bugs related to placement that could do
with some attention. Here's a handy URL:

https://goo.gl/TgiPXb

# Main Themes

## Nested Providers

The nested provider work is proceeding along two main courses: getting
the ProviderTree on the nova side gathering and syncing all the necessary
information, and enabling nested provider searching when requesting
/allocation_candidates. Both of these are within the same topic:

https://review.openstack.org/#/q/topic:bp/nested-resource-providers

We've identified the need to handle conflict (409) responses in a more
generic fashion in the ProviderTree. The new plan is, when a conflict is
caused by mismatched generations, to reset and reload the entire tree
rather than attempting to resync at a granular level.

## Alternate Hosts

The last piece of the puzzle, changing the RPC interface, is pending:

https://review.openstack.org/#/q/topic:bp/return-alternate-hosts

Some issues with resizes and interaction with the CachingScheduler have
been addressed. Related to this, exploration has started on limiting the
number of responses that the scheduler will request when requesting hosts
(some of which will become alternates):

https://review.openstack.org/#/c/531517/

## Misc Traits, Shared, Etc Cleanups

There's a stack of code that fixes up a lot of things related to traits,
sharing providers, test additions, and fixes to those tests.
At the moment the changes are in a bug topic:

https://review.openstack.org/#/q/topic:bug/1702420

# Other

* Extract instance allocation removal code
  https://review.openstack.org/#/c/513041/
* Sending global request ids from nova to placement
  https://review.openstack.org/#/q/topic:bug/1734625
* VGPU support
  https://review.openstack.org/#/q/topic:bp/add-support-for-vgpu
* Use traits with ironic
  https://review.openstack.org/#/q/topic:bp/ironic-driver-traits
* Move api schemas to own dir
  https://review.openstack.org/#/c/528629/
* request limit /allocation_candidate WIP (see the rough example at the
  end of this update)
  https://review.openstack.org/#/c/531517/
* Update resources once in update available resources
  https://review.openstack.org/#/c/520024/
  (This ought, when it works, to help address some performance concerns
  with nova making too many requests to placement)
* Fix resource provider delete
  https://review.openstack.org/#/c/529519/
* spec: treat devices as generic resources
  https://review.openstack.org/#/c/497978/
  This is a WIP and will need to move to Rocky
* log options at DEBUG when starting wsgi app
  https://review.openstack.org/#/c/519462/
* Support aggregate affinity filters/weighers
  https://review.openstack.org/#/q/topic:bp/aggregate-affinity
  A Rocky-targeted improvement to affinity handling
* Move placement body samples in docs to own dir
  https://review.openstack.org/#/c/529998/
* Improved functional test coverage for placement
  https://review.openstack.org/#/q/topic:bp/placement-test-enhancement
* Functional tests for traits api
  https://review.openstack.org/#/c/524094/
* Functional test improvements for resource class
  https://review.openstack.org/#/c/524506/
* annotate loadapp() (for placement wsgi app) as public
  https://review.openstack.org/#/c/526691/
* Remove microversion fallback code from report client
  https://review.openstack.org/#/c/528794/
* Document lack of side-effects in AllocationList.create_all()
  https://review.openstack.org/#/c/530997/
* Fix documentation nits in set_and_clear_allocations
  https://review.openstack.org/#/c/531001/
* WIP: SchedulerReportClient.set_aggregates_for_provider
  https://review.openstack.org/#/c/532995/
  This is likely for Rocky as it depends on changing the API for
  aggregates handling on the placement side to accept and provide a
  generation
* Naming update cn to rp (for clarity)
  https://review.openstack.org/#/c/529786/
* Add functional test for two-cell scheduler behaviors
  https://review.openstack.org/#/c/452006/
  (This is old and maybe out of date, but something we might like to
  resurrect)
* Make API history doc consistent
  https://review.openstack.org/#/c/477478/3
* WIP: General policy sample file for placement
  https://review.openstack.org/#/c/524425/

# End

Hi, you made it this far? Awesome. Go review some of that stuff in the
Other list!
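P.S. For anyone wondering what the "request limit" work above would look
like on the wire, the rough shape (just a sketch: the change is still WIP
and the exact microversion isn't settled) is an extra query parameter on
the existing call:

    GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:2&limit=5

i.e. "give me at most five candidates" instead of every matching provider
in the deployment.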
-- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Fri Jan 12 15:26:12 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 12 Jan 2018 10:26:12 -0500 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: <13089710.Tc0ZGb15F4@whitebase.usersys.redhat.com> References: <13089710.Tc0ZGb15F4@whitebase.usersys.redhat.com> Message-ID: <1515770731-sup-5621@lrrr.local> Excerpts from Luigi Toscano's message of 2018-01-12 10:49:55 +0100: > On Thursday, 11 January 2018 23:52:00 CET Matt Riedemann wrote: > > On 1/11/2018 10:36 AM, Colleen Murphy wrote: > > > 1) All trademark-related tests should go in the tempest repo, in > > > accordance > > > > > > with the original resolution. This would mean that even projects that > > > have > > > never had tests in tempest would now have to add at least some of > > > their > > > black-box tests to tempest. > > > > > > The value of this option is that centralizes tests used for the Interop > > > program in a location where interop-minded folks from the QA team can > > > control them. The downside is that projects that so far have avoided > > > having a dependency on tempest will now lose some control over the > > > black-box tests that they use for functional and integration that would > > > now also be used for trademark certification. > > > There's also concern for the review bandwidth of the QA team - we can't > > > expect the QA team to be continually responsible for an ever-growing list > > > of projects and their trademark tests. > > > > How many tests are we talking about for designate and heat? Half a > > dozen? A dozen? More? > > > > If it's just a couple of tests per project it doesn't seem terrible to > > have them live in Tempest so you get the "interop eye" on reviews, as > > noted in your email. If it's a considerable amount, then option 2 seems > > the best for the majority of parties. > > I would argue that it does not scale; what if some test is taken out from the > interoperability, and others are added? It would mean moving tests from one > repository to another, with change of paths. I think that the solution 2, > where the repository where a test belong and the functionality of a test are > not linked, is better. > > Ciao How often do the interop test suites change in that way? Doug From doug at doughellmann.com Fri Jan 12 15:31:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 12 Jan 2018 10:31:29 -0500 Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> Message-ID: <1515771070-sup-7997@lrrr.local> Excerpts from Saverio Proto's message of 2018-01-12 09:17:55 +0100: > > Which service's logs are missing the request_id? > > > If I look at neutron-server logs with my current setup I get two files: > > neutron-server.log > neutron-server.json > > the standard log file has in all neutron.wsgi lines information like: > > neutron.wsgi [req-4fda8017-50c7-40eb-9e7b-710e7fba0d01 > 97d349b9499b4bd29c5e167c65ca1fb3 d447c836b6934dfab41a03f1ff96d879 - - -] > > where req-UUID is the request ID, and the other two values are the user > UUID and the keystone project UUID. > > when I look at the same line in the JSON output this information is missing. 
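One way to check whether the formatter itself handles the context,
independent of any service, is a tiny standalone test. Something like
this (an untested sketch, assuming a recent oslo.log and oslo.context;
the logger name and message are just placeholders) should show the
request id in the JSON output if the installed library supports it:

    import logging

    from oslo_context import context
    from oslo_log import formatters

    # Plain stdlib logger, but with oslo.log's JSON formatter attached.
    handler = logging.StreamHandler()
    handler.setFormatter(formatters.JSONFormatter())
    log = logging.getLogger('demo')
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    # RequestContext generates a req-... request_id automatically.
    ctx = context.RequestContext()
    log.info('is the request id here?', extra={'context': ctx})

If the req-... value shows up there but not in the service logs, the
problem is in how the services pass the context; if it is missing there
too, the installed oslo.log probably predates the context handling in
JSONFormatter.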
> I am starting neutron-server with the command line option
> --log-config-append=/etc/neutron/logging.neutron-server.conf
>
> where the conf file looks like
>
> [loggers]
> keys = root, neutron
>
> [handlers]
> keys = logfile, jsonfile, null
>
> [formatters]
> keys = context, json, default
>
> [logger_root]
> level = WARNING
> handlers = null
>
> [logger_neutron]
> level = INFO
> handlers = logfile, jsonfile
> qualname = neutron
>
> [handler_logfile]
> class = handlers.WatchedFileHandler
> args = ('/var/log/neutron/neutron-server.log',)
> formatter = context
>
> [handler_jsonfile]
> level = INFO
> class = handlers.WatchedFileHandler
> args = ('/var/log/neutron/neutron-server.json',)
> formatter = json
>
> [handler_null]
> class = logging.NullHandler
> formatter = default
> args = ()
>
> [formatter_context]
> class = oslo_log.formatters.ContextFormatter
>
> [formatter_json]
> class = oslo_log.formatters.JSONFormatter
>
> [formatter_default]
> format = %(message)s
>
> I had a look at nova-api and I have the same problem. Anyway, in my
> Kibana I never see a req-UUID whatsoever, so this looks like a problem
> with all the openstack services.
>
> Is it a problem with my logging configuration?
>
> thank you
>
> Saverio

I don't think this is a configuration problem.

Which version of the oslo.log library do you have installed?

Doug

From lajos.katona at ericsson.com  Fri Jan 12 15:32:26 2018
From: lajos.katona at ericsson.com (Lajos Katona)
Date: Fri, 12 Jan 2018 16:32:26 +0100
Subject: [openstack-dev] [horizon][trunk][ngdetails] Trunk admin panel and changes related to ngdetails patches
Message-ID: 

Hi Horizon Team

I read the meeting log
(http://eavesdrop.openstack.org/meetings/horizon/2018/horizon.2018-01-10-20.00.log.html)
and if I understand correctly, the proposal is to merge part of the
ngdetails patches
(https://review.openstack.org/#/q/topic:bug/1681627+(status:open) ) in
Queens, and address the remaining issues in Rocky, am I right?

Could you help me find a way to proceed with the remaining trunk-related
patches, which depend on the above patches
(https://review.openstack.org/#/q/project:openstack/horizon+status:open+AND+owner:%22Lajos+Katona+%253Clajos.katona%2540ericsson.com%253E%22)?

What do you think: shall I remove the dependency on the ngdetails fix and
add TODOs or similar to the code, or wait for Shu's work and help him with
it?

Thanks in advance for the help.

Regards
Lajos

From emilien at redhat.com  Fri Jan 12 15:44:16 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Fri, 12 Jan 2018 07:44:16 -0800
Subject: [openstack-dev] [all] [tc] Community Goals for Rocky
In-Reply-To: 
References: 
Message-ID: 

On Fri, Jan 12, 2018 at 1:27 AM, ChangBo Guo wrote:
> Yeah, help means to be the "Champion" for me :-)

That's very cool. If you can, please write a proposal in governance,
similar to https://review.openstack.org/#/c/532361/ (Sean's example).
The community will vote on which goals we take for Rocky.

Please let us know if you need any help,
Thanks!

> 2018-01-12 14:22 GMT+08:00 Emilien Macchi :
>>
>> On Thu, Jan 11, 2018 at 9:59 PM, ChangBo Guo wrote:
>> > I would like to help with the goal - enable mutable configuration -
>> > and would like to post a patch for that later.
>>
>> are you interested in being the "Champion" for this goal?
>>
>> > 2018-01-10 2:37 GMT+08:00 Emilien Macchi :
>> >>
>> >> As promised, let's continue the discussion and move things forward.
>> >> >> >> This morning Thierry brought the discussion during the TC office hour >> >> (that I couldn't attend due to timezone): >> >> >> >> >> >> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2018-01-09T09:18:33 >> >> >> >> Some outputs: >> >> >> >> - One goal has been proposed so far. >> >> >> >> Right now, we only have one goal proposal: Storyboard Migration. There >> >> are some concerns about the ability to achieve this goal in 6 months. >> >> At that point, we think it would be great to postpone the goal to S >> >> cycle, continue the progress (kudos to Kendall) and fine other goals >> >> for Rocky. >> >> >> >> >> >> - We still have a good backlog of goals, we're just missing champions. >> >> >> >> https://etherpad.openstack.org/p/community-goals >> >> >> >> Chris brought up "pagination links in collection resources" in api-wg >> >> guidelines theme. He said in the past this goal was more a "should" >> >> than a "must". >> >> Thierry mentioned privsep migration (done in Nova and Zun). (action, >> >> ping mikal about it). >> >> Thierry also brought up the version discovery (proposed by Monty). >> >> Flavio proposed mutable configuration, which might be very useful for >> >> operators. >> >> He also mentioned that IPv6 support goal shouldn't be that far from >> >> done, but we're currently lacking in CI jobs that test IPv6 >> >> deployments (question for infra/QA, can we maybe document the gap so >> >> we can run some gate jobs on ipv6 ?) >> >> (personal note on that one, since TripleO & Puppet OpenStack CI >> >> already have IPv6 jobs, we can indeed be confident that it shouldn't >> >> be that hard to complete this goal in 6 months, I guess the work needs >> >> to happen in the projects layouts). >> >> Another interesting goal proposed by Thierry, also useful for >> >> operators, is to move more projects to assert:supports-upgrade tag. >> >> Thierry said we are probably not that far from this goal, but the >> >> major lack is in testing. >> >> Finally, another "simple" goal is to remove mox/mox3 (Flavio said most >> >> of projects don't use it anymore already). >> >> >> >> With that said, let's continue the discussion on these goals, see >> >> which ones can be actionable and find champions. >> >> >> >> - Flavio asked how would it be perceived if one cycle wouldn't have at >> >> least one community goal. >> >> >> >> Thierry said we could introduce multi-cycle goals (Storyboard might be >> >> a good candidate). >> >> Chris and Thierry thought that it would be a bad sign for our >> >> community to not have community goals during a cycle, "loss of >> >> momentum" eventually. >> >> >> >> >> >> Thanks for reading so far, >> >> >> >> On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi >> >> wrote: >> >> > On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi >> >> > wrote: >> >> > [...] >> >> >> Suggestions are welcome: >> >> >> - on the mailing-list, in a new thread per goal [all] [tc] Proposing >> >> >> goal XYZ for Rocky >> >> >> - on Gerrit in openstack/governance like Kendall did. >> >> > >> >> > Just a fresh reminder about Rocky goals. >> >> > A few questions that we can ask ourselves: >> >> > >> >> > 1) What common challenges do we have? >> >> > >> >> > e.g. Some projects don't have mutable configuration or some projects >> >> > aren't tested against IPv6 clouds, etc. >> >> > >> >> > 2) Who is willing to drive a community goal (a.k.a. Champion)? 
>> >> > >> >> > note: a Champion is someone who volunteer to drive the goal, but >> >> > doesn't commit to write the code necessarily. The Champion will >> >> > communicate with projects PTLs about the goal, and make the liaison >> >> > if >> >> > needed. >> >> > >> >> > The list of ideas for Community Goals is documented here: >> >> > https://etherpad.openstack.org/p/community-goals >> >> > >> >> > Please be involved and propose some ideas, I'm sure our community has >> >> > some common goals, right ? :-) >> >> > Thanks, and happy holidays. I'll follow-up in January of next year. >> >> > -- >> >> > Emilien Macchi >> >> >> >> >> >> >> >> -- >> >> Emilien Macchi >> >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > >> > -- >> > ChangBo Guo(gcb) >> > Community Director @EasyStack >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> >> >> -- >> Emilien Macchi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > ChangBo Guo(gcb) > Community Director @EasyStack > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi From emilien at redhat.com Fri Jan 12 15:50:10 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 12 Jan 2018 07:50:10 -0800 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky In-Reply-To: References: Message-ID: Here's a quick update before the weekend: 2 goals were proposed to governance: Remove mox https://review.openstack.org/#/c/532361/ Champion: Sean McGinnis (unless someone else steps up) Ensure pagination links https://review.openstack.org/#/c/532627/ Champion: Monty Taylor 2 more goals are about to be proposed: Enable mutable configuration Champion: ChangBo Guo Cold upgrades capabilities Champion: Masayuki Igawa Thanks everyone for your participation, We hope to make a vote within the next 2 weeks so we can prepare the PTG accordingly. On Tue, Jan 9, 2018 at 10:37 AM, Emilien Macchi wrote: > As promised, let's continue the discussion and move things forward. > > This morning Thierry brought the discussion during the TC office hour > (that I couldn't attend due to timezone): > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2018-01-09T09:18:33 > > Some outputs: > > - One goal has been proposed so far. > > Right now, we only have one goal proposal: Storyboard Migration. There > are some concerns about the ability to achieve this goal in 6 months. 
> At that point, we think it would be great to postpone the goal to S > cycle, continue the progress (kudos to Kendall) and fine other goals > for Rocky. > > > - We still have a good backlog of goals, we're just missing champions. > > https://etherpad.openstack.org/p/community-goals > > Chris brought up "pagination links in collection resources" in api-wg > guidelines theme. He said in the past this goal was more a "should" > than a "must". > Thierry mentioned privsep migration (done in Nova and Zun). (action, > ping mikal about it). > Thierry also brought up the version discovery (proposed by Monty). > Flavio proposed mutable configuration, which might be very useful for operators. > He also mentioned that IPv6 support goal shouldn't be that far from > done, but we're currently lacking in CI jobs that test IPv6 > deployments (question for infra/QA, can we maybe document the gap so > we can run some gate jobs on ipv6 ?) > (personal note on that one, since TripleO & Puppet OpenStack CI > already have IPv6 jobs, we can indeed be confident that it shouldn't > be that hard to complete this goal in 6 months, I guess the work needs > to happen in the projects layouts). > Another interesting goal proposed by Thierry, also useful for > operators, is to move more projects to assert:supports-upgrade tag. > Thierry said we are probably not that far from this goal, but the > major lack is in testing. > Finally, another "simple" goal is to remove mox/mox3 (Flavio said most > of projects don't use it anymore already). > > With that said, let's continue the discussion on these goals, see > which ones can be actionable and find champions. > > - Flavio asked how would it be perceived if one cycle wouldn't have at > least one community goal. > > Thierry said we could introduce multi-cycle goals (Storyboard might be > a good candidate). > Chris and Thierry thought that it would be a bad sign for our > community to not have community goals during a cycle, "loss of > momentum" eventually. > > > Thanks for reading so far, > > On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi wrote: >> On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi wrote: >> [...] >>> Suggestions are welcome: >>> - on the mailing-list, in a new thread per goal [all] [tc] Proposing >>> goal XYZ for Rocky >>> - on Gerrit in openstack/governance like Kendall did. >> >> Just a fresh reminder about Rocky goals. >> A few questions that we can ask ourselves: >> >> 1) What common challenges do we have? >> >> e.g. Some projects don't have mutable configuration or some projects >> aren't tested against IPv6 clouds, etc. >> >> 2) Who is willing to drive a community goal (a.k.a. Champion)? >> >> note: a Champion is someone who volunteer to drive the goal, but >> doesn't commit to write the code necessarily. The Champion will >> communicate with projects PTLs about the goal, and make the liaison if >> needed. >> >> The list of ideas for Community Goals is documented here: >> https://etherpad.openstack.org/p/community-goals >> >> Please be involved and propose some ideas, I'm sure our community has >> some common goals, right ? :-) >> Thanks, and happy holidays. I'll follow-up in January of next year. 
>> --
>> Emilien Macchi
>
> --
> ChangBo Guo(gcb)
> Community Director @EasyStack

--
Emilien Macchi

From aj at suse.com  Fri Jan 12 16:13:28 2018
From: aj at suse.com (Andreas Jaeger)
Date: Fri, 12 Jan 2018 17:13:28 +0100
Subject: [openstack-dev] [qa][neutron][octavia][horizon][networking-l2gw] Renaming tox_venvlist in Zuul v3 run-tempest
Message-ID: <81bf1e9d-2180-4ff6-0b9a-c1873ab285bb@suse.com>

The Zuul v3 tox jobs use "tox_envlist" to name the tox environment to
use; the tempest run-tempest role used "tox_venvlist", with an extra "v"
in it. This led to some confusion and a wrong fix, so let's be
consistent across these jobs.

I've just pushed changes under the topic tox_envlist to sync these.

To have working jobs, I needed the usual rename dance: add the new
variable, change the job, remove the old one.

Neutron, octavia, horizon, networking-l2gw teams, please review and merge
the first one quickly:

https://review.openstack.org/#/q/topic:tox_envlist

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From Tim.Bell at cern.ch  Fri Jan 12 17:09:16 2018
From: Tim.Bell at cern.ch (Tim Bell)
Date: Fri, 12 Jan 2018 17:09:16 +0000
Subject: [openstack-dev] [all] [tc] Community Goals for Rocky
In-Reply-To: 
References: 
Message-ID: <397BB99F-D7B2-47B3-9724-E8B628EFD5C2@cern.ch>

I was reading a tweet from Jean-Daniel and wondering if there would be an
appropriate community goal regarding support of some of the later API
versions, or whether this would be more of a per-project goal.

https://twitter.com/pilgrimstack/status/951860289141641217

Interesting numbers about customers' tools used to talk to our @OpenStack
APIs and the Keystone v3 compatibility:
- 10% are not KeystoneV3 compatible
- 16% are compatible
- for the rest, the tools documentation has no info

I think Keystone V3 and Glance V2 are the ones with APIs which have moved
on significantly from the initial implementations, and not all projects
have been keeping up.

Tim

-----Original Message-----
From: Emilien Macchi 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
Date: Friday, 12 January 2018 at 16:51
To: OpenStack Development Mailing List 
Subject: Re: [openstack-dev] [all] [tc] Community Goals for Rocky

    Here's a quick update before the weekend:

    2 goals were proposed to governance:

    Remove mox
    https://review.openstack.org/#/c/532361/
    Champion: Sean McGinnis (unless someone else steps up)

    Ensure pagination links
    https://review.openstack.org/#/c/532627/
    Champion: Monty Taylor

    2 more goals are about to be proposed:

    Enable mutable configuration
    Champion: ChangBo Guo

    Cold upgrades capabilities
    Champion: Masayuki Igawa

    Thanks everyone for your participation,
    We hope to make a vote within the next 2 weeks so we can prepare the
    PTG accordingly.

    On Tue, Jan 9, 2018 at 10:37 AM, Emilien Macchi wrote:
    > As promised, let's continue the discussion and move things forward.
    >
    > This morning Thierry brought the discussion during the TC office hour
    > (that I couldn't attend due to timezone):
    > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2018-01-09T09:18:33
    >
    > Some outputs:
    >
    > - One goal has been proposed so far.
    >
    > Right now, we only have one goal proposal: Storyboard Migration. There
    > are some concerns about the ability to achieve this goal in 6 months.
> At that point, we think it would be great to postpone the goal to S > cycle, continue the progress (kudos to Kendall) and fine other goals > for Rocky. > > > - We still have a good backlog of goals, we're just missing champions. > > https://etherpad.openstack.org/p/community-goals > > Chris brought up "pagination links in collection resources" in api-wg > guidelines theme. He said in the past this goal was more a "should" > than a "must". > Thierry mentioned privsep migration (done in Nova and Zun). (action, > ping mikal about it). > Thierry also brought up the version discovery (proposed by Monty). > Flavio proposed mutable configuration, which might be very useful for operators. > He also mentioned that IPv6 support goal shouldn't be that far from > done, but we're currently lacking in CI jobs that test IPv6 > deployments (question for infra/QA, can we maybe document the gap so > we can run some gate jobs on ipv6 ?) > (personal note on that one, since TripleO & Puppet OpenStack CI > already have IPv6 jobs, we can indeed be confident that it shouldn't > be that hard to complete this goal in 6 months, I guess the work needs > to happen in the projects layouts). > Another interesting goal proposed by Thierry, also useful for > operators, is to move more projects to assert:supports-upgrade tag. > Thierry said we are probably not that far from this goal, but the > major lack is in testing. > Finally, another "simple" goal is to remove mox/mox3 (Flavio said most > of projects don't use it anymore already). > > With that said, let's continue the discussion on these goals, see > which ones can be actionable and find champions. > > - Flavio asked how would it be perceived if one cycle wouldn't have at > least one community goal. > > Thierry said we could introduce multi-cycle goals (Storyboard might be > a good candidate). > Chris and Thierry thought that it would be a bad sign for our > community to not have community goals during a cycle, "loss of > momentum" eventually. > > > Thanks for reading so far, > > On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi wrote: >> On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi wrote: >> [...] >>> Suggestions are welcome: >>> - on the mailing-list, in a new thread per goal [all] [tc] Proposing >>> goal XYZ for Rocky >>> - on Gerrit in openstack/governance like Kendall did. >> >> Just a fresh reminder about Rocky goals. >> A few questions that we can ask ourselves: >> >> 1) What common challenges do we have? >> >> e.g. Some projects don't have mutable configuration or some projects >> aren't tested against IPv6 clouds, etc. >> >> 2) Who is willing to drive a community goal (a.k.a. Champion)? >> >> note: a Champion is someone who volunteer to drive the goal, but >> doesn't commit to write the code necessarily. The Champion will >> communicate with projects PTLs about the goal, and make the liaison if >> needed. >> >> The list of ideas for Community Goals is documented here: >> https://etherpad.openstack.org/p/community-goals >> >> Please be involved and propose some ideas, I'm sure our community has >> some common goals, right ? :-) >> Thanks, and happy holidays. I'll follow-up in January of next year. 
>> --
>> Emilien Macchi

> --
> Emilien Macchi

--
Emilien Macchi

From wenranxiao at gmail.com  Fri Jan 12 17:13:56 2018
From: wenranxiao at gmail.com (wenran xiao)
Date: Sat, 13 Jan 2018 01:13:56 +0800
Subject: [openstack-dev] [network-ovn] SNAT traffic in OVN.
Message-ID: 

Hi all,
Networking-ovn will support distributed floating IPs; how about SNAT
traffic? Will it be handled on every compute node, or not?
Any suggestions are welcome.

From colleen at gazlene.net  Fri Jan 12 19:51:41 2018
From: colleen at gazlene.net (Colleen Murphy)
Date: Fri, 12 Jan 2018 20:51:41 +0100
Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 8 January 2018
Message-ID: 

# Keystone Team Update - Week of 8 January 2018

## News

### PTG

We're still brainstorming for the PTG. If you have any topic proposals,
please add them to the etherpad[1]. We'll need to coordinate with other
teams to schedule discussion time on our major cross-project topics,
especially unified limits and policy changes.

### Scope types ambiguity

In the keystone and policy meetings Lance highlighted a number of APIs
that could have different behaviors depending on whether the requester
uses the new system scope or a project scope, for example in the projects
API[2]. Keystone currently doesn't treat these scopes differently, but
we'll be marking and tracking APIs and policies that need to have this
addressed. It's likely that we'll have to help other projects deal with
such ambiguous APIs in the future as well.

[1] https://etherpad.openstack.org/p/keystone-rocky-ptg
[2] https://review.openstack.org/#/c/526159/3/keystone/common/policies/project.py at 32

## Recently Merged Changes

Search query: https://goo.gl/hdD9Kw

We merged 16 changes this week. Many of these were organizational changes
to our api-ref from our Outreachy intern, Suramya, who has been helping
us make our api-ref more consistent. We've also merged some more of
Lance's system-scope changes, though those are still making their way
through the gate.

## Changes that need Attention

Search query: https://goo.gl/h9knRA

There are 79 changes that are passing CI, not in merge conflict, have no
negative reviews, and aren't proposed by bots. Among these are a number
of fixes for the api-ref and several changes to add the scope_types
option to our policies.

## Milestone Outlook

https://releases.openstack.org/queens/schedule.html

Our feature freeze is in two weeks. Please help review changes for
system-scope, application credentials, and unified limits so we can meet
this deadline.

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad:
https://etherpad.openstack.org/p/keystone-team-newsletter

From doug at doughellmann.com  Fri Jan 12 20:37:42 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 12 Jan 2018 15:37:42 -0500
Subject: [openstack-dev] [tc][ptl][goals][storyboard] tracking the rocky goals with storyboard
Message-ID: <1515789340-sup-6629@lrrr.local>

Since we are discussing goals for the Rocky cycle, I would like to
propose a change to the way we track progress on the goals.
We've started to see lots and lots of changes to the goal documents, more than anticipated when we designed the system originally. That leads to code review churn within the governance repo, and it means the goal champions have to wait for the TC to review changes before they have complete tracking information published somewhere. We've talked about moving the tracking out of git and using an etherpad or a wiki page, but I propose that we use storyboard. Specifically, I think we should create 1 story for each goal, and one task for each project within the goal. We can then use a board to track progress, with lanes like "New", "Acknowledged", "In Progress", "Completed", and "Not Applicable". It would be the responsibility of the goal champion to create the board, story, and tasks and provide links to the board and story in the goal document (so we only need 1 edit after the goal is approved). From that point on, teams and goal champions could collaborate on keeping the board up to date. Not all projects are registered in storyboard, yet. Since that migration is itself a goal under discussion, I think for now we can just associate all tasks with the governance repository. It doesn't look like changes to a board trigger any sort of notifications for the tasks or stories involved, but that's probably OK. If we really want notifications we can look at adding them as a feature of Storyboard at the board level. How does this sound as an approach? Does anyone have any reservations about using storyboard this way? Doug From thingee at gmail.com Fri Jan 12 20:44:36 2018 From: thingee at gmail.com (Mike Perez) Date: Fri, 12 Jan 2018 12:44:36 -0800 Subject: [openstack-dev] Developer Mailing List Digest January 5-12th Message-ID: <20180112204436.GA3640@gmail.com> Contribute to the Dev Digest by summarizing OpenStack Dev List thread: * https://etherpad.openstack.org/p/devdigest * http://lists.openstack.org/pipermail/openstack-dev/ * http://lists.openstack.org/pipermail/openstack-sigs HTML version: https://www.openstack.org/blog/2018/01/developer-mailing-list-digest-january-5-12th/ Success Bot Says ================ * e0ne on #openstack-horizon [0]: amotoki runs horizon with django 2.0 * tristianC on #rdo [1]: review.rdoproject.org is now running sf-2.7 * mriedem on #openstack-nova [2]: nova merged alternate hosts support for server build * mriedem on #openstack-nova [3]: After a week of problems, finally got a volume multiattach test run to actually attach a volume to two instances without melting the world. 
\o/
* zaneb [4]: 14% reduction in Heat memory use in the TripleO gate from
  fixing https://bugs.launchpad.net/heat/+bug/1731349
* Tell us yours in OpenStack IRC channels using the command
  "#success <message>"
* More: https://wiki.openstack.org/wiki/Successes

[0] - http://eavesdrop.openstack.org/irclogs/%23openstack-horizon/%23openstack-horizon.2017-12-18.log.html
[1] - http://eavesdrop.openstack.org/irclogs/%23rdo/%23rdo.2017-12-21.log.html
[2] - http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-12-22.log.html
[3] - http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-05.log.html
[4] - http://eavesdrop.openstack.org/irclogs/%23tripleo/%23tripleo.2018-01-09.log.html

Community Summaries
===================
* Technical Committee Status update [0]
* POST /api-sig/news [1]
* Release countdown [2]
* Nova placement resource provider update [3]
* Keystone team update [4]
* Nova Notification Update [5]
* TC report [6]

[0] - http://lists.openstack.org/pipermail/openstack-dev/2018-January/126178.html
[1] - http://lists.openstack.org/pipermail/openstack-dev/2018-January/126147.html
[2] - http://lists.openstack.org/pipermail/openstack-dev/2018-January/125996.html
[3] - http://lists.openstack.org/pipermail/openstack-dev/2018-January/126179.html
[4] - http://lists.openstack.org/pipermail/openstack-dev/2018-January/126188.html
[5] - http://lists.openstack.org/pipermail/openstack-dev/2018-January/126025.html
[6] - http://lists.openstack.org/pipermail/openstack-dev/2018-January/126082.html

Community Goals for Rocky
=========================
So far one goal has been proposed, by Kendall Nelson, for migrating to
Storyboard. It was agreed to postpone the goal until the S cycle, as it
could take longer than six months to achieve. There is a good backlog of
goals [0], just no champions. It'll be bad for momentum if we have a
cycle with no community-wide goal.

[0] - https://etherpad.openstack.org/p/community-goals

Full thread: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126090.html

PTG Post-lunch Presentations
============================
Feedback from past PTG sessions pointed to a lack of situational
awareness and a missed opportunity for "global" communication at the
event. In Dublin we'd use the end of the lunch break for communications
that could be interesting to OpenStack upstream developers and project
team members. The idea is not to find a presentation for every day, but
to use the slot when we find content that is generally useful.
Interesting topics include general guidance to make the most of the PTG
weeks (good Monday content), development tricks, code review etiquette,
new library features you should adopt, and lightning talks (good Friday
content). We'd like to keep the slot under 20 minutes. If you have ideas,
please fill out this etherpad [0] over the next few weeks.

[0] - https://etherpad.openstack.org/p/dublin-PTG-postlunch

Full thread: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126102.html

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Fri Jan 12 20:57:44 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 12 Jan 2018 15:57:44 -0500 Subject: [openstack-dev] [storyboard] need help figuring out how to use auth with storyboard client Message-ID: <1515790548-sup-2612@lrrr.local> The storyboard client docs mention an "access token" [1] as something a client needs in order to create stories and make other sorts of changes. They don't explain what that token is or how to get one, though. Where do I get a token? How long does the token work? Can I safely put a token in a configuration file, or do I need to get a new one each time I want to do something with the client? Doug [1] https://docs.openstack.org/infra/python-storyboardclient/usage.html From lauren at openstack.org Fri Jan 12 20:59:04 2018 From: lauren at openstack.org (Lauren Sell) Date: Fri, 12 Jan 2018 14:59:04 -0600 Subject: [openstack-dev] =?utf-8?q?Vancouver_Summit_CFP_is_open_-_what?= =?utf-8?b?4oCZcyBuZXc=?= Message-ID: <4572415A-B17D-44A2-967D-61376515BD24@openstack.org> Hi everyone, Today, we opened the Call for Presentations for the Vancouver Summit , which will take place May 21-24. The deadline to submit your proposal is February 8th. What’s New? We’re focused on open infrastructure integration. The Summit has evolved over the years to cover more than just OpenStack, but we’re making an even bigger effort to attract speakers across the open infrastructure ecosystem. In addition to OpenStack-related sessions, we’ll be featuring the newest project at the Foundation -- Kata Containers -- as well as recruiting many others from projects like Ansible, Ceph, Kubernetes, ONAP and many more. We’ve also organized Tracks around specific problem domains. We encourage you to submit proposals covering OpenStack and the “open infrastructure” tools you’re using, as well as the integration work needed to address these problem domains. We also encourage you to invite peers from other open source communities to come speak and collaborate. The Tracks are: CI/CD Container Infrastructure Edge Computing HPC / GPU / AI Open Source Community Private & Hybrid Cloud Public Cloud Telecom & NFV Where previously we had Track Chairs, we now have Programming Committees for each Track, made up of both Members and a Chair (or co-chairs). We’re also recruiting members and chairs from many different open source communities working in open infrastructure, in addition to the many familiar faces in the OpenStack community who will lead the effort. If you’re interested in nominating yourself or someone else to be a member of the Summit Programming Committee for a specific Track, please fill out the nomination form . Nominations will close on January 26, 2018. Again, the deadline to submit proposals is February 8, 2018. Please note topic submissions for the OpenStack Forum (planning/working sessions with OpenStack devs and operators) will open at a later date. We can’t wait to see you in Vancouver! We’re working hard to make it the best Summit yet, and look forward to bringing together different open infrastructure communities to solve these hard problems together! Want to provide feedback on this process? Please focus discussion on the openstack-community mailing list, or contact me or the OpenStack Foundation Summit Team directly at summit at openstack.org. Thank you, Lauren -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emilien at redhat.com Fri Jan 12 21:11:40 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 12 Jan 2018 13:11:40 -0800 Subject: [openstack-dev] [tc][ptl][goals][storyboard] tracking the rocky goals with storyboard In-Reply-To: <1515789340-sup-6629@lrrr.local> References: <1515789340-sup-6629@lrrr.local> Message-ID: On Fri, Jan 12, 2018 at 12:37 PM, Doug Hellmann wrote: > Since we are discussing goals for the Rocky cycle, I would like to > propose a change to the way we track progress on the goals. > > We've started to see lots and lots of changes to the goal documents, > more than anticipated when we designed the system originally. That > leads to code review churn within the governance repo, and it means > the goal champions have to wait for the TC to review changes before > they have complete tracking information published somewhere. We've > talked about moving the tracking out of git and using an etherpad > or a wiki page, but I propose that we use storyboard. > > Specifically, I think we should create 1 story for each goal, and > one task for each project within the goal. We can then use a board > to track progress, with lanes like "New", "Acknowledged", "In > Progress", "Completed", and "Not Applicable". It would be the > responsibility of the goal champion to create the board, story, and > tasks and provide links to the board and story in the goal document > (so we only need 1 edit after the goal is approved). From that point > on, teams and goal champions could collaborate on keeping the board > up to date. > > Not all projects are registered in storyboard, yet. Since that > migration is itself a goal under discussion, I think for now we can > just associate all tasks with the governance repository. > > It doesn't look like changes to a board trigger any sort of > notifications for the tasks or stories involved, but that's probably > OK. If we really want notifications we can look at adding them as > a feature of Storyboard at the board level. > > How does this sound as an approach? Does anyone have any reservations > about using storyboard this way? Sounds like a good idea, and will help to "Eat Our Own Dog Food" (if we want Storyboard adopted at some point). -- Emilien Macchi From pabelanger at redhat.com Fri Jan 12 21:24:45 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Fri, 12 Jan 2018 16:24:45 -0500 Subject: [openstack-dev] [tc][ptl][goals][storyboard] tracking the rocky goals with storyboard In-Reply-To: References: <1515789340-sup-6629@lrrr.local> Message-ID: <20180112212445.GA649@localhost.localdomain> On Fri, Jan 12, 2018 at 01:11:40PM -0800, Emilien Macchi wrote: > On Fri, Jan 12, 2018 at 12:37 PM, Doug Hellmann wrote: > > Since we are discussing goals for the Rocky cycle, I would like to > > propose a change to the way we track progress on the goals. > > > > We've started to see lots and lots of changes to the goal documents, > > more than anticipated when we designed the system originally. That > > leads to code review churn within the governance repo, and it means > > the goal champions have to wait for the TC to review changes before > > they have complete tracking information published somewhere. We've > > talked about moving the tracking out of git and using an etherpad > > or a wiki page, but I propose that we use storyboard. > > > > Specifically, I think we should create 1 story for each goal, and > > one task for each project within the goal. 
We can then use a board > > to track progress, with lanes like "New", "Acknowledged", "In > > Progress", "Completed", and "Not Applicable". It would be the > > responsibility of the goal champion to create the board, story, and > > tasks and provide links to the board and story in the goal document > > (so we only need 1 edit after the goal is approved). From that point > > on, teams and goal champions could collaborate on keeping the board > > up to date. > > > > Not all projects are registered in storyboard, yet. Since that > > migration is itself a goal under discussion, I think for now we can > > just associate all tasks with the governance repository. > > > > It doesn't look like changes to a board trigger any sort of > > notifications for the tasks or stories involved, but that's probably > > OK. If we really want notifications we can look at adding them as > > a feature of Storyboard at the board level. > > > > How does this sound as an approach? Does anyone have any reservations > > about using storyboard this way? > > Sounds like a good idea, and will help to "Eat Our Own Dog Food" (if > we want Storyboard adopted at some point). > Agree, I've seen some downstream teams also do this with trello. If people would like to try with Storyboard, I don't have any objections. -Paul From cboylan at sapwetik.org Fri Jan 12 21:26:47 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 12 Jan 2018 13:26:47 -0800 Subject: [openstack-dev] [storyboard] need help figuring out how to use auth with storyboard client In-Reply-To: <1515790548-sup-2612@lrrr.local> References: <1515790548-sup-2612@lrrr.local> Message-ID: <1515792407.2983597.1233642400.0FE09AF8@webmail.messagingengine.com> On Fri, Jan 12, 2018, at 12:57 PM, Doug Hellmann wrote: > The storyboard client docs mention an "access token" [1] as something > a client needs in order to create stories and make other sorts of > changes. They don't explain what that token is or how to get one, > though. > > Where do I get a token? How long does the token work? Can I safely > put a token in a configuration file, or do I need to get a new one > each time I want to do something with the client? > > Doug > > [1] https://docs.openstack.org/infra/python-storyboardclient/usage.html > The storyboard api docs [2] point to this location under your userprofile [3], though it seems to not be directly linked to in the storyboard UI. And there are docs for managing subsequent user tokens further down in the api docs [4]. I've not used any of this so unsure how accurate it is, but hope this is enough to get you going with storyboardclient. [2] https://docs.openstack.org/infra/storyboard/webapi/v1.html#api [3] https://storyboard.openstack.org/#!/profile/tokens [4] https://docs.openstack.org/infra/storyboard/webapi/v1.html#user-tokens Clark From sean.mcginnis at gmx.com Fri Jan 12 21:29:06 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 12 Jan 2018 15:29:06 -0600 Subject: [openstack-dev] [tc][ptl][goals][storyboard] tracking the rocky goals with storyboard In-Reply-To: References: <1515789340-sup-6629@lrrr.local> Message-ID: <20180112212905.GA22417@sm-xps> > > > > How does this sound as an approach? Does anyone have any reservations > > about using storyboard this way? > > Sounds like a good idea, and will help to "Eat Our Own Dog Food" (if > we want Storyboard adopted at some point). My thoughts as well. 
This would be a good way to get more people exposed to storyboard, and a
way to get more runtime on storyboard, so that when the time comes to
migrate away from launchpad there is a higher comfort level with the tool
and less chance of surprises.

Sean

From fungi at yuggoth.org  Fri Jan 12 21:30:26 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 12 Jan 2018 21:30:26 +0000
Subject: [openstack-dev] [storyboard] need help figuring out how to use auth with storyboard client
In-Reply-To: <1515790548-sup-2612@lrrr.local>
References: <1515790548-sup-2612@lrrr.local>
Message-ID: <20180112213026.2q2ioax6yvhx75ov@yuggoth.org>

On 2018-01-12 15:57:44 -0500 (-0500), Doug Hellmann wrote:
> The storyboard client docs mention an "access token" [1] as something
> a client needs in order to create stories and make other sorts of
> changes. They don't explain what that token is or how to get one,
> though.
>
> Where do I get a token? How long does the token work? Can I safely
> put a token in a configuration file, or do I need to get a new one
> each time I want to do something with the client?

https://docs.openstack.org/infra/storyboard/webapi/v1.html#api suggests
that logging in and going to
https://storyboard.openstack.org/#!/profile/tokens will allow you to
issue one (with up to a 10-year expiration, based on my modest
experimentation). I believe this to be the same solution we're using to
allow the storyboard-its Gerrit plugin to update tasks/stories from
review.openstack.org.
--
Jeremy Stanley

From pabelanger at redhat.com  Fri Jan 12 21:37:54 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Fri, 12 Jan 2018 16:37:54 -0500
Subject: [openstack-dev] [qa][neutron][octavia][horizon][networking-l2gw] Renaming tox_venvlist in Zuul v3 run-tempest
In-Reply-To: <81bf1e9d-2180-4ff6-0b9a-c1873ab285bb@suse.com>
References: <81bf1e9d-2180-4ff6-0b9a-c1873ab285bb@suse.com>
Message-ID: <20180112213754.GB649@localhost.localdomain>

On Fri, Jan 12, 2018 at 05:13:28PM +0100, Andreas Jaeger wrote:
> The Zuul v3 tox jobs use "tox_envlist" to name the tox environment to
> use; the tempest run-tempest role used "tox_venvlist", with an extra "v"
> in it. This led to some confusion and a wrong fix, so let's be
> consistent across these jobs.
>
> I've just pushed changes under the topic tox_envlist to sync these.
>
> To have working jobs, I needed the usual rename dance: add the new
> variable, change the job, remove the old one.
>
> Neutron, octavia, horizon, networking-l2gw teams, please review and
> merge the first one quickly:
>
> https://review.openstack.org/#/q/topic:tox_envlist
>
++

Agree. In fact, it would be good to see what would need to change in our
existing run-tox role so that tempest could consume it directly instead
of using its own tasks for running tox.

From kennelson11 at gmail.com  Fri Jan 12 21:54:46 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Fri, 12 Jan 2018 21:54:46 +0000
Subject: [openstack-dev] [tc][ptl][goals][storyboard] tracking the rocky goals with storyboard
In-Reply-To: 
References: <1515789340-sup-6629@lrrr.local>
Message-ID: 

I think this is a great idea! This would really help with getting more
eyes on StoryBoard.
I think a lot of people haven't touched it much in the last year or two and aren't aware of all of its capabilities at this point and this would be a great way to get people up to speed. This use case (cross project efforts) is also what StoryBoard was built for after all :) If anyone has questions about StoryBoard or feedback, please join our channel #storyboard! Thanks Doug! -Kendall (diablo_rojo) On Fri, Jan 12, 2018 at 12:38 PM Doug Hellmann wrote: > Since we are discussing goals for the Rocky cycle, I would like to > propose a change to the way we track progress on the goals. > > We've started to see lots and lots of changes to the goal documents, > more than anticipated when we designed the system originally. That > leads to code review churn within the governance repo, and it means > the goal champions have to wait for the TC to review changes before > they have complete tracking information published somewhere. We've > talked about moving the tracking out of git and using an etherpad > or a wiki page, but I propose that we use storyboard. > > Specifically, I think we should create 1 story for each goal, and > one task for each project within the goal. We can then use a board > to track progress, with lanes like "New", "Acknowledged", "In > Progress", "Completed", and "Not Applicable". It would be the > responsibility of the goal champion to create the board, story, and > tasks and provide links to the board and story in the goal document > (so we only need 1 edit after the goal is approved). From that point > on, teams and goal champions could collaborate on keeping the board > up to date. > > Not all projects are registered in storyboard, yet. Since that > migration is itself a goal under discussion, I think for now we can > just associate all tasks with the governance repository. > > It doesn't look like changes to a board trigger any sort of > notifications for the tasks or stories involved, but that's probably > OK. If we really want notifications we can look at adding them as > a feature of Storyboard at the board level. > > How does this sound as an approach? Does anyone have any reservations > about using storyboard this way? > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Jan 12 22:19:32 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 12 Jan 2018 16:19:32 -0600 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky In-Reply-To: <397BB99F-D7B2-47B3-9724-E8B628EFD5C2@cern.ch> References: <397BB99F-D7B2-47B3-9724-E8B628EFD5C2@cern.ch> Message-ID: <76c4df1e-2e82-96c6-a983-36040855a42d@gmail.com> On 01/12/2018 11:09 AM, Tim Bell wrote: > I was reading a tweet from Jean-Daniel and wondering if there would be an appropriate community goal regarding support of some of the later API versions or whether this would be more of a per-project goal. 
> > https://twitter.com/pilgrimstack/status/951860289141641217 > > Interesting numbers about customers' tools used to talk to our @OpenStack APIs and the Keystone v3 compatibility: > - 10% are not KeystoneV3 compatible > - 16% are compatible > - for the rest, the tools documentation has no info > > I think Keystone V3 and Glance V2 are the ones with APIs which have moved on significantly from the initial implementations and not all projects have been keeping up. Yeah, I'm super interested in this, too. I'll be honest, I'm not quite sure where to start. If the tools are open source we can start contributing to them directly. > > Tim > > -----Original Message----- > From: Emilien Macchi > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Friday, 12 January 2018 at 16:51 > To: OpenStack Development Mailing List > Subject: Re: [openstack-dev] [all] [tc] Community Goals for Rocky > > Here's a quick update before the weekend: > > 2 goals were proposed to governance: > > Remove mox > https://review.openstack.org/#/c/532361/ > Champion: Sean McGinnis (unless someone else steps up) > > Ensure pagination links > https://review.openstack.org/#/c/532627/ > Champion: Monty Taylor > > 2 more goals are about to be proposed: > > Enable mutable configuration > Champion: ChangBo Guo > > Cold upgrades capabilities > Champion: Masayuki Igawa > > > Thanks everyone for your participation, > We hope to make a vote within the next 2 weeks so we can prepare the > PTG accordingly. > > On Tue, Jan 9, 2018 at 10:37 AM, Emilien Macchi wrote: > > As promised, let's continue the discussion and move things forward. > > > > This morning Thierry brought the discussion during the TC office hour > > (that I couldn't attend due to timezone): > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2018-01-09T09:18:33 > > > > Some outputs: > > > > - One goal has been proposed so far. > > > > Right now, we only have one goal proposal: Storyboard Migration. There > > are some concerns about the ability to achieve this goal in 6 months. > > At that point, we think it would be great to postpone the goal to the S > > cycle, continue the progress (kudos to Kendall) and find other goals > > for Rocky. > > > > > > - We still have a good backlog of goals, we're just missing champions. > > > > https://etherpad.openstack.org/p/community-goals > > > > Chris brought up "pagination links in collection resources" in the api-wg > > guidelines theme. He said in the past this goal was more a "should" > > than a "must". > > Thierry mentioned privsep migration (done in Nova and Zun). (action, > > ping mikal about it). > > Thierry also brought up the version discovery (proposed by Monty). > > Flavio proposed mutable configuration, which might be very useful for operators. > > He also mentioned that the IPv6 support goal shouldn't be that far from > > done, but we're currently lacking in CI jobs that test IPv6 > > deployments (question for infra/QA, can we maybe document the gap so > > we can run some gate jobs on ipv6 ?) > > (personal note on that one, since TripleO & Puppet OpenStack CI > > already have IPv6 jobs, we can indeed be confident that it shouldn't > > be that hard to complete this goal in 6 months, I guess the work needs > > to happen in the projects' layouts). > > Another interesting goal proposed by Thierry, also useful for > > operators, is to move more projects to the assert:supports-upgrade tag. > > Thierry said we are probably not that far from this goal, but the > > main gap is in testing.
> > Finally, another "simple" goal is to remove mox/mox3 (Flavio said most > > projects don't use it anymore). > > > > With that said, let's continue the discussion on these goals, see > > which ones can be actionable and find champions. > > > > - Flavio asked how it would be perceived if one cycle wouldn't have at > > least one community goal. > > > > Thierry said we could introduce multi-cycle goals (Storyboard might be > > a good candidate). > > Chris and Thierry thought that it would be a bad sign for our > > community to not have community goals during a cycle, "loss of > > momentum" eventually. > > > > > > Thanks for reading so far, > > > > On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi wrote: > >> On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi wrote: > >> [...] > >>> Suggestions are welcome: > >>> - on the mailing-list, in a new thread per goal [all] [tc] Proposing > >>> goal XYZ for Rocky > >>> - on Gerrit in openstack/governance like Kendall did. > >> > >> Just a fresh reminder about Rocky goals. > >> A few questions that we can ask ourselves: > >> > >> 1) What common challenges do we have? > >> > >> e.g. Some projects don't have mutable configuration or some projects > >> aren't tested against IPv6 clouds, etc. > >> > >> 2) Who is willing to drive a community goal (a.k.a. Champion)? > >> > >> note: a Champion is someone who volunteers to drive the goal, but > >> doesn't necessarily commit to writing the code. The Champion will > >> communicate with project PTLs about the goal, and act as the liaison if > >> needed. > >> > >> The list of ideas for Community Goals is documented here: > >> https://etherpad.openstack.org/p/community-goals > >> > >> Please be involved and propose some ideas, I'm sure our community has > >> some common goals, right ? :-) > >> Thanks, and happy holidays. I'll follow-up in January of next year. > >> -- > >> Emilien Macchi > > > > > > > > -- > > Emilien Macchi > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From saverio.proto at switch.ch Sat Jan 13 06:06:13 2018 From: saverio.proto at switch.ch (Saverio Proto) Date: Sat, 13 Jan 2018 07:06:13 +0100 Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: <1515771070-sup-7997@lrrr.local> References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> Message-ID: <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> > I don't think this is a configuration problem. > > Which version of the oslo.log library do you have installed?
Hello Doug, I use the Ubuntu packages; at the moment I have this version: python-oslo.log 3.16.0-0ubuntu1~cloud0 thank you Saverio From mthode at mthode.org Sat Jan 13 06:41:28 2018 From: mthode at mthode.org (Matthew Thode) Date: Sat, 13 Jan 2018 00:41:28 -0600 Subject: [openstack-dev] [glance][oslo][requirements] oslo.serialization fails with glance Message-ID: <20180113064128.byill2yngkjgbys2@mthode.org> https://review.openstack.org/531788 is the review where we are seeing it, but 2.22.0 failed as well. I'm guessing it was introduced in either https://github.com/openstack/oslo.serialization/commit/c1a7079c26d27a2e46cca26963d3d9aa040bdbe8 or https://github.com/openstack/oslo.serialization/commit/cdb2f60d26e3b65b6370f87b2e9864045651c117 -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From moshele at mellanox.com Sun Jan 14 07:46:28 2018 From: moshele at mellanox.com (Moshe Levi) Date: Sun, 14 Jan 2018 07:46:28 +0000 Subject: [openstack-dev] [tripleO] quickstart with containers deployment failed Message-ID: Hi, We are trying to add container support for OVS HW offload. We were able to do the deployment a few weeks ago, but now we are getting errors. Jan 14 07:14:32 localhost os-collect-config: "2018-01-14 07:14:28,568 WARNING: 18476 -- retrying pulling image: 192.168.24.1:8787/tripleomaster/centos-binary-neutron-server:current-tripleo", Jan 14 07:14:32 localhost os-collect-config: "2018-01-14 07:14:31,587 WARNING: 18476 -- docker pull failed: Get https://192.168.24.1:8787/v1/_ping: dial tcp 192.168.24.1:8787: getsockopt: connection refused", See the log below [1]. We tried to run openstack overcloud container image tag discover --image trunk.registry.rdoproject.org/master/centos-binary-base:current-tripleo-rdo --tag-from-label rdo_version and we are getting the errors below [2]. Help would be much appreciated. [1] http://paste.openstack.org/show/644227/ [2] http://paste.openstack.org/show/644228/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Sun Jan 14 17:36:45 2018 From: aschultz at redhat.com (Alex Schultz) Date: Sun, 14 Jan 2018 10:36:45 -0700 Subject: [openstack-dev] [tripleo][ptl] TripleO PTL unavailable Message-ID: Due to a loss in my family, I will not be around for the next few weeks. If you have any TripleO issues, please reach out to Emilien Macchi (emilien at redhat.com) or Steven Hardy (shardy at redhat.com). Thanks, -Alex From chao.xu at timanetworks.com Mon Jan 15 02:19:44 2018 From: chao.xu at timanetworks.com (chao.xu) Date: Mon, 15 Jan 2018 10:19:44 +0800 Subject: [openstack-dev] Does the OpenStack Magnum service support Kubernetes Cluster/Bay auto scaling? Message-ID: <201801151019442903991@timanetworks.com> Hi all, Does the OpenStack Magnum container service in the Pike version support auto scaling of Kubernetes Cluster/Bay nodes? At present, the Swarm backend engine supports Cluster/Bay node auto scaling. Best Regards chao.xu -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichihara.hirofumi at gmail.com Mon Jan 15 06:42:34 2018 From: ichihara.hirofumi at gmail.com (Hirofumi Ichihara) Date: Mon, 15 Jan 2018 15:42:34 +0900 Subject: [openstack-dev] [Neutron] Bug deputy report Message-ID: Hi all, There is no critical bug, but I'd like to report some bugs from last week.
Three high-priority bugs: - https://bugs.launchpad.net/neutron/+bug/1741954 : create_and_list_trunk_subports rally scenario failed with timeouts - https://bugs.launchpad.net/neutron/+bug/1742401 : Fullstack tests neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork fails often - https://bugs.launchpad.net/neutron/+bug/1741889 : functional: DbAddCommand sometimes times out after 10 seconds The following bug is difficult to judge; feedback from Miguel as L3 Lieutenant would be appreciated. https://bugs.launchpad.net/neutron/+bug/1742093 : ip_allocation attribute is not accessible over REST requests Thanks, Hirofumi -------------- next part -------------- An HTML attachment was scrubbed... URL: From wenranxiao at gmail.com Mon Jan 15 07:18:08 2018 From: wenranxiao at gmail.com (wenran xiao) Date: Mon, 15 Jan 2018 15:18:08 +0800 Subject: [openstack-dev] [neutron] [OVN] L3 traffic Message-ID: Hey all, I have found that networking-ovn will support distributed floating IPs ( https://docs.openstack.org/releasenotes/networking-ovn/unreleased.html). How about SNAT in the future? Will a network node still be needed or not? Any suggestions are welcome. Best regards Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Mon Jan 15 09:11:04 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 15 Jan 2018 16:11:04 +0700 Subject: [openstack-dev] [mistral] Adding Adriano Petrich to the core team Message-ID: <352f9934-be79-46c0-bdfb-a20cbd4d22b6@Spark> Hi, I’d like to promote Adriano Petrich to the Mistral core team. Adriano has shown a good review rate and quality over at least the last two cycles and implemented several important features (including new useful YAQL/JINJA functions). Please vote whether you agree to add Adriano to the core team. Adriano’s statistics: http://stackalytics.com/?module=mistral-group&release=queens&user_id=apetrich Thanks Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From shardy at redhat.com Mon Jan 15 09:24:45 2018 From: shardy at redhat.com (Steven Hardy) Date: Mon, 15 Jan 2018 09:24:45 +0000 Subject: Re: [openstack-dev] [tripleO] quickstart with containers deployment failed In-Reply-To: References: Message-ID: On Sun, Jan 14, 2018 at 7:46 AM, Moshe Levi wrote: > Hi, > > > > We are trying to add container support for OVS HW offload. > > We were able to do the deployment a few weeks ago, but now we are getting > errors. > > Jan 14 07:14:32 localhost os-collect-config: "2018-01-14 07:14:28,568 > WARNING: 18476 -- retrying pulling image: > 192.168.24.1:8787/tripleomaster/centos-binary-neutron-server:current-tripleo", > > Jan 14 07:14:32 localhost os-collect-config: "2018-01-14 07:14:31,587 > WARNING: 18476 -- docker pull failed: Get > > https://192.168.24.1:8787/v1/_ping: dial tcp 192.168.24.1:8787: getsockopt: > connection refused", Sounds like the docker registry is not running on the undercloud?
How does your environment compare to these outputs: (undercloud) [stack at undercloud tripleo-heat-templates]$ sudo systemctl status docker-distribution ● docker-distribution.service - v2 Registry server for Docker Loaded: loaded (/usr/lib/systemd/system/docker-distribution.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2018-01-12 11:39:25 UTC; 2 days ago Main PID: 5859 (registry) CGroup: /system.slice/docker-distribution.service └─5859 /usr/bin/registry serve /etc/docker-distribution/registry/config.yml (undercloud) [stack at undercloud tripleo-heat-templates]$ sudo netstat -taupen | grep registry tcp 0 0 192.168.24.1:8787 0.0.0.0:* LISTEN 0 47530 5859/registry (undercloud) [stack at undercloud tripleo-heat-templates]$ curl http://192.168.24.1:8787/v2/_catalog {"repositories":["tripleomaster/centos-binary-aodh-api","tripleomaster/centos-binary-aodh-evaluator","tripleomaster/centos-binary-aodh-listener","tripleomaster/centos-binary-aodh-notifier","tripleomaster/centos-binary-ceilometer-central","tripleomaster/centos-binary-ceilometer-compute","tripleomaster/centos-binary-ceilometer-notification","tripleomaster/centos-binary-cinder-api","tripleomaster/centos-binary-cinder-scheduler","tripleomaster/centos-binary-cinder-volume","tripleomaster/centos-binary-cron","tripleomaster/centos-binary-glance-api","tripleomaster/centos-binary-gnocchi-api","tripleomaster/centos-binary-gnocchi-metricd","tripleomaster/centos-binary-gnocchi-statsd","tripleomaster/centos-binary-haproxy","tripleomaster/centos-binary-heat-api","tripleomaster/centos-binary-heat-api-cfn","tripleomaster/centos-binary-heat-engine","tripleomaster/centos-binary-horizon","tripleomaster/centos-binary-iscsid","tripleomaster/centos-binary-keystone","tripleomaster/centos-binary-mariadb","tripleomaster/centos-binary-memcached","tripleomaster/centos-binary-neutron-dhcp-agent","tripleomaster/centos-binary-neutron-l3-agent","tripleomaster/centos-binary-neutron-metadata-agent","tripleomaster/centos-binary-neutron-openvswitch-agent","tripleomaster/centos-binary-neutron-server","tripleomaster/centos-binary-nova-api","tripleomaster/centos-binary-nova-compute","tripleomaster/centos-binary-nova-conductor","tripleomaster/centos-binary-nova-consoleauth","tripleomaster/centos-binary-nova-libvirt","tripleomaster/centos-binary-nova-novncproxy","tripleomaster/centos-binary-nova-placement-api","tripleomaster/centos-binary-nova-scheduler","tripleomaster/centos-binary-panko-api","tripleomaster/centos-binary-rabbitmq","tripleomaster/centos-binary-redis","tripleomaster/centos-binary-swift-account","tripleomaster/centos-binary-swift-container","tripleomaster/centos-binary-swift-object","tripleomaster/centos-binary-swift-proxy-server"]} (undercloud) [stack at undercloud tripleo-heat-templates]$ > See the log below [1]. > > We tried to run > > openstack overcloud container image tag discover --image > trunk.registry.rdoproject.org/master/centos-binary-base:current-tripleo-rdo > --tag-from-label rdo_version > > > > and we are getting the errors below [2] Ok, this looks like a different problem: does the quickstart-generated overcloud-prep-containers.sh script work? What --release argument did you give to quickstart? This may be a docs issue or a bug in the discover CLI, as running with the same image/tag also fails for me, but the overcloud-prep-containers.sh script (which doesn't use the discover CLI) works fine.
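As an aside, the reachability check above is easy to script; a small sketch using plain requests (the registry address mirrors the output above and will differ per deployment):

# Sketch: confirm the undercloud registry answers before deploying.
# The address mirrors the curl output above; adjust for your setup.
import requests

REGISTRY = "http://192.168.24.1:8787"

ping = requests.get(REGISTRY + "/v2/", timeout=5)
ping.raise_for_status()  # docker-distribution answers 200 on /v2/
repos = requests.get(REGISTRY + "/v2/_catalog", timeout=5).json()
print("registry up, %d repositories" % len(repos["repositories"]))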
Steve From thierry at openstack.org Mon Jan 15 09:59:51 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 15 Jan 2018 10:59:51 +0100 Subject: [openstack-dev] [tc][ptl][goals][storyboard] tracking the rocky goals with storyboard In-Reply-To: References: <1515789340-sup-6629@lrrr.local> Message-ID: Emilien Macchi wrote: > On Fri, Jan 12, 2018 at 12:37 PM, Doug Hellmann wrote: >> [...] >> How does this sound as an approach? Does anyone have any reservations >> about using storyboard this way? > > Sounds like a good idea, and will help to "Eat Our Own Dog Food" (if > we want Storyboard adopted at some point). +1! -- Thierry Carrez (ttx) From adam.coldrick at codethink.co.uk Mon Jan 15 11:26:14 2018 From: adam.coldrick at codethink.co.uk (Adam Coldrick) Date: Mon, 15 Jan 2018 11:26:14 +0000 Subject: [openstack-dev] [storyboard] need help figuring out how to use auth with storyboard client In-Reply-To: <20180112213026.2q2ioax6yvhx75ov@yuggoth.org> References: <1515790548-sup-2612@lrrr.local> <20180112213026.2q2ioax6yvhx75ov@yuggoth.org> Message-ID: <1516015574.5039.4.camel@codethink.co.uk> On Fri, 2018-01-12 at 21:30 +0000, Jeremy Stanley wrote: > On 2018-01-12 15:57:44 -0500 (-0500), Doug Hellmann wrote: > > The storyboard client docs mention an "access token" [1] as > > something > > a client needs in order to create stories and make other sorts of > > changes.  They don't explain what that token is or how to get one, > > though. > > > > Where do I get a token? How long does the token work? Can I safely > > put a token in a configuration file, or do I need to get a new one > > each time I want to do something with the client? > > https://docs.openstack.org/infra/storyboard/webapi/v1.html#api > suggests that logging in and going to > https://storyboard.openstack.org/#!/profile/tokens will allow you to > issue one (with up to a 10-year expiration based on my modest > experimentation). I believe this to be the same solution we're using > to grant teh storyboard-its Gerrit plugin to update tasks/stories > from review.openstack.org. This is likely the easiest solution. Some other options: - Admin users can issue tokens for any users, so an automation user could have a token granted by infra-root using the API directly (see the API docs[0] for detail). - Add some functionality in python-storyboardclient to handle authenticating with the OpenID provider that the API sends a redirect link for. [0]: https://docs.openstack.org/infra/storyboard/webapi/v1.html#post--v 1-users--user_id--tokens From aj at suse.com Mon Jan 15 12:52:13 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 15 Jan 2018 13:52:13 +0100 Subject: [openstack-dev] Retirement of astara repos? In-Reply-To: <0DE3CB09-5CA1-4557-9158-C40F0FC37E6E@mcclain.xyz> References: <572FF9CF-9AB5-4CBA-A4C8-26E7A012309E@gmx.com> <0DE3CB09-5CA1-4557-9158-C40F0FC37E6E@mcclain.xyz> Message-ID: <2911309d-5bf7-9475-02ed-5970569cf620@suse.com> On 2018-01-11 22:55, Mark McClain wrote: > Sean, Andreas- > > Sorry I missed Andres’ message earlier in December about retiring astara. Everyone is correct that development stopped a good while ago. We attempted in Barcelona to find others in the community to take over the day-to-day management of the project. Unfortunately, nothing sustained resulted from that session. > > I’ve intentionally delayed archiving the repos because of background conversations around restarting active development for some pieces bubble up from time-to-time. 
I’ll contact those I know were interested and try for a resolution to propose before the PTG. Mark, we can always unretire a repository - it's just a revert of the retire patches... So, if you don't hear anything by then, let's retire, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From ekuvaja at redhat.com Mon Jan 15 12:59:44 2018 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Mon, 15 Jan 2018 12:59:44 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: Message-ID: On Thu, Jan 11, 2018 at 4:36 PM, Colleen Murphy wrote: > Hi everyone, > > We have a governance review under debate[1] that we need the community's help on. > The debate is over what recommendation the TC should make to the Interop team > on where the tests it uses for the OpenStack trademark program should be > located, specifically those for the new add-on program being introduced. Let me > badly summarize: > > A couple of years ago we issued a resolution[2] officially recommending that > the Interop team use solely tempest as its source of tests for capability > verification. The Interop team has always had the view that the developers, > being the people closest to the project they're creating, are the best people > to write tests verifying correct functionality, and so the Interop team doesn't > maintain its own test suite, instead selecting tests from those written in > coordination between the QA team and the other project teams. These tests are > used to validate clouds applying for the OpenStack Powered tag, and since all > of the projects included in the OpenStack Powered program already had tests in > tempest, this was a natural fit. When we consider adding new trademark programs > comprising other projects, the test source is less obvious. Two examples are > designate, which has never had tests in the tempest repo, and heat, which > recently had its tests removed from the tempest repo. > > So far the patch proposes three options: > > 1) All trademark-related tests should go in the tempest repo, in accordance > with the original resolution. This would mean that even projects that have > never had tests in tempest would now have to add at least some of their > black-box tests to tempest. > > The value of this option is that it centralizes tests used for the Interop program > in a location where interop-minded folks from the QA team can control them. The > downside is that projects that so far have avoided having a dependency on > tempest will now lose some control over the black-box tests that they use for > functional and integration testing, which would now also be used for trademark > certification. > There's also concern about the review bandwidth of the QA team - we can't expect > the QA team to be continually responsible for an ever-growing list of projects > and their trademark tests. > > 2) All trademark-related tests for *add-on projects* should be sourced from > plugins external to tempest. > > The value of this option is that it allows project teams to retain control over > these tests. The potential problem with it is that individual project teams are > not necessarily reviewing test changes with an eye for interop concerns and so > could inadvertently change the behavior of the trademark-verification tools.
> > 3) All trademark-related tests should go in a single separate tempest plugin. > > This has the value of giving the QA and Interop teams control over > interop-related tests while also making clear the distinction between tests > used for trademark verification and tests used for CI. Matt's argument against > this is that there actually is very little distinction between those two cases, > and that a given test could have many different applications. > > Other ideas that have been thrown around are: > > * Maintaining a branch in the tempest repo that Interop tests are pulled from. > > * Tagging Interop-related tests with decorators to make it clear that they need > to be handled carefully. > > At the heart of the issue is the perception that projects that keep their > integration tests within the tempest tree are somehow blessed, maybe by the QA > team or by the TC. It would be nice to try to clarify what technical > and political > reasons we have for why different projects have tests in different places - > review bandwidth of the QA team, ownership/control by the project teams, > technical interdependency between certain projects, or otherwise. > As someone who has already been in the middle of all this once, I'd like to bring up a somewhat more fundamental problem with this topic. I'm not able to provide a one-size-fits-all solution, but hopefully some insight that will help the community make the right decision. I think the biggest problem is deciding whose fox is left to guard the chicken coop. By that I mean the basic problem that our testing still depends on what is tested, based on which assumptions, and by whom. If the tests are provided by the project teams, a test is more likely to cover the intended use case of the feature as it's implemented, and if a bug is found there, the likelihood that the test gets altered is quite high; also, the individual projects might not have the best idea of which things matter for interoperability and trademark purposes. Obviously when the test is written against the intended behavior this is less likely, but such changes might still sneak in and affect interoperability. On the other hand, if the test is written by QA/interoperability people, is it actually testing the right thing, and is there a more fundamental need to break it later, given that instead of catching and reporting the bug when the test is written, we start enforcing it? Are the tests written based on the intended behavior, the documented behavior, or the current actual behavior? And the biggest question of them all: who is going to have the bandwidth to understand the depth of the projects and the ties between them, to ensure we minimize the above? In a perfect world all features are bug-free, rational to use, and well documented, so that anyone can easily write a test that can be run against any version to verify that we have no regressions. We are just not living in that perfect world, and each of the options risks causing conflicts. I think the optimal solution, if we were introducing this as a fresh new concept, would be to use tempest as the engine to run trademark test plugins from their own repo, with those plugins provided in collaboration between the trademark group (defining which functionalities are tested), QA (ensuring that the tests actually verify what they should be testing), and the project teams (ensuring that the tested feature is a) behaving and b) tested as intended, with the documentation aligned with that), where faults in any of the three could be rectified before enforcing.
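Mechanically, the "tempest as engine, tests in their own repo" model sketched above is just tempest's standard plugin interface; an illustrative skeleton follows, where the package and class names are hypothetical:

# Illustrative skeleton of a trademark-test plugin that tempest would
# discover and run; package/class names here are hypothetical.
import os

import my_trademark_tests
from tempest.test_discover import plugins


class MyTrademarkTestsPlugin(plugins.TempestPlugin):
    def load_tests(self):
        # Tell tempest where this plugin's tests live on disk.
        base_path = os.path.split(os.path.dirname(
            os.path.abspath(my_trademark_tests.__file__)))[0]
        full_test_dir = os.path.join(base_path, "my_trademark_tests/tests")
        return full_test_dir, base_path

    def register_opts(self, conf):
        pass  # no extra config options in this sketch

    def get_opt_lists(self):
        return []

# setup.cfg would then expose it to tempest's discovery:
# [entry_points]
# tempest.test_plugins =
#     my_trademark_tests = my_trademark_tests.plugin:MyTrademarkTestsPlugin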
Unfortunately I do not see us as the community having the resources to do this "the right way", and I have a really hard time trying to decide which of the proposed options would be least bad. I think the worst-case scenario is that we scrape together whatever we can just to have something to say that we test it, without consistency or clear responsibility for who, what and how. (Unfortunately I think this is the current situation, and I'm super happy to hear that this is being discussed and the decision is not made lightly.) Best, Erno -jokke- Kuvaja > Ultimately, as Jeremy said in the comments on the resolution patch, the > recommendation should be one that works best for the QA and Interop teams. So > far we've heard from Matt and Mark expressing moderate support for option 2. > We'd like to hear more from those teams about how they see this working, > especially with regard to concerns about the quality and stability standards > that out-of-tree tests may be held to. We additionally need input from the > whole community on how maintaining trademark-related tests in tempest will > affect you if you don't already have your tests there. We'd especially like to > address any perceptions of favoritism or exclusionism that stem from these > issues. > > And to quickly clear up one detail before it makes it onto this thread, the > Queens Community Goal about splitting tempest plugins out of the main project's > tree[3] is entirely about addressing technical problems related to packaging for > existing tempest plugins, it's not a decree about what should live > within the tempest > repository nor does it have anything to do with the Interop program. > > As I'm not deeply steeped in the history of either the Interop or QA teams I am > sure I've misrepresented some details here, I'm sorry about that. But we'd like > to get this resolution moving forward and we're currently stuck, so this thread > is intended to gather enough community input to get unstuck and avoid letting > this proposal become stale. Please respond to this thread or comment on the > resolution proposal[1] if you have any thoughts. > > Colleen > > [1] https://review.openstack.org/#/c/521602 > [2] https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html > [3] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dougal at redhat.com Mon Jan 15 13:31:50 2018 From: dougal at redhat.com (Dougal Matthews) Date: Mon, 15 Jan 2018 13:31:50 +0000 Subject: [openstack-dev] [mistral] Adding Adriano Petrich to the core team In-Reply-To: <352f9934-be79-46c0-bdfb-a20cbd4d22b6@Spark> References: <352f9934-be79-46c0-bdfb-a20cbd4d22b6@Spark> Message-ID: +1! On Mon, 15 Jan 2018 at 09:12, Renat Akhmerov wrote: > Hi, > > I'd like to promote Adriano Petrich to the Mistral core team. Adriano has > shown a good review rate and quality over at least the last two cycles > and implemented several important features (including new useful YAQL/JINJA > functions). > > Please vote whether you agree to add Adriano to the core team.
> > Adriano’s statistics: > http://stackalytics.com/?module=mistral-group&release=queens&user_id=apetrich > > Thanks > > Renat Akhmerov > @Nokia > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at est.fujitsu.com Mon Jan 15 14:03:16 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Mon, 15 Jan 2018 14:03:16 +0000 Subject: [openstack-dev] [monasca] RE: alarm transition events are missing in kafka queue - mysql alarm state is updated properly Message-ID: <146554e10a4645b0a9b26d6084fffe0b@R01UKEXCASM126.r01.fujitsu.local> Hi Yuan, please compare with the similar issue described in this bug report [1] and the corresponding fix [2]. As Craig explained in his comment to patch set 4, the Kafka server closes idle connections after around 10 minutes, which causes the first write to the topic to fail. I hope it helps. Witek [1] https://storyboard.openstack.org/#!/story/2001386 [2] https://review.openstack.org/525669 From: Yuan.Pen at t-systems.com [mailto:Yuan.Pen at t-systems.com] Sent: Freitag, 12. Januar 2018 19:36 To: roland.hochmuth at hpe.com Cc: Bedyk, Witold ; monasca at lists.launchpad.net; Trebski, Tomasz ; bradley.klein at charter.com Subject: alarm transition events are missing in kafka queue - mysql alarm state is updated properly Hi Roland, This is Yuan Pen from Deutsche Telekom. I am sending this email to the monasca community asking for help with the monasca threshold engine. We have found that sometimes when alarm state transitions happened, the threshold engine updated the MySQL alarm state properly but failed to put the state transition events in the Kafka queue (alarm-state-transitions). Does this ring a bell to anyone in the community? If this is a real problem, is there anything we can do to make sure the events in the transition queue and the state in MySQL are synced? Any comments or help are greatly appreciated. Best Regards, Yuan Pen 571-594-6155 -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Jan 15 14:26:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 15 Jan 2018 09:26:09 -0500 Subject: [openstack-dev] [oslo] proposing Stephen Finucan for oslo-core In-Reply-To: <1515423211-sup-8000@lrrr.local> References: <1515423211-sup-8000@lrrr.local> Message-ID: <1516026323-sup-247@lrrr.local> Excerpts from Doug Hellmann's message of 2018-01-08 09:55:26 -0500: > Stephen (sfinucan) has been working on pbr, oslo.config, and > oslo.policy and reviewing several of the other Oslo libraries for > a while now. His reviews are always helpful and I think he would > make a good addition to the oslo-core team. > > As per our usual practice, please reply here with a +1 or -1 and > any reservations. > > Doug As we have gone a week with no objections, I added Stephen to the oslo-core group this morning. Stephen, welcome to the team! Doug From balazs.gibizer at ericsson.com Mon Jan 15 14:29:38 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 15 Jan 2018 15:29:38 +0100 Subject: [openstack-dev] Notification update week 3 Message-ID: <1516026578.5674.12@smtp.office365.com> Hi, Here is the status update / focus settings mail for 2018 w3.
Bugs ---- [High] https://bugs.launchpad.net/nova/+bug/1742935 TestServiceUpdateNotificationSample fails intermittently: u'host2' != u'host1': path: root.payload.nova_object.data.host The openstack-tox-functional (and functional-py35) test environment was totally broken last Friday. Sorry for that. The patch that caused the break has been reverted https://review.openstack.org/#/c/533190/ A follow-up bug has been opened (see next) to avoid a similar break in the future. [High] https://bugs.launchpad.net/nova/+bug/1742962 nova functional test does not triggered on notification sample only changes During the zuul v3 migration the project-config was generated based on the zuul v2 jobs. It contained a proper definition of when nova wants to trigger the functional job. Unfortunately this job definition does not override the openstack-tox-functional job definition from the openstack-zuul-jobs repo. This meant that the openstack-tox-functional (and functional-py35) jobs were not triggered for certain commits. The fix is to create a nova-specific tox-functional job in tree. Patches have been proposed: * https://review.openstack.org/#/c/533210/ Make sure that functional test triggered on sample changes * https://review.openstack.org/#/c/533608/ Moving nova functional test def to in tree In general we have to review all nova jobs in the project-config and move in-tree those that try to override parameters of the job definitions in the openstack-zuul-jobs repo. [High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when sending notification during attach_interface Fix merged to master. Backports have been proposed: * Pike: https://review.openstack.org/#/c/531745/ * Queens: https://review.openstack.org/#/c/531746/ [High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields Patch has been proposed: https://review.openstack.org/#/c/529194/ Dan left feedback on it and I accept his comment that this is mostly papering over a problem when we don't fully understand how it can happen in the first place. On the other hand I don't know how we can figure out what happened. So if somebody has an idea then don't hesitate to tell me. [Low] https://bugs.launchpad.net/nova/+bug/1742688 test_live_migration_actions notification sample test fails intermittently with 'notification instance.live_migration_rollback.start hasn't been received' It seems that test execution in CI is a lot slower than before and it makes the 1 second timeout in the notification test too small.
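The failure pattern here is the usual fixed-deadline race between the test and a slow CI worker; schematically it looks like the sketch below, where the names are illustrative rather than nova's actual test helper. Raising or removing that hard deadline is the essence of the fix that follows.

# Illustrative sketch of the wait-for-notification pattern at issue:
# a fixed 1 second deadline races against slow CI workers.
import time

def wait_for_versioned_notification(received, event_type,
                                    timeout=1.0, interval=0.1):
    """Poll `received` (a list the fake notifier appends to)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if any(n["event_type"] == event_type for n in received):
            return
        time.sleep(interval)
    raise AssertionError(
        "notification %s hasn't been received" % event_type)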
Fix is on the gate: https://review.openstack.org/#/c/532816 [Low] https://bugs.launchpad.net/nova/+bug/1487038 nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS Old abandoned patches exist but need somebody to pick them up: * https://review.openstack.org/#/c/215308/ * https://review.openstack.org/#/c/388345/ Versioned notification transformation ------------------------------------- Here are the patches ready to review: * https://review.openstack.org/#/c/385644 Transform rescue/unrescue instance notifications Needs only a second +2 * https://review.openstack.org/#/c/403660 Transform instance.exists notification * https://review.openstack.org/#/c/410297 Transform missing delete notifications * https://review.openstack.org/#/c/476459 Send soft_delete from context manager Introduce instance.lock and instance.unlock notifications ----------------------------------------------------------- A specless bp has been proposed for the Rocky cycle https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances Some preliminary discussion happened in an earlier patch https://review.openstack.org/#/c/526251/ Add the user id and project id of the user who initiated the instance action to the notification -------------------------------------------------------------------------------------------- A new bp has been proposed https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications As the user who initiates the instance action (e.g. reboot) could be different from the user owning the instance, it would make sense to include the user_id and project_id of the action initiator in the versioned instance action notifications as well. Factor out duplicated notification sample ----------------------------------------- https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open We have to be careful about approving these types of commits until the solution for https://bugs.launchpad.net/nova/+bug/1742962 is merged, as functional tests could be broken silently. Weekly meeting -------------- There will not be a meeting this week. The next meeting will be held on the 23rd of January. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180123T170000 Cheers, gibi -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Jan 15 14:40:14 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 15 Jan 2018 14:40:14 +0000 Subject: [openstack-dev] [pbr] support v_version In-Reply-To: References: Message-ID: <20180115144013.c3bh2qli6hgbccx3@yuggoth.org> On 2018-01-09 10:25:56 +0100 (+0100), Gaetan wrote: > I have submitted this patch ([1]) that adds support for v_version > in PBR. Basically I can tag v1.0.0 instead of 1.0.0 to release > 1.0.0. [...] Looks like the patch you linked has since merged. Any issues with it so far? > Second point, to go to the end of the logic of my change, I would > like to propose an optional way (in setup.cfg?) to **prevent** any > tag without the 'v' prefix, ie, where a bare version tag like > `1.0.0` is not to be considered as a valid version. [...] I'm not heavily involved in PBR maintenance so my ideas may be terrible, but have you considered just making it possible to set a version pattern option in setup.cfg so that this sort of filtering is more easily generalized? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Mon Jan 15 14:44:22 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 15 Jan 2018 09:44:22 -0500 Subject: [openstack-dev] [storyboard] need help figuring out how to use auth with storyboard client In-Reply-To: <1516015574.5039.4.camel@codethink.co.uk> References: <1515790548-sup-2612@lrrr.local> <20180112213026.2q2ioax6yvhx75ov@yuggoth.org> <1516015574.5039.4.camel@codethink.co.uk> Message-ID: <1516027357-sup-5412@lrrr.local> Excerpts from Adam Coldrick's message of 2018-01-15 11:26:14 +0000: > On Fri, 2018-01-12 at 21:30 +0000, Jeremy Stanley wrote: > > On 2018-01-12 15:57:44 -0500 (-0500), Doug Hellmann wrote: > > > The storyboard client docs mention an "access token" [1] as > > > something > > > a client needs in order to create stories and make other sorts of > > > changes. They don't explain what that token is or how to get one, > > > though. > > > > > > Where do I get a token? How long does the token work? Can I safely > > > put a token in a configuration file, or do I need to get a new one > > > each time I want to do something with the client? > > > > https://docs.openstack.org/infra/storyboard/webapi/v1.html#api > > suggests that logging in and going to > > https://storyboard.openstack.org/#!/profile/tokens will allow you to > > issue one (with up to a 10-year expiration based on my modest > > experimentation). I believe this to be the same solution we're using > > to grant the storyboard-its Gerrit plugin the ability to update > > tasks/stories from review.openstack.org. > > This is likely the easiest solution. Some other options: > > - Admin users can issue tokens for any users, so an automation user > could have a token granted by infra-root using the API directly (see > the API docs[0] for detail). The script I'm thinking of would create the story, tasks, and board associated with a community goal. So it could be run by a goal champion on their local system, and wouldn't need a special user to own the results. > - Add some functionality in python-storyboardclient to handle > authenticating with the OpenID provider that the API sends a > redirect link for. That would be useful, although having directions for getting a token manually is probably fine for this case. > > [0]: https://docs.openstack.org/infra/storyboard/webapi/v1.html#post--v1-users--user_id--tokens > From doug at doughellmann.com Mon Jan 15 14:57:18 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 15 Jan 2018 09:57:18 -0500 Subject: [openstack-dev] [pbr] support v_version In-Reply-To: <20180115144013.c3bh2qli6hgbccx3@yuggoth.org> References: <20180115144013.c3bh2qli6hgbccx3@yuggoth.org> Message-ID: <1516028190-sup-5253@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-01-15 14:40:14 +0000: > On 2018-01-09 10:25:56 +0100 (+0100), Gaetan wrote: > > I have submitted this patch ([1]) that adds support for v_version > > in PBR. Basically I can tag v1.0.0 instead of 1.0.0 to release > > 1.0.0. > [...] > > Looks like the patch you linked has since merged. Any issues with it > so far? > > > Second point, to go to the end of the logic of my change, I would > > like to propose an optional way (in setup.cfg?) to **prevent** any > > tag without the 'v' prefix, ie, where a bare version tag like > > `1.0.0` is not to be considered as a valid version. > [...]
> > I'm not heavily involved in PBR maintenance so my ideas may be > terrible, but have you considered just making it possible to set a > version pattern option in setup.cfg so that this sort of filtering > is more easily generalized? That seems reasonable. It may even help solve the package filename problem. Doug From sfinucan at redhat.com Mon Jan 15 15:09:52 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Mon, 15 Jan 2018 15:09:52 +0000 Subject: [openstack-dev] [pbr] support v_version In-Reply-To: References: Message-ID: <1516028992.2220.15.camel@redhat.com> On Tue, 2018-01-09 at 10:25 +0100, Gaetan wrote: > Hello > > I have submitted this patch ([1]) that adds support for v_version in > PBR. Basically I can tag v1.0.0 instead of 1.0.0 to release 1.0.0. > > However, after rework it appears PBR does not behave well, even if > the unit tests pass: > On a tag such as v1.0.0, the resulting package is named > `-1.0.0.dev1`. > > Do you know where I need to hack PBR to fix it? So 'pbr' correctly parses the prefixed tags, but it's just the output packages (sdists, wheels) that are always unversioned? If so, this sounds correct. Python packaging, as defined in PEP-440 [1], doesn't use the 'v' prefixes, so it doesn't really make sense to stick them in here. Out of curiosity, what's your rationale for modifying the package name? > Second point, to go to the end of the logic of my change, I would > like to propose an optional way (in setup.cfg?) to **prevent** any > tag without the 'v' prefix, ie, where a bare version tag like `1.0.0` > is not to be considered as a valid version. > That way, on systems such as GitLab or GitHub: > - repository owners "protect" tags with pattern "v*", ie, all tags > for release such as "v1.0.0", ... cannot be pushed by anyone but the > owners/masters > - other developers can still push other tags for other purposes So this could be used to prevent pbr reading the tags, but it won't stop anyone from creating them in the first place (i.e. "protect" tags). We can do this but it would be a separate feature and, to be honest, I'd suggest using Git hooks or some form of access control as a better way to do this (Note: it seems GitLab already supports something similar [2]). Stephen [1] https://www.python.org/dev/peps/pep-0440/ [2] https://gitlab.com/gitlab-org/gitlab-ce/issues/18471 From sfinucan at redhat.com Mon Jan 15 15:11:37 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Mon, 15 Jan 2018 15:11:37 +0000 Subject: [openstack-dev] [os-api-ref][doc] Adding openstack-doc-core to os-api-ref In-Reply-To: <1515080357.32193.33.camel@redhat.com> References: <1515080357.32193.33.camel@redhat.com> Message-ID: <1516029097.2220.17.camel@redhat.com> On Thu, 2018-01-04 at 15:39 +0000, Stephen Finucane wrote: > I'm not sure what the procedure for this is but here goes. > > I've noticed that the 'os-api-ref' project seems to have its own group > of cores [1], many of whom are no longer working on OpenStack (at > least, not full-time), and has a handful of open patches against it > [2]. Since the doc team has recently changed its scope from writing > documentation to enabling individual projects to maintain their own > docs, we've become mainly responsible for projects like 'openstack-doc- > theme'. Given that the 'os-api-ref' project is a Sphinx thing required > for multiple OpenStack projects, it seems like something that > could/should fall into the doc team's remit.
> > I'd like to move this project into the remit of the 'openstack-doc- > core' team, by way of removing the 'os-api-ref-core' group or adding > 'openstack-doc-core' to the list of included groups. In both cases, > existing active cores will be retained. Do any of the existing 'os-api- > ref' cores have any objections to this? It's been two weeks with no -1s, so we've gone ahead and done this. Thanks, everyone. Stephen > Stephen > > PS: I'm not sure how this affects things from a release management > perspective. Are there PTLs for these sorts of projects? Turns out this project was already listed as a doc team deliverable. Who knew?! > [1] https://review.openstack.org/#/admin/groups/1391,members > [2] https://review.openstack.org/#/q/project:openstack/os-api-ref+status:open From aj at suse.com Mon Jan 15 15:33:04 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 15 Jan 2018 16:33:04 +0100 Subject: [openstack-dev] [os-api-ref][doc] Adding openstack-doc-core to os-api-ref In-Reply-To: <1516029097.2220.17.camel@redhat.com> References: <1515080357.32193.33.camel@redhat.com> <1516029097.2220.17.camel@redhat.com> Message-ID: <65166457-4e40-48e7-405b-235bf82a9412@suse.com> On 2018-01-15 16:11, Stephen Finucane wrote: > On Thu, 2018-01-04 at 15:39 +0000, Stephen Finucane wrote: >> I'm not sure what the procedure for this is but here goes. >> >> I've noticed that the 'os-api-ref' project seems to have its own group >> of cores [1], many of whom are no longer working on OpenStack (at >> least, not full-time), and has a handful of open patches against it >> [2]. Since the doc team has recently changed its scope from writing >> documentation to enabling individual projects to maintain their own >> docs, we've become mainly responsible for projects like 'openstack-doc- >> theme'. Given that the 'os-api-ref' project is a Sphinx thing required >> for multiple OpenStack projects, it seems like something that >> could/should fall into the doc team's remit. >> >> I'd like to move this project into the remit of the 'openstack-doc- >> core' team, by way of removing the 'os-api-ref-core' group or adding >> 'openstack-doc-core' to the list of included groups. In both cases, >> existing active cores will be retained. Do any of the existing 'os-api- >> ref' cores have any objections to this? > > It's been two weeks with no -1s, so we've gone ahead and done this. > Thanks, everyone. Note that the ownership has already been in the governance repo since May 2016 (https://review.openstack.org/317686), Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From isaku.yamahata at gmail.com Mon Jan 15 16:07:52 2018 From: isaku.yamahata at gmail.com (Isaku Yamahata) Date: Mon, 15 Jan 2018 08:07:52 -0800 Subject: [openstack-dev] [neutron-dev] [neutron] Generalized issues in the unit testing of ML2 mechanism drivers In-Reply-To: References: Message-ID: <20180115160752.GA29488@private.email.ne.jp> Hello. Any comments/thoughts? This issue is generic to neutron, not specific to networking-odl. So it should be addressed Neutron-wide, and neutron and its sub-projects should be addressed uniformly.
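The first failure mode reported in this thread — "inherited" tests that keep passing while the mechanism driver under test never actually loaded — can at least be guarded against explicitly. A sketch follows; the driver alias and class names are illustrative, and the exact attributes vary between neutron releases:

# Sketch: make the "inherited" ML2 tests fail loudly when the mechanism
# driver under test was never actually loaded. Names are illustrative.
from neutron_lib.plugins import directory
from neutron.tests.unit.plugins.ml2 import test_plugin

class TestMyDriverNetworks(test_plugin.TestMl2NetworksV2):
    _mechanism_drivers = ['my_driver']  # read by the base class setUp

    def test_mechanism_driver_is_loaded(self):
        plugin = directory.get_plugin()
        loaded = [drv.name
                  for drv in plugin.mechanism_manager.ordered_mech_drivers]
        self.assertIn('my_driver', loaded)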
Thanks, On Mon, Dec 18, 2017 at 12:52:37PM +0200, Mike Kolesnik wrote: > On Wed, Dec 13, 2017 at 2:30 PM, Michel Peterson wrote: > > > Through my work in networking-odl I've found what I believe is an issue > > present in a majority of ML2 drivers. An issue I think needs awareness so > > each project can decide a course of action. > > > > The issue stems from the adopted practice of importing > > `neutron.tests.unit.plugins.ml2.test_plugin` and creating classes with > > noop operation to "inherit" tests for free [1]. The idea behind it is nice: > > you inherit >600 tests that cover several scenarios. > > > > There are several issues with adopting this pattern, two of which are > > paramount: > > > > 1. If the mechanism driver is not loaded correctly [2], the tests then > > don't test the mechanism driver but still succeed and therefore there is no > > indication that there is something wrong with the code. In the case of > > networking-odl it wasn't discovered until last week, which means that for > > >1 year this was adding PASSed tests uselessly. > > > > 2. It gives a false sense of reassurance. If the code of those tests is > > analyzed it's possible to see that the code itself is mostly centered > > around testing the REST endpoint of neutron rather than actually testing that the > > mechanism succeeds on the operation it was supposed to test. As a result of > > this, there is only marginal added value in having those tests. To be clear, > > the hooks for the respective operations are called on the mechanism driver, > > but the result of the operation is not asserted. > > > > I would love to hear more voices around this, so feel free to comment. > > > > I talked to a few guys from networking-ovn, who are now processing this > info so they can chime in, but from what I've understood the issue wasn't > given much thought in networking-ovn (and I suspect other mechanism > drivers). > > > > > Regarding networking-odl the solution I propose is the following: > > **First**, discard completely the change mentioned in the footnote #2. > > **Second**, create a patch that completely removes the tests that follow > > this pattern. > > **Third**, incorporate the neutron tempest plugin into the CI and rely > > on that for assuring coverage of the different scenarios. > > > > This sounds like a good plan to me. > > > > > Also worth mentioning: when we discovered this issue in networking-odl we took > > a decision not to merge more patches until the PS of footnote #2 was > > addressed. I think we can now decide to overrule that decision and proceed > > as usual. > > > > Agreed.
> > > > [1]: http://codesearch.openstack.org/?q=class%20.*\(.*TestMl2 > > > > [2]: something that was happening in networking-odl and addressed by > > https://review.openstack.org/#/c/523934 > > > > _______________________________________________ > > neutron-dev mailing list > > neutron-dev at lists.opendaylight.org > > https://lists.opendaylight.org/mailman/listinfo/neutron-dev > > > > > > > -- > Regards, > Mike > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Isaku Yamahata From prometheanfire at gentoo.org Mon Jan 15 16:12:00 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 15 Jan 2018 10:12:00 -0600 Subject: [openstack-dev] [glance][oslo][requirements] oslo.serialization fails with glance In-Reply-To: <20180113064128.byill2yngkjgbys2@mthode.org> References: <20180113064128.byill2yngkjgbys2@mthode.org> Message-ID: <20180115161200.nvg34l653w3rxggy@gentoo.org> On 18-01-13 00:41:28, Matthew Thode wrote: > https://review.openstack.org/531788 is the review we are seeing it in, > but 2.22.0 failed as well. > > I'm guessing it was introduced in either > > https://github.com/openstack/oslo.serialization/commit/c1a7079c26d27a2e46cca26963d3d9aa040bdbe8 > or > https://github.com/openstack/oslo.serialization/commit/cdb2f60d26e3b65b6370f87b2e9864045651c117 bump -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gaetan at xeberon.net Mon Jan 15 16:29:01 2018 From: gaetan at xeberon.net (Gaetan) Date: Mon, 15 Jan 2018 17:29:01 +0100 Subject: [openstack-dev] [pbr] support v_version In-Reply-To: <1516028992.2220.15.camel@redhat.com> References: <1516028992.2220.15.camel@redhat.com> Message-ID: First, thanks a lot for your support and your kindness ! Really appreciate that :) > > Do you know where I need to hack PBR to fix it? > > So 'pbr' correctly parses the prefixed tags, but it's just the output > packages (sdists, wheels) that are always unversioned? If so, this sounds > correct. Python packaging, as defined in PEP-440 [1], doesn't use the > 'v' prefixes, so it doesn't really make sense to stick them in here. > Out of curiosity, what's your rationale for modifying the package name? > The package name is not changed actually. With the patch in PBR that has been merged, one could add a tag named "v1.0.0" on the mypackage package, and the sdist will generate a distribution package mypackage-0.0.4.tar.gz So I think (hope?) this is still PEP440 compliant. I tried this feature on other software that also uses pbr and there is no problem: v version works great with sdist/bdist/wheel packages. I use it inside a Gitlab CE pipeline on a tag pipeline (CI is triggered when a git tag that follows the "v*" regular expression is pushed), and instead of creating a package mypackage-0.0.4-py2.py3-none-any.whl it created mypackage-0.0.3.dev3-py2.py3-none-any.whl. When I retried manually in my development environment, pbr worked perfectly again on the same code. I guess it somehow didn't use my build of the pbr package when running in the gitlab pipeline. Do you plan on releasing PBR soon on pypi?
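As background to the tagging discussion: PEP 440 itself tolerates a leading "v" in version identifiers (it is to be silently stripped during normalization), so tag normalization can be as simple as the sketch below. This is illustrative, not pbr's actual implementation, and it handles plain release versions only.

# Sketch: normalize a git tag to a PEP 440 release version, accepting
# an optional leading "v". Pre-release suffixes are omitted for brevity.
import re

TAG_RE = re.compile(r'^v?(?P<version>\d+(?:\.\d+)*)$')

def version_from_tag(tag):
    match = TAG_RE.match(tag)
    if match is None:
        raise ValueError("not a release tag: %r" % tag)
    return match.group('version')

assert version_from_tag('v1.0.0') == '1.0.0'
assert version_from_tag('1.0.0') == '1.0.0'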
I have to build it myself and push it to our internal nexus pypi, but I think the safest way is to wait for an official pbr release on pypi.python.org :) > Second point, to go to the end of the logic of my change, I would > > like to propose an optional way (in setup.cfg?) to **prevent** any > > tag without the 'v' prefix, ie, where a bare version tag like `1.0.0` > > is not to be considered as a valid version. > > That way, on systems such as GitLab or GitHub: > > - repository owners "protect" tags with pattern "v*", ie, all tags > > for release such as "v1.0.0", ... cannot be pushed by anyone but the > > owners/masters > > - other developers can still push other tags for other purposes > > So this could be used to prevent pbr reading the tags, but it won't > stop anyone from creating them in the first place (i.e. "protect" > tags). Yes, I agree this is not really mandatory. Gitlab tag protection should be enough. I am using "protected environment variables" on gitlab, and indeed, the credentials to push to Pypi are only sent when the pipeline is triggered on such a "protected" branch or "protected tag". So we "protect" only tags starting with "v*" and only this triggered pipeline can publish to pypi (we use Nexus). This allows other developers to add any tags not starting with v (only repository owners can create tags starting with "v*"). Note this "v*" regular expression is configurable and seems to be the default/good practice on GitLab CE/EE. > We can do this but it would be a separate feature and, to be > honest, I'd suggest using Git hooks or some form of access control as a > better way to do this (Note: it seems GitLab already supports something > similar [2]). > Yes this is what I actually use :) Thanks In short: pbr v_version seems to work great, just hoping for the official PBR release on pypi.python.org :) Thanks Gaetan Semet -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Mon Jan 15 16:45:46 2018 From: mdulko at redhat.com (mdulko at redhat.com) Date: Mon, 15 Jan 2018 17:45:46 +0100 Subject: [openstack-dev] [kuryr][os-vif][nova] os-vif 1.8.0 breaks kuryr-kubernetes Message-ID: <1516034746.4371.40.camel@redhat.com> Hi, os-vif commit [1] introduced a non-backward-compatible change to the Subnet object - removal of the ips field. It turns out kuryr-kubernetes was depending on that, e.g. here [2], and we're now broken with os-vif 1.8.0. kuryr-kubernetes is saving the VIF objects into the K8s resource annotations, so to keep backwards compatibility we need VIFBase.obj_make_compatible to be able to backport the data back into the Subnet object, or to be able to load the older data into the newer object. Does anyone have advice on how we should proceed with this issue? It would also be nice to set up a kuryr-kubernetes gate on the os-vif repo. If there are no objections to that I'd volunteer to submit a commit that adds it. Thanks, Michal [1] https://review.openstack.org/#/c/508498 [2] https://github.com/openstack/kuryr-kubernetes/blob/18db6499432e6cab61059eb5abeeaad3ea40b6e4/kuryr_kubernetes/cni/binding/base.py#L64-L66 From doug at doughellmann.com Mon Jan 15 16:48:35 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 15 Jan 2018 11:48:35 -0500 Subject: [openstack-dev] [pbr] support v_version In-Reply-To: References: <1516028992.2220.15.camel@redhat.com> Message-ID: <1516034554-sup-2404@lrrr.local> Excerpts from Gaetan's message of 2018-01-15 17:29:01 +0100: > First, thanks a lot for your support and your kindness !
Really appreciate > that :) > > > > Do you know where I need to hack PBR to fix it? > > > > So 'pbr' correctly parses the prefixed tags, but it's just the output > > packages (sdists, wheels) that are always unversioned? If so, this sounds > > correct. Python packaging, as defined in PEP-440 [1], doesn't use the > > 'v' prefixes, so it doesn't really make sense to stick them in here. > > Out of curiosity, what's your rationale for modifying the package name? > > > > The package name is not changed actually. With the patch in PBR that has > been merged, one could add a tag named "v1.0.0" to the mypackage repository, > and the sdist will generate a distribution package > > mypackage-0.0.4.tar.gz > > > So I think (hope?) this is still PEP440 compliant. > > I tried this feature on another piece of software that also uses pbr and there is no > problem: the v-prefixed version works great with sdist/bdist/wheel packages. > > I use it inside a Gitlab CE pipeline on a tag pipeline (CI is triggered when > a git tag that follows the "v*" pattern is pushed), and instead of > creating > a package > > mypackage-0.0.4-py2.py3-none-any.whl > > it created > > mypackage-0.0.3.dev3-py2.py3-none-any.whl. > > > When I retried manually on my development environment, pbr works > perfectly again on the same code. > I guess it somehow didn't use my build of the pbr package when > running in the gitlab pipeline. It sounds like the CE pipeline is not building packages in the same way? Or is using an old version of pbr? > > Do you plan on releasing PBR soon on pypi? > I have to build it myself and push it to our internal nexus pypi, but I think > the > safest way is to wait for an official pbr release on pypi.python.org :) Unfortunately we're approaching a significant set of feature freeze deadlines for this OpenStack development cycle. Because it is very difficult to "pin" a library like pbr that appears in the setup_requires list for projects in our CI pipelines, and this next release of pbr would have quite a few changes, we have decided in our meeting today to be cautious and delay the next release for a few weeks. I have scheduled a reminder to tag the pbr release during the first week of March, after the OpenStack release is safely completed. I'm sorry if this causes inconvenience. We're going to work on ensuring we have more frequent releases so we reduce our chance of ending up in a similar situation again in the future. Doug [1] http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/%23openstack-oslo.2018-01-15.log.html From alexandre.van-kempen at inria.fr Mon Jan 15 17:01:39 2018 From: alexandre.van-kempen at inria.fr (avankemp) Date: Mon, 15 Jan 2018 18:01:39 +0100 Subject: [openstack-dev] [FEMDC] Wed. 17 Jan - IRC Meeting 15:00 UTC Message-ID: Dear all, A gentle reminder for our Wednesday meeting at 15:00 UTC. A draft of the 2018 agenda is available, you are very welcome to add any item. https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2018 For the record, the 2017 agenda is still available here: https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017 Best, Alex -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kennelson11 at gmail.com Mon Jan 15 17:04:57 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 15 Jan 2018 17:04:57 +0000 Subject: [openstack-dev] PTL Election Season Message-ID: Election details: https://governance.openstack.org/election/ Please read the stipulations and timelines for candidates and electorate contained in this governance documentation. Be aware, in the PTL elections if the program only has one candidate, that candidate is acclaimed and there will be no poll. There will only be a poll if there is more than one candidate stepping forward for a program's PTL position. There will be further announcements posted to the mailing list as action is required from the electorate or candidates. This email is for information purposes only. If you have any questions which you feel affect others please reply to this email thread. If you have any questions that you wish to discuss in private please email any of the election judges[1] so that we may address your concerns. Thank you, -Kendall Nelson (diablo_rojo) [1] https://governance.openstack.org/election/#election-officials -------------- next part -------------- An HTML attachment was scrubbed... URL: From lhinds at redhat.com Mon Jan 15 17:10:51 2018 From: lhinds at redhat.com (Luke Hinds) Date: Mon, 15 Jan 2018 17:10:51 +0000 Subject: [openstack-dev] PTL Election Season In-Reply-To: References: Message-ID: On Mon, Jan 15, 2018 at 5:04 PM, Kendall Nelson wrote: > Election details: https://governance.openstack.org/election/ > > Please read the stipulations and timelines for candidates and electorate > contained in this governance documentation. > > Be aware, in the PTL elections if the program only has one candidate, that > candidate is acclaimed and there will be no poll. There will only be a poll > if there is more than one candidate stepping forward for a program's PTL > position. > > There will be further announcements posted to the mailing list as action > is required from the electorate or candidates. This email is for > information purposes only. > > If you have any questions which you feel affect others please reply to > this email thread. > > If you have any questions that you wish to discuss in private please > email any of the election judges[1] so that we may address your concerns. > > Thank you, > > -Kendall Nelson (diablo_rojo) > > [1] https://governance.openstack.org/election/#election-officials > Keep in mind there will be no Security PTL election for Rocky as we will be changing to a SIG and will no longer be a project. -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack.org at sodarock.com Mon Jan 15 17:20:27 2018 From: openstack.org at sodarock.com (John Villalovos) Date: Mon, 15 Jan 2018 09:20:27 -0800 Subject: [openstack-dev] [Ironic] ironic tempest plugin code has been moved to openstack/ironic-tempest-plugin/ Message-ID: The ironic tempest plugin code that was in openstack/ironic and openstack/ironic-inspector has been moved to openstack/ironic-tempest-plugin/ As of 10-Jan-2018 (Wednesday) the plugin code was deleted from openstack/ironic and openstack/ironic-inspector. All users of the tempest plugin code need to be using it from the openstack/ironic-tempest-plugin repository now. We believe that all of the 3rd Party CIs have already made this change. Thanks to everyone working on this! Thanks, John -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sean.k.mooney at intel.com Mon Jan 15 17:21:02 2018 From: sean.k.mooney at intel.com (Mooney, Sean K) Date: Mon, 15 Jan 2018 17:21:02 +0000 Subject: [openstack-dev] [kuryr][os-vif][nova] os-vif 1.8.0 breaks kuryr-kubernetes In-Reply-To: <1516034746.4371.40.camel@redhat.com> References: <1516034746.4371.40.camel@redhat.com> Message-ID: <4B1BB321037C0849AAE171801564DFA6889ADE28@IRSMSX107.ger.corp.intel.com> > -----Original Message----- > From: mdulko at redhat.com [mailto:mdulko at redhat.com] > Sent: Monday, January 15, 2018 4:46 PM > To: openstack-dev at lists.openstack.org > Subject: [openstack-dev] [kuryr][os-vif][nova] os-vif 1.8.0 breaks > kuryr-kubernetes > > Hi, > > os-vif commit [1] introduced a non-backward compatible change to the > Subnet object - removal of the ips field. Turns out kuryr-kubernetes was > depending on that, e.g. here [2], and we're now broken with os-vif 1.8.0. > > kuryr-kubernetes is saving the VIF objects into the K8s resources > annotations, so to keep backwards compatibility we need > VIFBase.obj_make_compatible to be able to backport the data back into the > Subnet object. Or be able to load the older data to the newer object. > Does anyone have advice on how we should proceed with that issue?
> > Thanks, > Michal > > [1] https://review.openstack.org/#/c/508498 > [2] https://github.com/openstack/kuryr- > kubernetes/blob/18db6499432e6cab61059eb5abeeaad3ea40b6e4/kuryr_kubernet > es/cni/binding/base.py#L64-L66 > > _______________________________________________________________________ > ___ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Mon Jan 15 17:30:13 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 15 Jan 2018 12:30:13 -0500 Subject: [openstack-dev] [kuryr][os-vif][nova] os-vif 1.8.0 breaks kuryr-kubernetes In-Reply-To: <1516034746.4371.40.camel@redhat.com> References: <1516034746.4371.40.camel@redhat.com> Message-ID: <4fb2222f-f0bb-9b8c-c2dd-55b504a06c20@gmail.com> On 01/15/2018 11:45 AM, mdulko at redhat.com wrote: > Hi, > > os-vif commit [1] introduced a non-backward compatible change to the > Subnet object - removal of ips field. Turns out kuryr-kubernetes were > depending on that e.g. here [1] and we're now broken with os-vif 1.8.0. > > kuryr-kubernetes is saving the VIF objects into the K8s resources > annotations, so to keep backwards compatibility we need > VIFBase.obj_make_compatible able to backport the data back into the > Subnet object. Or be able to load the older data to the newer object. > Anyone have an advice how we should proceed with that issue? It would have been great to know kuryr-kubernetes was saving/using these objects :) as mentioned on the original os-vif code review, the versioned objects in os-vif have yet to be used in over-the-wire communication nor have they been saved to a backing data store by Nova or Neutron. Thus, we haven't bothered with the obj_make_compatible() stuff yet. If we had known there was a non-Nova non-Neutron client of os-vif, of course we would have been tracking changes using obj_make_compatible(). That said, even if we *were* using obj_make_compatible() and allowing for the backversioning of object formats, that would not have magically enabled kuryr-kubernetes to work with our objects without modification. kuryr-kubernetes would still need to do the dance of telling os-vif somehow what version of the objects that it expects to be given. This is what all the infrastructure in oslo.versionedobject's client-server negotiation does and it's non-trivial. Bottom line, we can straight revert the os-vif patch in question (because it's really just a cleanup), release os-vif 1.8.1 by the cutoff on Thursday and "fix" kuryr-kubernetes. However, we will want to have a call with you guys to tell you exactly how to do the versioning negotiation that you will now need to do since you're storing these objects somewhere. Best, -jay > It would also be nice to setup a kuryr-kubernetes gate on the os-vif > repo. If there are no objections to that I'd volunteer to submit a > commit that adds it. 
> > Thanks, > Michal > > [1] https://review.openstack.org/#/c/508498 > [2] https://github.com/openstack/kuryr-kubernetes/blob/18db6499432e6cab61059eb5abeeaad3ea40b6e4/kuryr_kubernetes/cni/binding/base.py#L64-L66 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Mon Jan 15 17:30:13 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 15 Jan 2018 12:30:13 -0500 Subject: [openstack-dev] [kuryr][os-vif][nova] os-vif 1.8.0 breaks kuryr-kubernetes In-Reply-To: <1516034746.4371.40.camel@redhat.com> References: <1516034746.4371.40.camel@redhat.com> Message-ID: <4fb2222f-f0bb-9b8c-c2dd-55b504a06c20@gmail.com> On 01/15/2018 11:45 AM, mdulko at redhat.com wrote: > Hi, > > os-vif commit [1] introduced a non-backward compatible change to the > Subnet object - removal of the ips field. Turns out kuryr-kubernetes was > depending on that, e.g. here [2], and we're now broken with os-vif 1.8.0. > > kuryr-kubernetes is saving the VIF objects into the K8s resources > annotations, so to keep backwards compatibility we need > VIFBase.obj_make_compatible to be able to backport the data back into the > Subnet object. Or be able to load the older data to the newer object. > Does anyone have advice on how we should proceed with that issue? It would have been great to know kuryr-kubernetes was saving/using these objects :) as mentioned on the original os-vif code review, the versioned objects in os-vif have yet to be used in over-the-wire communication nor have they been saved to a backing data store by Nova or Neutron. Thus, we haven't bothered with the obj_make_compatible() stuff yet. If we had known there was a non-Nova non-Neutron client of os-vif, of course we would have been tracking changes using obj_make_compatible(). That said, even if we *were* using obj_make_compatible() and allowing for the backversioning of object formats, that would not have magically enabled kuryr-kubernetes to work with our objects without modification. kuryr-kubernetes would still need to do the dance of telling os-vif somehow what version of the objects it expects to be given. This is what all the infrastructure in oslo.versionedobjects' client-server negotiation does and it's non-trivial. Bottom line, we can straight revert the os-vif patch in question (because it's really just a cleanup), release os-vif 1.8.1 by the cutoff on Thursday and "fix" kuryr-kubernetes. However, we will want to have a call with you guys to tell you exactly how to do the versioning negotiation that you will now need to do since you're storing these objects somewhere. Best, -jay > It would also be nice to set up a kuryr-kubernetes gate on the os-vif > repo. If there are no objections to that I'd volunteer to submit a > commit that adds it. From gaetan at xeberon.net Mon Jan 15 18:29:01 2018 From: gaetan at xeberon.net (Gaetan) Date: Mon, 15 Jan 2018 19:29:01 +0100 Subject: [openstack-dev] [pbr] support v_version In-Reply-To: <1516034554-sup-2404@lrrr.local> References: <1516028992.2220.15.camel@redhat.com> <1516034554-sup-2404@lrrr.local> Message-ID: > > > > I guess it somehow didn't use my build of the pbr package when > > running in the gitlab pipeline. > > It sounds like the CE pipeline is not building packages in the same way? > Or is using an old version of pbr? > > I guess it was the pbr version from pypi.python.org, not my customized build published on our internal pypi https://books.sonatype.com/nexus-book/3.0/reference/pypi.html > > Do you plan on releasing PBR soon on pypi? > > I have to build it myself and push it to our internal nexus pypi, but I > think > > the > > safest way is to wait for an official pbr release on pypi.python.org :) > > Unfortunately we're approaching a significant set of feature freeze > deadlines for this OpenStack development cycle. Because it is very > difficult to "pin" a library like pbr that appears in the setup_requires > list for projects in our CI pipelines, and this next release of pbr > would have quite a few changes, we have decided in our meeting today > to be cautious and delay the next release for a few weeks. > > Have you ever considered using pipenv/pipfile ? I started using it a few months ago; it helps a lot with dependency freezing. I have scheduled a reminder to tag the pbr release during the first > week of March, after the OpenStack release is safely completed. > > I'm sorry if this causes inconvenience. We're going to work on > ensuring we have more frequent releases so we reduce our chance of > ending up in a similar situation again in the future. > That is not a problem by itself, I still have this self-hosted Pypi repository in the meantime. To react on the IRC log, indeed, the proposal I make in this thread is purely optional, for my need, gitlab handles the protection against unwanted publication correctly. Just hope it will be useful for other projects, probably not for OpenStack itself. Thanks a lot for supporting external projects :) Gaetan -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Jan 15 19:36:27 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 15 Jan 2018 14:36:27 -0500 Subject: [openstack-dev] [pbr] support v_version In-Reply-To: References: <1516028992.2220.15.camel@redhat.com> <1516034554-sup-2404@lrrr.local> Message-ID: <1516044897-sup-8017@lrrr.local> Excerpts from Gaetan's message of 2018-01-15 19:29:01 +0100: > > > > > > > I guess it somehow didn't use my build of the pbr package when > > > running in the gitlab pipeline. > > > > It sounds like the CE pipeline is not building packages in the same way? > > Or is using an old version of pbr?
> > > > I guess it was the pbr version from pypi.python.org, not my customized > build > published on our internal pypi > https://books.sonatype.com/nexus-book/3.0/reference/pypi.html > > > > Do you plan on releasing PBR soon on pypi? > > > I have to build it myself and push it to our internal nexus pypi, but I > > think > > > the > > > safest way is to wait for an official pbr release on pypi.python.org :) > > > > Unfortunately we're approaching a significant set of feature freeze > > deadlines for this OpenStack development cycle. Because it is very > > difficult to "pin" a library like pbr that appears in the setup_requires > > list for projects in our CI pipelines, and this next release of pbr > > would have quite a few changes, we have decided in our meeting today > > to be cautious and delay the next release for a few weeks. > > > Have you ever considered using pipenv/pipfile ? I started using it a few > months ago; it helps a lot with dependency freezing. We have a system in place to constrain most of the libraries we use (using pip's -c option) but pip does not pay attention to that list in all code paths when installing things listed in setup_requires. > > I have scheduled a reminder to tag the pbr release during the first > > week of March, after the OpenStack release is safely completed. > > > > I'm sorry if this causes inconvenience. We're going to work on > > ensuring we have more frequent releases so we reduce our chance of > > ending up in a similar situation again in the future. > > > > That is not a problem by itself, I still have this self-hosted Pypi > > repository > in the meantime. OK, good. > To react on the IRC log, indeed, the proposal I make in this thread is > purely optional, > for my need, gitlab handles the protection against unwanted > publication correctly. > Just hope it will be useful for other projects, probably not for OpenStack > itself. > > Thanks a lot for supporting external projects :) > > Gaetan Thanks for working with us to improve pbr! :-) Doug From Louie.Kwan at windriver.com Mon Jan 15 19:48:25 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Mon, 15 Jan 2018 19:48:25 +0000 Subject: [openstack-dev] [masakari] problems starting up masakari instance monitoring in devstack @ master In-Reply-To: References: <21B58845-C9B9-47D3-AF4B-C6F8CF262D57@windriver.com> <3d8559b8-7238-0955-86a9-4a80590a72aa@po.ntt-tx.co.jp> <5AFBDD85-1159-409F-A72B-C1119F012909@windriver.com> <079161FE-9DE7-4A4D-86B7-0EA55DCDA0D7@windriver.com> <010EFCF1-8383-4EB2-922B-738108F44722@windriver.com> , , Message-ID: <47EFB32CD8770A4D9590812EE28C977E961DABEE@ALA-MBC.corp.ad.wrs.com> Hi Dinesh, Thanks for the info. 'recovery_method' (choose from 'auto', 'reserved_host', 'auto_priority', 'rh_priority'). What should be put for <service_type>? Same as the segment host? What should be put as <name>? It seems it accepts any arbitrary values. Any more pointers. Much appreciated. Thank you. Louie Kwan ________________________________________ From: Bhor, Dinesh [Dinesh.Bhor at nttdata.com] Sent: Wednesday, December 13, 2017 11:22 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [masakari] problems starting up masakari instance monitoring in devstack @ master Hi Greg, Looks like you don't have the "devstack-masakari" host registered in the Masakari database. Currently operators have to add all the hosts from a failover segment manually to the Masakari database.
You can use the commands below. Register the failover segment to the Masakari database:

    openstack segment create <name> <recovery_method> <service_type>

Register hosts under the created segment to the Masakari database:

    openstack segment host create <name> <type> <control_attributes> <segment_id>

There is work in progress on auto compute node registration. Please refer: https://blueprints.launchpad.net/masakari-monitors/+spec/auto-compute-node-registration Thank you, Dinesh Bhor ________________________________ From: Waines, Greg Sent: 13 December 2017 19:26:32 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [masakari] problems starting up masakari instance monitoring in devstack @ master ok ... I changed all the domain related attributes in /etc/masakarimonitors/masakarimonitors.conf to be set to 'default'. I seemed to get further, but now get the following error when instancemonitor tries to send a notification: Bad Request (HTTP 400) (Request-ID: req-556c6c4d-0e12-414e-ac8c-8ef3a61d4864), Host with name devstack-masakari could not be found. however: - devstack-masakari is in the /etc/hosts - nova hypervisor-list shows devstack-masakari as a hypervisor hostname - I am running both hostmonitor, processmonitor and instancemonitor any ideas? see details below, Greg. 2017-12-13 13:50:34.353 7637 INFO masakarimonitors.instancemonitor.libvirt_handler.callback [-] Libvirt Event: type=VM, hostname=devstack-masakari, uuid=6ae0b09b-3e93-4f0c-b81b-fa140636f267, time=2017-12-13 13:50:34.353037, event_id=LIFECYCLE, detail=STOPPED_DESTROYED) 2017-12-13 13:50:34.353 7637 INFO masakarimonitors.ha.masakari [-] Send a notification. {'notification': {'hostname': 'devstack-masakari', 'type': 'VM', 'payload': {'instance_uuid': '6ae0b09b-3e93-4f0c-b81b-fa140636f267', 'vir_domain_event': 'STOPPED_DESTROYED', 'event': 'LIFECYCLE'}, 'generated_time': datetime.datetime(2017, 12, 13, 13, 50, 34, 353037)}} 2017-12-13 13:50:34.461 7637 WARNING masakarimonitors.ha.masakari [-] Retry sending a notification. (HttpException: Bad Request (HTTP 400) (Request-ID: req-556c6c4d-0e12-414e-ac8c-8ef3a61d4864), Host with name devstack-masakari could not be found.): HttpException: HttpException: Bad Request (HTTP 400) (Request-ID: req-556c6c4d-0e12-414e-ac8c-8ef3a61d4864), Host with name devstack-masakari could not be found. ^C2017-12-13 13:50:42.462 7637 INFO oslo_service.service [-] Caught SIGINT signal, instantaneous exiting root at devstack-masakari:~# root at devstack-masakari:~# ping devstack-masakari PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.049 ms 64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.022 ms ^C --- localhost ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.022/0.035/0.049/0.014 ms root at devstack-masakari:~# stack at devstack-masakari:~$ nova hypervisor-list +--------------------------------------+---------------------+-------+---------+ | ID | Hypervisor hostname | State | Status | +--------------------------------------+---------------------+-------+---------+ | 5fb1b09b-e5f5-465a-828a-2101135ff700 | devstack-masakari | up | enabled | +--------------------------------------+---------------------+-------+---------+ stack at devstack-masakari:~$ From: Greg Waines Reply-To: "openstack-dev at lists.openstack.org" Date: Wednesday, December 13, 2017 at 8:17 AM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [masakari] problems starting up masakari instance monitoring in devstack @ master So I believe the error is: HttpException: HttpException: Expecting to find domain in project. I have attached the /etc/masakarimonitors/masakarimonitors.conf file and the /etc/masakari/masakari.conf file. See the domain related attributes in each file below. Is the Default vs default causing this problem? Are there other domain related attributes that should be set to default? stack at devstack-masakari:~$ fgrep -i domain /etc/masakari/masakari.conf project_domain_name = Default user_domain_name = Default stack at devstack-masakari:~$ fgrep -i domain /etc/masakarimonitors/masakarimonitors.conf #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s # Domain ID to scope to (string value) #domain_id = # Domain name to scope to (string value) #domain_name = # Domain ID containing project (string value) #project_domain_id = # Domain name containing project (string value) #project_domain_name = project_domain_name = default # Optional domain ID to use with v3 and v2 parameters. It will be used for both # the user and project domain in v3 and ignored in v2 authentication. (string #default_domain_id = # Optional domain name to use with v3 API and v2 parameters. It will be used for # both the user and project domain in v3 and ignored in v2 authentication. #default_domain_name = # User's domain id (string value) #user_domain_id = # User's domain name (string value) #user_domain_name = # Indicate whether this resource may be shared with the domain received in the stack at devstack-masakari:~$ Greg. From: Greg Waines Reply-To: "openstack-dev at lists.openstack.org" Date: Tuesday, December 12, 2017 at 7:19 AM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [masakari] problems starting up masakari instance monitoring in devstack @ master Any thoughts on what I don't have set up correctly wrt the masakari client from the errors below? Greg. From: Greg Waines Reply-To: "openstack-dev at lists.openstack.org" Date: Monday, December 11, 2017 at 8:05 AM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [masakari] problems starting up masakari instance monitoring in devstack @ master I started up a VM and then deleted the VM ... and got some errors that make me think I don't have the masakari client set up correctly. Any ideas? Greg. i.e.
stack at devstack-masakari:~/devstack$ nova list +--------------------------------------+-------------+--------+------------+-------------+---------------------------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------+--------+------------+-------------+---------------------------------------------------------+ | de8922c6-450d-4e2e-954d-ee4bd05ab909 | vm-1-cirros | ACTIVE | - | Running | private=10.0.0.10, fd1a:8f71:1a96:0:f816:3eff:fe5e:f315 | +--------------------------------------+-------------+--------+------------+-------------+---------------------------------------------------------+ stack at devstack-masakari:~/devstack$ stack at devstack-masakari:~/devstack$ nova delete vm-1-cirros Request to delete server vm-1-cirros has been accepted. stack at devstack-masakari:~/devstack$ 2017-12-11 12:58:28.319 25974 INFO masakarimonitors.instancemonitor.libvirt_handler.callback [-] Libvirt Event: type=VM, hostname=devstack-masakari, uuid=de8922c6-450d-4e2e-954d-ee4bd05ab909, time=2017-12-11 12:58:28.318781, event_id=LIFECYCLE, detail=STOPPED_DESTROYED) 2017-12-11 12:58:28.319 25974 INFO masakarimonitors.ha.masakari [-] Send a notification. {'notification': {'hostname': 'devstack-masakari', 'type': 'VM', 'payload': {'instance_uuid': 'de8922c6-450d-4e2e-954d-ee4bd05ab909', 'vir_domain_event': 'STOPPED_DESTROYED', 'event': 'LIFECYCLE'}, 'generated_time': datetime.datetime(2017, 12, 11, 12, 58, 28, 318781)}} 2017-12-11 12:58:28.351 25974 WARNING masakarimonitors.ha.masakari [-] Retry sending a notification. (HttpException: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-feaa30f9-fbb2-4259-8851-488ff7ab82f9), Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error.): HttpException: HttpException: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-feaa30f9-fbb2-4259-8851-488ff7ab82f9), Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. ... several times ... 
and then eventually 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari Traceback (most recent call last): 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/masakarimonitors/ha/masakari.py", line 91, in send_notification 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari payload=event['notification']['payload']) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/masakariclient/sdk/ha/v1/_proxy.py", line 65, in create_notification 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari return self._create(_notification.Notification, **attrs) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/proxy2.py", line 194, in _create 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari return res.create(self._session) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/resource2.py", line 588, in create 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari json=request.body, headers=request.headers) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 848, in post 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari return self.request(url, 'POST', **kwargs) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 64, in map_exceptions_wrapper 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari return func(*args, **kwargs) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 352, in request 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari return super(Session, self).request(*args, **kwargs) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 573, in request 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari auth_headers = self.get_auth_headers(auth) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 900, in get_auth_headers 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari return auth.get_headers(self, **kwargs) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/plugin.py", line 95, in get_headers 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari token = self.get_token(session) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 88, in get_token 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari return self.get_access(session).auth_token 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 134, in get_access 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari self.auth_ref = self.get_auth_ref(session) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/generic/base.py", line 198, in get_auth_ref 2017-12-11 
13:00:28.536 25974 ERROR masakarimonitors.ha.masakari return self._plugin.get_auth_ref(session, **kwargs) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/v3/base.py", line 165, in get_auth_ref 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari authenticated=False, log=False, **rkwargs) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 848, in post 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari return self.request(url, 'POST', **kwargs) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 66, in map_exceptions_wrapper 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari raise exceptions.from_exception(e) 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari HttpException: HttpException: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-3640dd4f-b9d9-4a10-ae37-43a0e202393f), Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. 2017-12-11 13:00:28.536 25974 ERROR masakarimonitors.ha.masakari stack at devstack-masakari:~/devstack$ Greg. From: Greg Waines Reply-To: "openstack-dev at lists.openstack.org" Date: Monday, December 11, 2017 at 7:52 AM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [masakari] problems starting up masakari instance monitoring in devstack @ master Went back and started up corosync and pacemaker and now masakari-instancemonitor starts successfully. stack at devstack-masakari:~/masakari-monitors$ sudo masakari-instancemonitor & [1] 25973 stack at devstack-masakari:~/masakari-monitors$ 2017-12-11 12:47:49.483 25974 INFO masakarimonitors.service [-] Starting masakarimonitors-instancemonitor stack at devstack-masakari:~/masakari-monitors$ I don’t see any log file in /var/log for masakari-instancemonitor or masakari-engine ? All processes seem to be up and running fine: stack at devstack-masakari:~/masakari-monitors$ ps -ef | fgrep masakari stack 11625 1 0 12:29 ? 00:00:10 /usr/bin/python /usr/local/bin/masakari-api --config-file=/etc/masakari/masakari.conf --debug stack 11778 1 0 12:29 ? 00:00:02 /usr/bin/python /usr/local/bin/masakari-engine --config-file=/etc/masakari/masakari.conf --debug stack 12005 11625 0 12:29 ? 00:00:00 /usr/bin/python /usr/local/bin/masakari-api --config-file=/etc/masakari/masakari.conf --debug stack 12006 11625 0 12:29 ? 00:00:00 /usr/bin/python /usr/local/bin/masakari-api --config-file=/etc/masakari/masakari.conf --debug root 19336 1 0 12:19 ? 
00:00:05 /opt/stack/bin/etcd --name devstack-masakari --data-dir /opt/stack/data/etcd --initial-cluster-state new --initial-cluster-token etcd-cluster-01 --initial-cluster devstack-masakari=http://10.10.10.7:2380 --initial-advertise-peer-urls http://10.10.10.7:2380 --advertise-client-urls http://10.10.10.7:2379 --listen-peer-urls http://0.0.0.0:2380 --listen-client-urls http://10.10.10.7:2379 root 25973 25760 0 12:47 pts/0 00:00:00 sudo masakari-instancemonitor root 25974 25973 0 12:47 pts/0 00:00:00 /usr/bin/python /usr/local/bin/masakari-instancemonitor stack 26123 25760 0 12:50 pts/0 00:00:00 grep -F --color=auto masakari rabbitmq 26477 26348 0 12:06 ? 00:00:15 /usr/lib/erlang/erts-7.3/bin/beam.smp -W w -A 64 -P 1048576 -K true -B i -- -root /usr/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.5.7/sbin/../ebin -noshell -noinput -s rabbit boot -sname rabbit at devstack-masakari -boot start_sasl -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit error_logger {file,"/var/log/rabbitmq/rabbit at devstack-masakari.log"} -rabbit sasl_error_logger {file,"/var/log/rabbitmq/rabbit at devstack-masakari-sasl.log"} -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/lib/rabbitmq_server-3.5.7/sbin/../plugins" -rabbit plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit at devstack-masakari-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit at devstack-masakari" -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672 stack at devstack-masakari:~/masakari-monitors$ Greg. From: Greg Waines Reply-To: "openstack-dev at lists.openstack.org" Date: Monday, December 11, 2017 at 7:38 AM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [masakari] problems starting up masakari instance monitoring in devstack @ master Actually, I'm guessing this is wrong because masakari-instancemonitor fails to start when I use these instructions. stack at devstack-masakari:~/masakari-monitors$ sudo masakari-instancemonitor & [1] 22959 stack at devstack-masakari:~/masakari-monitors$ Traceback (most recent call last): File "/usr/local/bin/masakari-instancemonitor", line 10, in sys.exit(main()) File "/usr/local/lib/python2.7/dist-packages/masakarimonitors/cmd/instancemonitor.py", line 31, in main config.parse_args(sys.argv) File "/usr/local/lib/python2.7/dist-packages/masakarimonitors/config.py", line 32, in parse_args default_config_files=default_config_files) File "/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2495, in __call__ self._check_required_opts() File "/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3134, in _check_required_opts raise RequiredOptError(opt.name, group) oslo_config.cfg.RequiredOptError: value required for option auth-url in group [api] [1]+ Exit 1 sudo masakari-instancemonitor Greg. From: Greg Waines Reply-To: "openstack-dev at lists.openstack.org" Date: Monday, December 11, 2017 at 7:17 AM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [masakari] problems starting up masakari instance monitoring in devstack @ master thanks Honjo, I'll try this out now. I'm assuming that it is ok to start ONLY the instance-monitor if that's all I want to test. Is that correct?
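Regarding the RequiredOptError shown above: it means the [api] section of masakarimonitors.conf has no auth_url set. A plausible minimal section is sketched below — only auth_url itself is confirmed by the error message; the remaining option names and values are assumptions based on keystoneauth conventions and the devstack values seen earlier in this thread:

    [api]
    # The keystone endpoint of the devstack host (assumed value).
    auth_url = http://10.10.10.7/identity
    project_name = admin
    project_domain_name = Default
    username = admin
    user_domain_name = Default
    password = admin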
In that case, following the instructions you pointed me at, I would do the following:
- setup devstack
  - i.e. with 'enable_plugin masakari git://git.openstack.org/openstack/masakari'
  - (don't need corosync and pacemaker)
- install & startup client
  - cd
  - git clone https://github.com/openstack/python-masakariclient.git
  - cd python-masakariclient
  - sudo python setup.py build
  - sudo python setup.py install
- install & startup instance monitor
  - cd
  - git clone https://github.com/openstack/masakari-monitors.git
  - sudo mkdir /etc/masakarimonitors
  - cd masakari-monitors
  - sudo python setup.py build
  - sudo python setup.py install
  - sudo masakari-instancemonitor &
Is this correct? Greg. From: Rikimaru Honjo Reply-To: "openstack-dev at lists.openstack.org" Date: Thursday, December 7, 2017 at 12:24 AM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [masakari] problems starting up masakari instance monitoring in devstack @ master Hello Greg, I forgot to tell you. Please use process_list.yaml instead of proc.list.sample. On 2017/12/07 14:03, Rikimaru Honjo wrote: Hello Greg, Please use masakarimonitors.conf instead of hostmonitor.conf and processmonitor.conf. You can generate it by "tox -egenconfig". hostmonitor.conf and processmonitor.conf are used for monitors implemented by shell script. masakarimonitors.conf is a configuration file for monitors implemented by python that you installed. And, we are preparing setup guides. Please have a look if you like. masakari: https://review.openstack.org/#/c/489570/ masakari-monitors: https://review.openstack.org/#/c/489095/ Best regards, On 2017/12/06 22:48, Waines, Greg wrote: I am just getting started working with masakari. I am working on master. I have setup Masakari in Devstack (see details at end of email) ... which starts up masakari-engine and masakari-api processes. I have git cloned the masakari-monitors and started them up (roughly) following the instructions at https://github.com/openstack/masakari-monitors . Specifically: # install & startup monitors cd git clone https://github.com/openstack/masakari-monitors.git cd masakari-monitors sudo python setup.py install cd sudo mkdir /etc/masakarimonitors sudo cp ~/masakari-monitors/etc/masakarimonitors/hostmonitor.conf.sample /etc/masakarimonitors/hostmonitor.conf sudo cp ~/masakari-monitors/etc/masakarimonitors/processmonitor.conf.sample /etc/masakarimonitors/processmonitor.conf sudo cp ~/masakari-monitors/etc/masakarimonitors/proc.list.sample /etc/masakarimonitors/proc.list cd ~/masakari-monitors/masakarimonitors/cmd sudo masakari-processmonitor.sh /etc/masakarimonitors/processmonitor.conf /etc/masakarimonitors/proc.list & sudo masakari-hostmonitor.sh /etc/masakarimonitors/hostmonitor.conf & sudo /usr/bin/python ./instancemonitor.py & However the instancemonitor.py starts and exits ... and does not appear to start any process(es) ... with no error messages and no log file. Is this the correct way to start up masakari instance monitoring? Greg.
My Masakari setup in Devstack

sudo useradd -s /bin/bash -d /opt/stack -m stack
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
sudo su - stack
git clone https://github.com/openstack-dev/devstack
cd devstack

local.conf file:

[[local|localrc]]
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=admin
RABBIT_PASSWORD=admin
SERVICE_PASSWORD=admin

# setup Neutron services
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta

# ceilometer
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
enable_plugin aodh https://git.openstack.org/openstack/aodh

# heat
enable_plugin heat https://git.openstack.org/openstack/heat

# vitrage
enable_plugin vitrage https://git.openstack.org/openstack/vitrage
enable_plugin vitrage-dashboard https://git.openstack.org/openstack/vitrage-dashboard

# masakari
enable_plugin masakari git://git.openstack.org/openstack/masakari

./stack.sh

__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From doug at doughellmann.com Mon Jan 15 19:59:04 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 15 Jan 2018 14:59:04 -0500 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: Message-ID: <1516045136-sup-738@lrrr.local> Excerpts from Erno Kuvaja's message of 2018-01-15 12:59:44 +0000: > I think the worst case scenario is that we scrape together whatever > we can just to have something to say that we test it and not have > consistency nor clear responsibility of who, what and how. > (Unfortunately I think this is the current situation and I'm super > happy to hear that this is being discussed and the decision is not > made lightly.) That seems very far from the current situation to me. We have a large integration test suite written primarily by contributors to the projects it tests. A subset of that is used for the trademark tests. That same subset is in 1 repo, managed by the QA team, who apply the extra review criteria needed for the trademark program to be stable. The fact that so many people seem uninformed about how all of this works is exactly why I think it's a mistake to spread the tests out and have a bunch of different teams applying different review criteria to them.
Doug From mdulko at redhat.com Mon Jan 15 20:37:06 2018 From: mdulko at redhat.com (mdulko at redhat.com) Date: Mon, 15 Jan 2018 21:37:06 +0100 Subject: [openstack-dev] [kuryr][os-vif][nova] os-vif 1.8.0 breaks kuryr-kubernetes In-Reply-To: <4fb2222f-f0bb-9b8c-c2dd-55b504a06c20@gmail.com> References: <1516034746.4371.40.camel@redhat.com> <4fb2222f-f0bb-9b8c-c2dd-55b504a06c20@gmail.com> Message-ID: <1516048626.4371.60.camel@redhat.com> On Mon, 2018-01-15 at 12:30 -0500, Jay Pipes wrote: > On 01/15/2018 11:45 AM, mdulko at redhat.com wrote: > > Hi, > > > > os-vif commit [1] introduced a non-backward compatible change to the > > Subnet object - removal of the ips field. Turns out kuryr-kubernetes was > > depending on that, e.g. here [2], and we're now broken with os-vif 1.8.0. > > > > kuryr-kubernetes is saving the VIF objects into the K8s resources > > annotations, so to keep backwards compatibility we need > > VIFBase.obj_make_compatible to be able to backport the data back into the > > Subnet object. Or be able to load the older data to the newer object. > > Does anyone have advice on how we should proceed with that issue? > > It would have been great to know kuryr-kubernetes was saving/using these > objects :) as mentioned on the original os-vif code review, the > versioned objects in os-vif have yet to be used in over-the-wire > communication nor have they been saved to a backing data store by Nova > or Neutron. Thus, we haven't bothered with the obj_make_compatible() > stuff yet. > > If we had known there was a non-Nova non-Neutron client of os-vif, of > course we would have been tracking changes using obj_make_compatible(). Sure thing, I've already learned that without CI breaking things is expected. :) > That said, even if we *were* using obj_make_compatible() and allowing > for the backversioning of object formats, that would not have magically > enabled kuryr-kubernetes to work with our objects without modification. > kuryr-kubernetes would still need to do the dance of telling os-vif > somehow what version of the objects it expects to be given. This is > what all the infrastructure in oslo.versionedobjects' client-server > negotiation does and it's non-trivial. Agreed, though now the only solution is either to implement backporting on the kuryr-kubernetes side or drop backward compatibility with already existing deployments. > Bottom line, we can straight revert the os-vif patch in question > (because it's really just a cleanup), release os-vif 1.8.1 by the cutoff > on Thursday and "fix" kuryr-kubernetes. However, we will want to have a > call with you guys to tell you exactly how to do the versioning > negotiation that you will now need to do since you're storing these > objects somewhere. If that change isn't critical to anyone, I'd prefer to do the revert now and discuss how to do such changes correctly in the future. Implementing obj_make_compatible would help us, but I'm not sure it's a good long-term solution, because it would require kuryr-kubernetes to *always* specify the version of all the o.vo it uses, which may lead to locking it to the lowest version that's available. In the case of o.vo used for DB & RPC communication that's solved by online data migrations. We don't have such a framework now and I simply don't know if we should copy the same approach. > Best, > -jay > > > It would also be nice to set up a kuryr-kubernetes gate on the os-vif > > repo. If there are no objections to that I'd volunteer to submit a > > commit that adds it.
> > Thanks, > > Michal > > > > [1] https://review.openstack.org/#/c/508498 > > [2] https://github.com/openstack/kuryr-kubernetes/blob/18db6499432e6cab61059eb5abeeaad3ea40b6e4/kuryr_kubernetes/cni/binding/base.py#L64-L66 > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mdulko at redhat.com Mon Jan 15 20:43:19 2018 From: mdulko at redhat.com (mdulko at redhat.com) Date: Mon, 15 Jan 2018 21:43:19 +0100 Subject: [openstack-dev] [kuryr][os-vif][nova] os-vif 1.8.0 breaks kuryr-kubernetes In-Reply-To: <4B1BB321037C0849AAE171801564DFA6889ADE28@IRSMSX107.ger.corp.intel.com> References: <1516034746.4371.40.camel@redhat.com> <4B1BB321037C0849AAE171801564DFA6889ADE28@IRSMSX107.ger.corp.intel.com> Message-ID: <1516048999.4371.65.camel@redhat.com> On Mon, 2018-01-15 at 17:21 +0000, Mooney, Sean K wrote: > > -----Original Message----- > > From: mdulko at redhat.com [mailto:mdulko at redhat.com] > > Sent: Monday, January 15, 2018 4:46 PM > > To: openstack-dev at lists.openstack.org > > Subject: [openstack-dev] [kuryr][os-vif][nova] os-vif 1.8.0 breaks > > kuryr-kubernetes > > > > Hi, > > > > os-vif commit [1] introduced a non-backward compatible change to the > > Subnet object - removal of the ips field. Turns out kuryr-kubernetes was > > depending on that, e.g. here [2], and we're now broken with os-vif 1.8.0. > > > > kuryr-kubernetes is saving the VIF objects into the K8s resources > > annotations, so to keep backwards compatibility we need > > VIFBase.obj_make_compatible to be able to backport the data back into the > > Subnet object. Or be able to load the older data to the newer object. > > Does anyone have advice on how we should proceed with that issue? > > [Mooney, Sean K] I believe obj_make_compatible methods were in the original > patch but they were removed as we did not know of any user of this field. > The ips field in the subnet object was a legacy holdover from when the > object was ported from nova-networks. > It is never used by nova when calling os-vif today, hence the > change to align the data structure more closely with neutron's, > where the fixed IPs are an attribute of the port. > The change was made to ensure no future users of os-vif consumed > the fixed IPs from the subnet object, but I guess kuryr-kubernetes had already done so. > > Ideally we would migrate kuryr-kubernetes to consume fixed_ips from the VIF object instead of the subnet, > but if we can introduce a patch to os-vif to provide backward compatibility > before the non-client lib freeze on Thursday we can include that in Queens. I prefer Jay's proposed solution of simply doing a revert of the patch, though this would work as well, at least as short-term remediation. > > > > It would also be nice to set up a kuryr-kubernetes gate on the os-vif > > repo. If there are no objections to that I'd volunteer to submit a > > commit that adds it. > > [Mooney, Sean K] I would be happy to see gates from all consumers of os-vif, so go for it. Will do!
> Related to this, https://review.openstack.org/#/c/509107/4 is currently abandoned but I would > also like to revive this change in Rocky. Neutron has supported multiple DHCP servers for > some time; nova-networks only supported one, hence why the dhcp_server field is currently singular. > Will this affect kuryr-kubernetes? > Are ye currently working around this issue in some other way? Looks like kuryr-kubernetes doesn't depend on the dhcp_server field, though I believe we should work out a way of doing such changes that's safe for everyone. Making guesses on projects depending on certain fields is dangerous. From breton at cynicmansion.ru Mon Jan 15 20:47:40 2018 From: breton at cynicmansion.ru (Boris Bobrov) Date: Mon, 15 Jan 2018 21:47:40 +0100 Subject: [openstack-dev] [keystone] Stepping down from Keystone core Message-ID: <8631bb8c-d907-fde8-c509-0a6f0b5d52f6@cynicmansion.ru> Hey! I don't work on Keystone as much as I used to any more, so I'm stepping down from core reviewers. Don't expect to get rid of me though. I still work on OpenStack-related stuff and I will annoy you all in #openstack-keystone and in other IRC channels. From lbragstad at gmail.com Mon Jan 15 21:20:14 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 15 Jan 2018 15:20:14 -0600 Subject: [openstack-dev] [keystone] Stepping down from Keystone core In-Reply-To: <8631bb8c-d907-fde8-c509-0a6f0b5d52f6@cynicmansion.ru> References: <8631bb8c-d907-fde8-c509-0a6f0b5d52f6@cynicmansion.ru> Message-ID: <996ffdc3-c711-ac0d-42ed-dbb0f1a7f894@gmail.com> Boris, Thank you for all the contributions you've made to the project. I always found your reviews extremely thorough and they certainly improved the quality of our code. I'm sad to see you go, but I'm relieved to know that you're not completely leaving us :) This goes without saying, but should you find yourself looking to get involved in keystone again, we can certainly expedite your path to core. Thanks again, Boris On 01/15/2018 02:47 PM, Boris Bobrov wrote: > Hey! > > I don't work on Keystone as much as I used to any more, so I'm > stepping down from core reviewers. > > Don't expect to get rid of me though. I still work on OpenStack-related > stuff and I will annoy you all in #openstack-keystone and in other IRC > channels. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From anlin.kong at gmail.com Mon Jan 15 22:02:36 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Tue, 16 Jan 2018 11:02:36 +1300 Subject: [openstack-dev] [mistral] Adding Adriano Petrich to the core team Message-ID: welcome to the team, Adriano! Cheers, Lingxian Kong (Larry) On Mon, Jan 15, 2018 at 10:11 PM, Renat Akhmerov wrote: > Hi, > > I'd like to promote Adriano Petrich to the Mistral core team. Adriano has > shown a good review rate and quality at least over the last two cycles > and implemented several important features (including new useful YAQL/JINJA > functions). > > Please vote whether you agree to add Adriano to the core team.
> > Adriano's statistics: http://stackalytics.com/?module=mistral-group&release=queens&user_id=apetrich > > Thanks > > Renat Akhmerov > > @Nokia > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Tue Jan 16 04:03:19 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 16 Jan 2018 11:03:19 +0700 Subject: [openstack-dev] [mistral] Adding Adriano Petrich to the core team In-Reply-To: References: Message-ID: Adriano, you now have +2 vote and can approve patches :) Welcome! Thanks Renat Akhmerov @Nokia On 16 Jan 2018, 05:03 +0700, Lingxian Kong , wrote: > welcome to the team, Adriano! > > > Cheers, > Lingxian Kong (Larry) > > > On Mon, Jan 15, 2018 at 10:11 PM, Renat Akhmerov wrote: > > > Hi, > > > > > > I'd like to promote Adriano Petrich to the Mistral core team. Adriano has shown a good review rate and quality at least over the last two cycles and implemented several important features (including new useful YAQL/JINJA functions). > > > > > > Please vote whether you agree to add Adriano to the core team. > > > > > > Adriano's statistics: http://stackalytics.com/?module=mistral-group&release=queens&user_id=apetrich > > > > > > Thanks > > > > > > Renat Akhmerov > > > @Nokia > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhengzhenyulixi at gmail.com Tue Jan 16 06:35:02 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Tue, 16 Jan 2018 14:35:02 +0800 Subject: [openstack-dev] [nova][cinder] Questions about truncated disk serial number Message-ID: Hi, I met a problem like this recently: When attaching a volume to an instance, in the xml, the disk is described as: [image: Inline image 1] where the serial number here is the volume uuid in Cinder. While inside the vm: in /dev/disk/by-id, there is a link for /vdb with the name of "virtio"+truncated serial number: [image: Inline image 2] and according to https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/ch16s03.html it seems that we will use this to mount the volume. The truncation seems to happen here [1][2], at 20 digits.
*My question here is:* if two volumes have identical first 20 digits in
their UUIDs, it seems that the later-attached one will overwrite the first
one's link:

[image: Inline image 3]

(the above graph is a snapshot from a volume-backed instance; the
virtio-15exxxxx link pointed to vda before, though the by-path links seem
correct)

It is rare for two UUIDs to share their first 20 digits, but it is possible,
so what was the consideration behind truncating to only 20 digits of the
volume UUID instead of using the full 32?

BR,

Kevin Zheng

From zhengzhenyulixi at gmail.com Tue Jan 16 06:36:00 2018
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Tue, 16 Jan 2018 14:36:00 +0800
Subject: [openstack-dev] [nova][cinder] Questions about truncated disk serial number
In-Reply-To: References: Message-ID:

Oops, forgot references:
[1] https://github.com/torvalds/linux/blob/1cc15701cd89b0ce695bbc5cff3a2bf3e2efd25f/include/uapi/linux/virtio_blk.h#L54
[2] https://github.com/torvalds/linux/blob/1cc15701cd89b0ce695bbc5cff3a2bf3e2efd25f/drivers/block/virtio_blk.c#L363

On Tue, Jan 16, 2018 at 2:35 PM, Zhenyu Zheng wrote:
> Hi,
>
> I ran into a problem like this recently:
> [...]
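To make the collision concrete, here is a minimal Python sketch of the naming
scheme described above (the 20-byte cap comes from VIRTIO_BLK_ID_BYTES in [1];
the UUIDs are made up):

    VIRTIO_BLK_ID_BYTES = 20  # serial length cap, include/uapi/linux/virtio_blk.h

    def by_id_link(volume_uuid):
        # udev names the link after the (possibly truncated) virtio serial
        return '/dev/disk/by-id/virtio-' + volume_uuid[:VIRTIO_BLK_ID_BYTES]

    u1 = '15ea71cd-49e5-4f4d-bf8d-000000000001'
    u2 = '15ea71cd-49e5-4f4d-bf8d-000000000002'
    # Both volumes map to /dev/disk/by-id/virtio-15ea71cd-49e5-4f4d-b,
    # so whichever is attached last silently takes over the link.
    assert by_id_link(u1) == by_id_link(u2)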
From yikunkero at gmail.com Tue Jan 16 09:19:37 2018
From: yikunkero at gmail.com (Yikun Jiang)
Date: Tue, 16 Jan 2018 17:19:37 +0800
Subject: [openstack-dev] [nova][cinder] Questions about truncated disk serial number
In-Reply-To: References: Message-ID:

Some detailed steps below:

1. First, we have 2 volumes with the same part-uuid prefix.

[image: Inline image 1]

volume(yikun2) is attached to server(test)

2. In the guest OS (CentOS 7), take a look at by-path and by-id:

[image: Inline image 2]

We found that both the by-path and by-id vdb links were generated successfully.

3. Attach volume(yikun2_1) to server(test)

[image: Inline image 4]

4. In the guest OS (CentOS 7), take a look at by-path and by-id:

[image: Inline image 6]

The by-path soft link was generated successfully, but the by-id link failed
to generate.

*That is, in this case, if a user finds the device by by-id, it will either
fail to find it or find the wrong device.*

One such user case happened with k8s device finding; for more info, see the
ref below:
https://github.com/kubernetes/kubernetes/blob/53a8ac753bf468eaf6bcb5a07e34a0a67480df43/pkg/cloudprovider/providers/openstack/openstack_volumes.go#L463

So, I think by-id is NOT a good way to find the device, but what is the best
practice? Let's hear other ideas.

Regards,
Yikun
----------------------------------------
Jiang Yikun(Kero)
Mail: yikunkero at gmail.com

2018-01-16 14:36 GMT+08:00 Zhenyu Zheng :
> Oops, forgot references:
> [...]
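One guest-side workaround is to treat the serial as a prefix rather than an
exact value. A sketch (assuming a virtio-blk guest that populates
/sys/block/<dev>/serial; note it still cannot disambiguate two volumes sharing
the same first 20 characters):

    import os

    def find_disk_by_volume_id(volume_uuid, sysfs_root='/sys/block'):
        """Best-effort match of a Cinder volume UUID to a guest block device."""
        for dev in os.listdir(sysfs_root):
            serial_path = os.path.join(sysfs_root, dev, 'serial')
            try:
                with open(serial_path) as f:
                    serial = f.read().strip()
            except IOError:
                continue  # device without a serial attribute
            if serial and volume_uuid.startswith(serial):
                return '/dev/' + dev
        return None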
From apetrich at redhat.com Tue Jan 16 09:37:52 2018
From: apetrich at redhat.com (Adriano Petrich)
Date: Tue, 16 Jan 2018 09:37:52 +0000
Subject: [openstack-dev] [mistral] Adding Adriano Petrich to the core team
In-Reply-To: References: Message-ID:

Thank you!

On Tue, Jan 16, 2018 at 4:03 AM, Renat Akhmerov wrote:
> Adriano, you now have +2 vote and can approve patches :) Welcome!
> [...]
From lpetrut at cloudbasesolutions.com Tue Jan 16 10:54:31 2018
From: lpetrut at cloudbasesolutions.com (Lucian Petrut)
Date: Tue, 16 Jan 2018 10:54:31 +0000
Subject: [openstack-dev] [nova][cinder] Questions about truncated disk serial number
In-Reply-To: References: Message-ID: <1516100070.2492.1.camel@cloudbasesolutions.com>

Hi,

This seems to be a QEMU limitation, the 20 character limit being enforced here:
https://github.com/qemu/qemu/blob/v2.11.0/hw/scsi/scsi-disk.c#L645

We do have an alternative approach for safely identifying disks. When using
Hyper-V, we cannot change the disk serial id that gets exposed to the guest.
For this reason, we always provide the disk SCSI address through the Nova
instance metadata, which already gets used by Kubernetes [1]. Note that the
Nova libvirt driver only provides it when a volume tag is set [2].

[1] https://github.com/kubernetes/kubernetes/blob/53a8ac753bf468eaf6bcb5a07e34a0a67480df43/pkg/cloudprovider/providers/openstack/openstack_volumes.go#L485
[2] https://github.com/openstack/nova/blob/17.0.0.0b2/nova/virt/libvirt/driver.py#L7956-L7969

Regards,
Lucian Petrut (lpetrut)

On Tue, 2018-01-16 at 17:19 +0800, Yikun Jiang wrote:
> Some detailed steps below:
> [...]

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
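A sketch of that metadata-based lookup (the endpoint and the "devices" field
names follow Nova's device tagging metadata format as best recalled; treat
them as assumptions to verify against the docs, and note that volume attach
tags need a recent compute API microversion):

    import requests

    METADATA_URL = 'http://169.254.169.254/openstack/latest/meta_data.json'

    def lookup_device(volume_uuid):
        """Find a tagged disk's bus address via the metadata service."""
        meta = requests.get(METADATA_URL).json()
        for dev in meta.get('devices', []):
            if dev.get('type') == 'disk' and dev.get('serial') == volume_uuid:
                # e.g. ('scsi', '0:0:0:1') -- stable regardless of truncation
                return dev.get('bus'), dev.get('address')
        return None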
From glongwave at gmail.com Tue Jan 16 10:55:26 2018
From: glongwave at gmail.com (ChangBo Guo)
Date: Tue, 16 Jan 2018 18:55:26 +0800
Subject: [openstack-dev] [oslo] not run for PTL
Message-ID:

Hi Oslo folks,

I have taken the role of PTL for the last 2 cycles and would like to focus
on coding this cycle. It's time to let a new leader make Oslo better, so I
won't be running for PTL reelection for the Rocky cycle. Thanks for all of
your support and trust over the last 2 cycles.

-- 
ChangBo Guo(gcb)
Community Director @EasyStack

From glongwave at gmail.com Tue Jan 16 11:12:16 2018
From: glongwave at gmail.com (ChangBo Guo)
Date: Tue, 16 Jan 2018 19:12:16 +0800
Subject: [openstack-dev] [glance][oslo][requirements] oslo.serialization fails with glance
In-Reply-To: <20180115161200.nvg34l653w3rxggy@gentoo.org>
References: <20180113064128.byill2yngkjgbys2@mthode.org> <20180115161200.nvg34l653w3rxggy@gentoo.org>
Message-ID:

What's the issue for Glance, any bug link?

2018-01-16 0:12 GMT+08:00 Matthew Thode :
> On 18-01-13 00:41:28, Matthew Thode wrote:
> > https://review.openstack.org/531788 is the review we are seeing it in,
> > but 2.22.0 failed as well.
> > I'm guessing it was introduced in either
> > https://github.com/openstack/oslo.serialization/commit/c1a7079c26d27a2e46cca26963d3d9aa040bdbe8
> > or
> > https://github.com/openstack/oslo.serialization/commit/cdb2f60d26e3b65b6370f87b2e9864045651c117
>
> bump
>
> --
> Matthew Thode (prometheanfire)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
ChangBo Guo(gcb)
Community Director @EasyStack

From a.chadin at servionica.ru Tue Jan 16 11:31:29 2018
From: a.chadin at servionica.ru (Alexander Chadin)
Date: Tue, 16 Jan 2018 11:31:29 +0000
Subject: [openstack-dev] [watcher] Dublin PTG agenda
Message-ID: <864B0300-27BB-44F0-BFEB-16FD48DC2428@servionica.ru>

Watcher team,

We are preparing the PTG agenda here: https://etherpad.openstack.org/p/rocky-watcher-ptg
Feel free to add your topics. I'm going to discuss this content at the next
weekly meeting (which will be held tomorrow at 13:00 UTC).

Best Regards,
____
Alex

From davanum at gmail.com Tue Jan 16 11:43:12 2018
From: davanum at gmail.com (Davanum Srinivas)
Date: Tue, 16 Jan 2018 06:43:12 -0500
Subject: [openstack-dev] [oslo] not run for PTL
In-Reply-To: References: Message-ID:

Thanks for all your help @gcb!

On Tue, Jan 16, 2018 at 5:55 AM, ChangBo Guo wrote:
> Hi Oslo folks,
> [...]

-- 
Davanum Srinivas :: https://twitter.com/dims

From gkotton at vmware.com Tue Jan 16 12:51:45 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Tue, 16 Jan 2018 12:51:45 +0000
Subject: [openstack-dev] [Neutron] Bug deputy report
Message-ID: <85126F6B-E961-4B24-BFEB-B07124F97C5D@vmware.com>

Hi,

Things have been relatively quiet. There are two bugs:

1. https://bugs.launchpad.net/neutron/+bug/1743480 - I think that we can
leverage tags here, so that should address the issue. It would be
interesting to know what others think.

2. https://bugs.launchpad.net/neutron/+bug/1743552 - patch in review
https://review.openstack.org/#/c/534263/

I need to drop off bug duty on Thursday night, so could someone please swap
with me for Friday.

Thanks
Gary
From doug at doughellmann.com Tue Jan 16 13:34:04 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 16 Jan 2018 08:34:04 -0500
Subject: [openstack-dev] [oslo] not run for PTL
In-Reply-To: References: Message-ID: <1516109309-sup-589@lrrr.local>

Excerpts from ChangBo Guo's message of 2018-01-16 18:55:26 +0800:
> Hi Oslo folks,
> [...]

Thank you for serving as PTL, gcb! The libraries have been quite stable
lately under your leadership.

Doug

From openstack at medberry.net Tue Jan 16 14:24:45 2018
From: openstack at medberry.net (David Medberry)
Date: Tue, 16 Jan 2018 07:24:45 -0700
Subject: [openstack-dev] Ops Mid Cycle in Tokyo Mar 7-8 2018
Message-ID:

Hi all,

Broad distribution to make sure folks are aware of the upcoming Ops Meetup
in Tokyo. You can help "steer" this meetup by participating in the planning
meetings, or more practically by editing this page (respectfully):

https://etherpad.openstack.org/p/TYO-ops-meetup-2018

Sign-up for the meetup is here: https://goo.gl/HBJkPy

We'll see you there!

-dave

From amotoki at gmail.com Tue Jan 16 14:38:03 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Tue, 16 Jan 2018 23:38:03 +0900
Subject: [openstack-dev] [Neutron] Bug deputy report
In-Reply-To: <85126F6B-E961-4B24-BFEB-B07124F97C5D@vmware.com>
References: <85126F6B-E961-4B24-BFEB-B07124F97C5D@vmware.com>
Message-ID:

Gary,

> I need to drop off bug duty on Thursday night, so could someone please
> swap with me for Friday.

I am the bug deputy for next week. I can start my coverage from this Friday.

Thanks,
Akihiro

2018-01-16 21:51 GMT+09:00 Gary Kotton :
> Hi,
> [...]

From emilien at redhat.com Tue Jan 16 14:55:03 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 16 Jan 2018 06:55:03 -0800
Subject: [openstack-dev] [tripleo] storyboard evaluation
Message-ID:

Hey folks,

Alex and I spent a bit of time looking at StoryBoard, as it seems that
some OpenStack projects are migrating from Launchpad to StoryBoard.
I created a dev instance of StoryBoard and imported all bugs from
TripleO so we could have a look at how it would be if we were using
the tool:

http://storyboard.macchi.pro:9000/

So far, I liked:

- the import went just... fine. Really good work! Title, status,
descriptions, tags, and comments were all imported successfully.
- the simplicity of bug statuses, which are clearer than Launchpad's:
Active, Merged, Invalid.
- the UI is really good and it works fine on mobile.
- if we manage to make the migration good, each TripleO squad would
have their own backlog and their own boards / worklists / ways to manage
todos.

What we need to investigate:
- how we deal with milestones in stories, and how we can have a
dashboard with an overview per milestone (useful for the PTL + TripleO
release managers).
- how to update old Launchpad bugs with the new link in StoryBoard
(probably by hacking the migration script during the import).
- what we want the import to look like:
  * all bugs into a single project?
  * closing launchpad/tripleo/bugs access? if so we lose web search
on popular bugs
- (more a todo) update the elastic-recheck bugs
- investigate our TripleO Alerts (probably will have to use the StoryBoard
API instead of Launchpad).

Anyway, in short conclusion: I think the project is great, but until
we figure out what we need to investigate, we can't migrate easily.
If you would like to be involved in that topic, please let us know;
any help is welcome.

Thanks,
--
Emilien Macchi
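As a quick illustration of that last point, a sketch against the StoryBoard
REST API (the tags filter and the exact field names are assumptions to
double-check against the API reference):

    import requests

    resp = requests.get('https://storyboard.openstack.org/api/v1/stories',
                        params={'tags': 'blocking-storyboard-migration'})
    for story in resp.json():
        print(story['id'], story['status'], story['title'])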
From zhipengh512 at gmail.com Tue Jan 16 15:15:43 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Tue, 16 Jan 2018 23:15:43 +0800
Subject: [openstack-dev] [ResMgmt SIG] Proposal to form Resource Management SIG
In-Reply-To: References: <11ce8607-0a59-401d-0605-c36c2a901cf9@gmail.com>
Message-ID:

The application has been filed at https://review.openstack.org/534342 and the
wiki created at https://wiki.openstack.org/wiki/Res_Mgmt_SIG .

On Wed, Jan 10, 2018 at 12:51 AM, Zhipeng Huang wrote:
> I think I could do it, but I've got to rely on you guys to attend the
> Resource Management WG meeting, since its time is really bad for us in
> the APAC timezone :P
>
> On Tue, Jan 9, 2018 at 6:30 PM, Chris Dent wrote:
>> On Mon, 8 Jan 2018, Jay Pipes wrote:
>> [...]
>>
>> I agree, would much prefer to see more email and fewer meetings. It
>> would be fantastic if we can get some cross-pollination discussion
>> happening.
>>
>> A status email, especially one that was cross-ecosystem, would be
>> great. Unfortunately I can't commit to doing that myself (the
>> existing 2 a week I do is plenty) but hope someone will take it up.
>> [...]

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

From persia at shipstone.jp Tue Jan 16 15:31:48 2018
From: persia at shipstone.jp (Emmet Hikory)
Date: Tue, 16 Jan 2018 10:31:48 -0500
Subject: [openstack-dev] [tripleo] storyboard evaluation
In-Reply-To: References: Message-ID: <743c66ea-07d4-45ba-8284-8122e25135a4@Spark>

Emilien Macchi wrote:
> What we need to investigate:
> - how we deal with milestones in stories, and how we can have a
> dashboard with an overview per milestone (useful for the PTL + TripleO
> release managers).

    While the StoryBoard API supports milestones, they don't work very
similarly to "milestones" in Launchpad, so they are probably confusing to
adopt (and have no UI support). Some folks use tags for this (perhaps with an
automatic worklist that selects all the stories with the tag, for an
overview).

> - how to update old Launchpad bugs with the new link in StoryBoard
> (probably by hacking the migration script during the import).

    There had been a task on the migration story to do this, but it was
dropped in favour of project-wide communications, rather than per-bug
communications. An add-on script to modify bugs after migration is complete
is probably a richer solution than direct modification of the current
migration script (which allows multiple runs to keep up to date during
transition), if TripleO wishes per-bug communication.

    Some context at https://storyboard.openstack.org/#!/story/2000876

— Emmet HIKORY

From prometheanfire at gentoo.org Tue Jan 16 15:35:06 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Tue, 16 Jan 2018 09:35:06 -0600
Subject: [openstack-dev] [glance][oslo][requirements] oslo.serialization fails with glance
In-Reply-To: References: <20180113064128.byill2yngkjgbys2@mthode.org> <20180115161200.nvg34l653w3rxggy@gentoo.org>
Message-ID: <20180116153506.f3digy3ohuawpwas@gentoo.org>

On 18-01-16 19:12:16, ChangBo Guo wrote:
> What's the issue for Glance, any bug link?
> [...]

The best bug for this is
https://bugs.launchpad.net/oslo.serialization/+bug/1728368 and we are
currently getting test failures in https://review.openstack.org/531788

-- 
Matthew Thode (prometheanfire)
From openstack at nemebean.com Tue Jan 16 15:51:05 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Tue, 16 Jan 2018 09:51:05 -0600
Subject: [openstack-dev] [tripleo] storyboard evaluation
In-Reply-To: References: Message-ID: <79188057-d71b-4f66-f277-2dd9dca1f881@nemebean.com>

On 01/16/2018 08:55 AM, Emilien Macchi wrote:
> Hey folks,
> [...]

We discussed this in the last Designate meeting too, and it was noted that
there are some stories tracking the migration-blocking issues:

https://storyboard.openstack.org/#!/search?tags=blocking-storyboard-migration

It might be good to add stories for any of these issues that aren't already
covered.

From German.Eichberger at rackspace.com Tue Jan 16 16:04:11 2018
From: German.Eichberger at rackspace.com (German Eichberger)
Date: Tue, 16 Jan 2018 16:04:11 +0000
Subject: [openstack-dev] [neutron][fwaas] YouTube video demoing FWaaS V2 L2
Message-ID:

All,

With great pleasure I am sharing this link to Chandan's excellent video
showing the new L2 functionality:
https://www.youtube.com/watch?v=gBYJIZ4tUaw&feature=youtu.be

Psyched, and many thanks to Chandan

--
German

From davidgab283 at gmail.com Tue Jan 16 16:28:06 2018
From: davidgab283 at gmail.com (David Gabriel)
Date: Tue, 16 Jan 2018 17:28:06 +0100
Subject: [openstack-dev] ping between 2 instances using an ovs in the middle
Message-ID:

Dears,

I am writing this email to ask for your help fixing a problem I have been
facing for a while, related to creating two Ubuntu instances in OpenStack
(Fuel 9.2 for Mitaka) and setting up an OVS bridge in each VM.

Here is the problem description:

I have defined two instances, called VM1 and VM2, and an OVS bridge, each
deployed in one virtual machine (VM) based on this simple topology:

*VM1* ---LAN1----*OVS*---LAN2--- *VM2*

I used the following commands, taken from some tutorial, for OVS:

ovs-vsctl add-br mybridge1
ifconfig mybridge1 up
ovs-vsctl add-port eth1 mybridge1
ifconfig eth1 0
ovs-vsctl add-port eth1 mybridge1
ovs-vsctl set-controller mybridge tcp:AddressOfController:6633

Then I tried to ping between the two VMs, but it fails!

Could you please tell/guide me how to fix this problem?

Thanks in advance.

Best regards.
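A note on the commands above: ovs-vsctl add-port expects the bridge name
first and then the port, the add-port line is duplicated, and set-controller
references "mybridge" while the bridge was created as "mybridge1". Assuming
the tutorial meant to attach eth1 to mybridge1, the sequence would look like:

    ovs-vsctl add-br mybridge1
    ifconfig mybridge1 up
    ovs-vsctl add-port mybridge1 eth1
    ifconfig eth1 0
    ovs-vsctl set-controller mybridge1 tcp:AddressOfController:6633

Also keep in mind that once a controller is set, forwarding depends entirely
on the flows the controller installs; without a controller (or with the
bridge in standalone fail-mode), OVS acts as a plain learning switch, which
is the easier configuration to verify with ping first.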
From fungi at yuggoth.org Tue Jan 16 16:29:32 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 16 Jan 2018 16:29:32 +0000
Subject: [openstack-dev] [tripleo] storyboard evaluation
In-Reply-To: References: Message-ID: <20180116162932.urmfaviw7b3ihnel@yuggoth.org>

On 2018-01-16 06:55:03 -0800 (-0800), Emilien Macchi wrote:
[...]
> I created a dev instance of StoryBoard and imported all bugs from
> TripleO so we could have a look at how it would be if we were using
> the tool:
[...]

Awesome! We do also have a https://storyboard-dev.openstack.org/
deployment we can do test migrations into if you'd prefer something
more central with which to play around.

> - how we deal with milestones in stories, and how we can have a
> dashboard with an overview per milestone (useful for the PTL +
> TripleO release managers).

So far, the general suggestion for stuff like this is to settle on a
consistent set of story tags to apply. It really depends on whether
you're trying to track this at a story or task level (there is no
per-task tagging implemented yet at any rate). I could imagine, for
example, setting something like tripleo-r2 as a tag on stories whose
TripleO deliverable tasks are targeting Rocky milestone #2, and then
you could have an automatic board with stories matching that tag and
lanes based on the story status.

> - how to update old Launchpad bugs with the new link in StoryBoard
> (probably by hacking the migration script during the import).

We've debated this... unfortunately mass bug updates are a challenge
with LP due to the slowness and instability of its API. We might be
able to get away with leaving a comment on each open LP bug for a
project with a link to its corresponding story, but it would take a
long time and may need many retries for some bugs with large numbers
of subscribers. Switching the status of all the LP bugtasks en masse
is almost guaranteed to be a dead end, since bugtask status changes
trigger API timeouts far more often based on our prior experience
with LP integration, though I suppose we could just live with the
idea that some of them might be uncloseable and ignore that fraction.
If the migration script is to do any of this, it will also need to be
extended to support LP authentication (since it currently only
performs anonymous queries, it doesn't need to authenticate). Further,
that tool is currently designed to support being rerun against the
same set of projects for iterative imports in the case of failure, or
to pick up newer comments/bugs, so it would need to know to filter out
its own comments for purposes of sanity.

> - what we want the import to look like:
> * all bugs into a single project?

Remember that the model for StoryBoard is like LP in that stories
(analogous to bugs in LP) are themselves projectless.
It's their tasks (similar to bugtasks in LP) which map to specific
projects, and a story can have tasks related to multiple projects. In
our deployment of SB we create an SB project for each Git repository,
so over time you would expect the distribution of tasks to cover many
"projects" (repositories) maintained by your team. The piece you may
be missing here is that you can also define SB projects as belonging
to one or more project groups, and in most cases by convention we've
defined groups corresponding to official project teams (the governance
concept of a "project") for ease of management.

> * closing launchpad/tripleo/bugs access? if so we lose web search
> on popular bugs

They don't disappear from bugs.launchpad.net, and in fact you can't
really even prevent people from updating those bugs or adding bugtasks
for your project to other bug reports. What you have control over is
disabling the ability to file new bugs and list existing bugs from
your project page in LP. I would also recommend updating the project
description on LP to prominently feature the URL to a closely
corresponding project or group in SB.

Separately, I notice https://storyboard.openstack.org/robots.txt has
been disallowing indexing by search engines... I think this is
probably an oversight we should correct ASAP and I've just now added
it to the agenda to discuss at today's Infra team meeting.

> - (more a todo) update the elastic-recheck bugs

This should hopefully be more of a (trivial?) feature add to ER, since
the imported stories keep the same story numbers as the bugs from
which they originated.

> - investigate our TripleO Alerts (probably will have to use the
> StoryBoard API instead of Launchpad).
[...]

Thankfully, SB was designed from the very beginning in an API-first
manner with the WebUI merely one possible API client (there are also
other clients like the boartty console client and a . In theory pretty
much anything you can do through the WebUI can also be done through
the API, as opposed to LP where the API is sort of bolted-on.

-- 
Jeremy Stanley

From emilien at redhat.com Tue Jan 16 16:29:31 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 16 Jan 2018 08:29:31 -0800
Subject: [openstack-dev] [all] [tc] Community Goals for Rocky
In-Reply-To: <76c4df1e-2e82-96c6-a983-36040855a42d@gmail.com>
References: <397BB99F-D7B2-47B3-9724-E8B628EFD5C2@cern.ch> <76c4df1e-2e82-96c6-a983-36040855a42d@gmail.com>
Message-ID:

Here's an update so we can hopefully, as a community, take a decision
in the next days or so.

* Migration to StoryBoard

Champion: Kendall Nelson
https://review.openstack.org/#/c/513875/
Some projects have already migrated and some will migrate soon, but
there is still a gap of things that prevents some projects from
migrating.
See https://storyboard.openstack.org/#!/search?tags=blocking-storyboard-migration
For that reason, we are postponing this goal to a later cycle, but work
needs to keep going to make it happen one day.

* Remove mox

Champion: Sean McGinnis (unless someone else steps up)
https://review.openstack.org/#/c/532361/
This goal is to clean up some technical debt in the code.
It remains a good candidate for Rocky.

* Ensure pagination links

Champion: Monty Taylor
https://review.openstack.org/#/c/532627/
This one would improve the API user experience.
It remains a good candidate for Rocky.
* Enable mutable configuration Champion: ChangBo Guo Nothing was proposed in governance so far and we have enough proposals now, I guess it could be a candidate for a future cycle though. This one would make happy our operators. * Cold upgrades capabilities Champion: Masayuki Igawa https://review.openstack.org/#/c/533544/ This one would be appreciated by our operators who always need improvements on upgrades experience - I believe it would be a good candidate. Note: some projects requested about having less goals so they have more time to work on their backlogs. While I agree with that, I would like to know who asked exactly, and if they would be affected by the goals or not. It will help us to decide which ones we take. So now, it's really a good time to speak-up and say if: - your project could commit to 2 of these goals or not (and why? backlog? etc) - which ones you couldn't commit to - the ones you prefer We need to take a decision as a community, not just TC members, so please bring feedback. Thanks, On Fri, Jan 12, 2018 at 2:19 PM, Lance Bragstad wrote: > > > On 01/12/2018 11:09 AM, Tim Bell wrote: >> I was reading a tweet from Jean-Daniel and wondering if there would be an appropriate community goal regarding support of some of the later API versions or whether this would be more of a per-project goal. >> >> https://twitter.com/pilgrimstack/status/951860289141641217 >> >> Interesting numbers about customers tools used to talk to our @OpenStack APIs and the Keystone v3 compatibility: >> - 10% are not KeystoneV3 compatible >> - 16% are compatible >> - for the rest, the tools documentation has no info >> >> I think Keystone V3 and Glance V2 are the ones with APIs which have moved on significantly from the initial implementations and not all projects have been keeping up. > Yeah, I'm super interested in this, too. I'll be honest I'm not quite > sure where to start. If the tools are open source we can start > contributing to them directly. >> >> Tim >> >> -----Original Message----- >> From: Emilien Macchi >> Reply-To: "OpenStack Development Mailing List (not for usage questions)" >> Date: Friday, 12 January 2018 at 16:51 >> To: OpenStack Development Mailing List >> Subject: Re: [openstack-dev] [all] [tc] Community Goals for Rocky >> >> Here's a quick update before the weekend: >> >> 2 goals were proposed to governance: >> >> Remove mox >> https://review.openstack.org/#/c/532361/ >> Champion: Sean McGinnis (unless someone else steps up) >> >> Ensure pagination links >> https://review.openstack.org/#/c/532627/ >> Champion: Monty Taylor >> >> 2 more goals are about to be proposed: >> >> Enable mutable configuration >> Champion: ChangBo Guo >> >> Cold upgrades capabilities >> Champion: Masayuki Igawa >> >> >> Thanks everyone for your participation, >> We hope to make a vote within the next 2 weeks so we can prepare the >> PTG accordingly. >> >> On Tue, Jan 9, 2018 at 10:37 AM, Emilien Macchi wrote: >> > As promised, let's continue the discussion and move things forward. >> > >> > This morning Thierry brought the discussion during the TC office hour >> > (that I couldn't attend due to timezone): >> > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2018-01-09T09:18:33 >> > >> > Some outputs: >> > >> > - One goal has been proposed so far. >> > >> > Right now, we only have one goal proposal: Storyboard Migration. There >> > are some concerns about the ability to achieve this goal in 6 months. 
>> > At that point, we think it would be great to postpone the goal to S >> > cycle, continue the progress (kudos to Kendall) and fine other goals >> > for Rocky. >> > >> > >> > - We still have a good backlog of goals, we're just missing champions. >> > >> > https://etherpad.openstack.org/p/community-goals >> > >> > Chris brought up "pagination links in collection resources" in api-wg >> > guidelines theme. He said in the past this goal was more a "should" >> > than a "must". >> > Thierry mentioned privsep migration (done in Nova and Zun). (action, >> > ping mikal about it). >> > Thierry also brought up the version discovery (proposed by Monty). >> > Flavio proposed mutable configuration, which might be very useful for operators. >> > He also mentioned that IPv6 support goal shouldn't be that far from >> > done, but we're currently lacking in CI jobs that test IPv6 >> > deployments (question for infra/QA, can we maybe document the gap so >> > we can run some gate jobs on ipv6 ?) >> > (personal note on that one, since TripleO & Puppet OpenStack CI >> > already have IPv6 jobs, we can indeed be confident that it shouldn't >> > be that hard to complete this goal in 6 months, I guess the work needs >> > to happen in the projects layouts). >> > Another interesting goal proposed by Thierry, also useful for >> > operators, is to move more projects to assert:supports-upgrade tag. >> > Thierry said we are probably not that far from this goal, but the >> > major lack is in testing. >> > Finally, another "simple" goal is to remove mox/mox3 (Flavio said most >> > of projects don't use it anymore already). >> > >> > With that said, let's continue the discussion on these goals, see >> > which ones can be actionable and find champions. >> > >> > - Flavio asked how would it be perceived if one cycle wouldn't have at >> > least one community goal. >> > >> > Thierry said we could introduce multi-cycle goals (Storyboard might be >> > a good candidate). >> > Chris and Thierry thought that it would be a bad sign for our >> > community to not have community goals during a cycle, "loss of >> > momentum" eventually. >> > >> > >> > Thanks for reading so far, >> > >> > On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi wrote: >> >> On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi wrote: >> >> [...] >> >>> Suggestions are welcome: >> >>> - on the mailing-list, in a new thread per goal [all] [tc] Proposing >> >>> goal XYZ for Rocky >> >>> - on Gerrit in openstack/governance like Kendall did. >> >> >> >> Just a fresh reminder about Rocky goals. >> >> A few questions that we can ask ourselves: >> >> >> >> 1) What common challenges do we have? >> >> >> >> e.g. Some projects don't have mutable configuration or some projects >> >> aren't tested against IPv6 clouds, etc. >> >> >> >> 2) Who is willing to drive a community goal (a.k.a. Champion)? >> >> >> >> note: a Champion is someone who volunteer to drive the goal, but >> >> doesn't commit to write the code necessarily. The Champion will >> >> communicate with projects PTLs about the goal, and make the liaison if >> >> needed. >> >> >> >> The list of ideas for Community Goals is documented here: >> >> https://etherpad.openstack.org/p/community-goals >> >> >> >> Please be involved and propose some ideas, I'm sure our community has >> >> some common goals, right ? :-) >> >> Thanks, and happy holidays. I'll follow-up in January of next year. 
>> >> -- >> >> Emilien Macchi >> > >> > >> > >> > -- >> > Emilien Macchi >> >> >> >> -- >> Emilien Macchi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi From chkumar246 at gmail.com Tue Jan 16 16:35:28 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 16 Jan 2018 22:05:28 +0530 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky In-Reply-To: References: <397BB99F-D7B2-47B3-9724-E8B628EFD5C2@cern.ch> <76c4df1e-2e82-96c6-a983-36040855a42d@gmail.com> Message-ID: Hello Em On Tue, Jan 16, 2018 at 9:59 PM, Emilien Macchi wrote: > Here's an update so we can hopefully, as a community, take a decision > in the next days or so. > > > * Migration to StoryBoard > > Champion: Kendall Nelson > https://review.openstack.org/#/c/513875/ > Some projects already migrated, some projects will migrate soon but > there is still a gap of things that prevents some projects to not > migrate. > See https://storyboard.openstack.org/#!/search?tags=blocking-storyboard-migration > For that reason, we are postponing this goal to later but work needs > to keep going to make that happen one day. > > > * Remove mox > > Champion: Sean McGinnis (unless someone else steps up) > https://review.openstack.org/#/c/532361/ > This goal is to clean some technical debt in the code. > It remains a good candidate for Queens. > May I step up for this goal for Rocky release? I am currently involved with Tempest plugin split goal in Queens Release. I wanted to help on this one. Thanks, Chandan Kumar From akapoor87 at gmail.com Tue Jan 16 16:37:16 2018 From: akapoor87 at gmail.com (Akshay Kapoor) Date: Tue, 16 Jan 2018 22:07:16 +0530 Subject: [openstack-dev] Facing issues with Openstack subnet static routes Message-ID: Hello everyone, I am working on a task, requirement for which is to share services/VMs provisioned inside tenant X (say: network X-n subnet X-s) with another tenant Y(say: network Y-n and subnet Y-s). Did the following steps: 1) Shared the network X-n using 'openstack network rbac...' with tenant Y 2) Created a port X-p on Subnet X-s and added the port as an interface to the default router in tenant Y 3) Created a static route in subnet X-s to forward all traffic intended for CIDR range of tenant Y (Y-s) subnet to port X-p. VMs provisioned in X-s are not reachable from within tenant Y VMs. However, if I remove the static route from the subnet X-s static route settings and add the route to default router in tenant X instead, it works. Can someone please help with why this could happen ? Regards, Akshay -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean.mcginnis at gmx.com Tue Jan 16 16:40:09 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 16 Jan 2018 10:40:09 -0600 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky In-Reply-To: References: <397BB99F-D7B2-47B3-9724-E8B628EFD5C2@cern.ch> <76c4df1e-2e82-96c6-a983-36040855a42d@gmail.com> Message-ID: <20180116164008.GA25666@sm-xps> > > > Champion: Sean McGinnis (unless someone else steps up) > > https://review.openstack.org/#/c/532361/ > > This goal is to clean some technical debt in the code. > > It remains a good candidate for Queens. > > > > May I step up for this goal for Rocky release? > I am currently involved with Tempest plugin split goal in Queens > Release. I wanted to help on this one. > > Thanks, > > Chandan Kumar > Excellent, thanks Chandan. There are a few updates to do to the proposal that I've been waiting to do. I will probably make those updates in the next couple days, and when I do I will put you down as the champion. I plan to help see it through as well, but great to have someone like you working on this. Thanks a lot for stepping up and for the current work you've been doing on the tempest goal. Sean From emilien at redhat.com Tue Jan 16 16:42:29 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 16 Jan 2018 08:42:29 -0800 Subject: [openstack-dev] [tripleo] The Weekly Owl - 5th Edition Message-ID: Note: this is the fifth edition of a weekly update of what happens in TripleO, with a little touch of fun. The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126091.html +---------------------------------+ | General announcements | +---------------------------------+ +--> Focus is on Queens-m3 (next week): stabilization and getting RDO promotion before the milestone. +--> No new contributor this week. +--> The team should be planning for Rocky, and prepare the specs / blueprints if needed. +--> Storyboard is being evaluated, take a look how it would be! http://storyboard.macchi.pro:9000 +------------------------------+ | Continuous Integration | +------------------------------+ +--> Rover is Gabriele and ruck is Arx. Please let them know any new CI issue. +--> Master promotion is 11 days, Pike is 10 days and Ocata is 10 days. +--> The team is working hard to get a promotion asap. +--> Sprint 6 is still ongoing, major focus on TripleO CI data collection in grafana. +--> We're re-enabling voting on all scenarios on both pike & master (jobs are passing now). +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP +-------------+ | Upgrades | +-------------+ +--> As usual, reviews are needed on FFU, Backups, Upgrades to Pike & Queens; please check the etherpads +--> RDO-cloud upgrade retrospective was done, good feedback and will hopefully make our upgrades stronger. +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status and https://etherpad.openstack.org/p/tripleo-upgrade-squad-meeting +---------------+ | Containers | +---------------+ +--> Kubernetes: dealing with networking during OpenShift deployment. +--> Containerized undercloud: undercloud-passwords.conf is now generated to mimic instack-undercloud behavior. +--> Containerized overcloud: "container prepare" workflow looks good, need review now. 
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +--------------+ | Integration | +--------------+ +--> Some ongoing work to run Manila & Sahara tempest tests in the gate! +--> Need reviews on Manila/CephNFS. +--> Multiple Ceph clusters is still work in progress. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> Currently having CI issues with a javascript error possibly caused by prettier. +--> Working to upgrade npm to avoid future deps issues. +--> Enforcing NPM versions. +--> Roles and Network UI work continuing. +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> Configuring VFs in the overcloud for non-tenant networking use (for queens) +--> Fondation routed networks support +--> Octavia LBaaS configuration (for queens) +--> TLS support for ODL +--> OVN parity feature support (OVN Metadata just merged!) (for queens) +--> Configuration support for OVS SR-IOV offload feature (for queens) +--> Jinja2 rendered network templates (for queens?) +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> Preparing the PTG: https://etherpad.openstack.org/p/tripleo-workflows-squad-ptg +--> Work on API design & documentation +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-------------+ | Owl facts | +-------------+ The Golden Masked Owl is a relatively small barn owl with no ear-tufts. It is also known as the New Britain Masked Owl or New Britain Barn Owl. This one is uncommon to rare and vulnerable! You can find them on the island of New Britain in Papua New Guinea. (source: https://www.owlpages.com/owls/species.php?s=130) Stay tuned! -- Your fellow reporter, Emilien Macchi From fungi at yuggoth.org Tue Jan 16 16:51:07 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 16 Jan 2018 16:51:07 +0000 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> References: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> Message-ID: <20180116165106.q23ipxwbuoosanhx@yuggoth.org> On 2018-01-16 16:29:32 +0000 (+0000), Jeremy Stanley wrote: [...] > Thankfully, SB was designed from the very beginning in an API-first > manner with the WebUI merely one possible API client (there are also > other clients like the boartty console client and a . [...] Oops, to complete my thought there: ...and a CLI but that's seen less development activity since boartty came into being. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Tue Jan 16 16:56:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 16 Jan 2018 11:56:00 -0500 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> References: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> Message-ID: <1516121717-sup-3561@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-01-16 16:29:32 +0000: > On 2018-01-16 06:55:03 -0800 (-0800), Emilien Macchi wrote: > [...] 
> > I created a dev instance of storyboard and imported all bugs from > > TripleO so we could have a look at how it would be if we were using > > the tool: > [...] > > Awesome! We do also have a https://storyboard-dev.openstack.org/ > deployment we can do test migrations into if you'd prefer something > more central with which to play around. > > > - how do we deal milestones in stories and also how can we have a > > dashboard with an overview per milestone (useful for PTL + TripleO > > release managers). > > So far, the general suggestion for stuff like this is to settle on a > consistent set of story tags to apply. It really depends on whether > you're trying to track this at a story or task level (there is no > per-task tagging implemented yet at any rate). I could imagine, for > example, setting something like tripleo-r2 as a tag on stories whose > TripleO deliverable tasks are targeting Rocky milestone #2, and then > you could have an automatic board with stories matching that tag and > lanes based on the story status. That sounds like it might also be a useful way to approach the goal tracking. Can someone point me to an example of how to set up an automatic board like that? Doug From corvus at inaugust.com Tue Jan 16 17:01:01 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 16 Jan 2018 09:01:01 -0800 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: <743c66ea-07d4-45ba-8284-8122e25135a4@Spark> (Emmet Hikory's message of "Tue, 16 Jan 2018 10:31:48 -0500") References: <743c66ea-07d4-45ba-8284-8122e25135a4@Spark> Message-ID: <87zi5d4u2a.fsf@meyer.lemoncheese.net> Emmet Hikory writes: > Emilien Macchi wrote: > >> What we need to investigate: >> - how do we deal milestones in stories and also how can we have a >> dashboard with an overview per milestone (useful for PTL + TripleO >> release managers). > >     While the storyboard API supports milestones, they don’t work very > similarly to “milestones” in launchpad, so are probably confusing to > adopt (and have no UI support).  Some folk use tags for this (perhaps > with an automatic worklist that selects all the stories with the tag, > for overview). We're currently using tags like "zuulv3.0" and "zuulv3.1" to make this automatic board: https://storyboard.openstack.org/#!/board/53 -Jim From ramamani.yeleswarapu at intel.com Tue Jan 16 17:24:44 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Tue, 16 Jan 2018 17:24:44 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. This Week's Priorities (as of the weekly ironic meeting) ======================================================== 1. ironic-lib patches to finish before the freeze 1.1. fix waiting for partition: https://review.openstack.org/#/c/529325/ 2. Classic drivers deprecation 2.1. upgrade: Patch to be posted early this week 3. Traits 3.1. RPC https://review.openstack.org/#/c/532268/ 3.2. API https://review.openstack.org/#/c/532269/ 4. ironicclient version negotiation 4.1. expose negotiated latest: https://review.openstack.org/531029 4.2. accept list of versions: https://review.openstack.org/#/c/531271/ 5. Rescue: 5.1. RPC https://review.openstack.org/#/c/509336/ 5.2. network interface update: https://review.openstack.org/#/c/509342 6. 
Fix for non-x86 architectures: https://review.openstack.org/#/c/501799/ Vendor priorities ----------------- cisco-ucs: Patches in works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5 irmc: None oneview: Introduce hpOneView and ilorest to OneView - https://review.openstack.org/#/c/523943/ Subproject priorities --------------------- bifrost: (TheJulia): Fedora support fixes - https://review.openstack.org/#/c/471750/ ironic-inspector (or its client): (dtantsur) keystoneauth adapters https://review.openstack.org/#/c/515787/ networking-baremetal: neutron baremetal agent https://review.openstack.org/#/c/456235/ sushy and the redfish driver: (dtantsur) implement redfish sessions: https://review.openstack.org/#/c/471942/ Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - Stats (diff between 08 Jan 2018 and 15 Jan 2018) - Ironic: 216 bugs (-3) + 260 wishlist items. 1 new (-1), 156 in progress (-2), 0 critical, 33 high (-1) and 27 incomplete (-1) - Inspector: 14 bugs (-1) + 28 wishlist items. 0 new, 10 in progress, 0 critical, 2 high (-1) and 6 incomplete (+1) - Nova bugs with Ironic tag: 13. 1 new, 0 critical, 0 high - via http://dashboard-ironic.7e14.starter-us-west-2.openshiftapps.com/ - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ - (TheJulia) Currently WF-1, as revision is required for deprecation. 
- If provisioning network is changed, Ironic conductor does not behave correctly https://bugs.launchpad.net/ironic/+bug/1679260: Ironic conductor works correctly on changes of networks: https://review.openstack.org/#/c/462931/ - (rloo) needs some direction - may be fixed as part of https://review.openstack.org/#/c/460564/ - IPA may not find partition created by conductor https://bugs.launchpad.net/ironic-lib/+bug/1739421 - Fix proposed: https://review.openstack.org/#/c/529325/ CI refactoring and missing test coverage ---------------------------------------- - not considered a priority, it's a 'do it always' thing - Standalone CI tests (vsaienk0) - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/ - localboot with partitioned image patches: - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ - when previous are merged TODO (vsaienko) - Upload tinycore partitioned image to tarballs.openstack.org - Switch ironic to use tinyipa partitioned image by default - Missing test coverage (all) - portgroups and attach/detach tempest tests: https://review.openstack.org/382476 - adoption: https://review.openstack.org/#/c/344975/ - should probably be changed to use standalone tests - root device hints: TODO - node take over - resource classes integration tests: https://review.openstack.org/#/c/443628/ - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957) Essential Priorities ==================== Ironic client API version negotiation (TheJulia, dtantsur) ---------------------------------------------------------- - RFE https://bugs.launchpad.net/python-ironicclient/+bug/1671145 - Nova bug https://bugs.launchpad.net/nova/+bug/1739440 - gerrit topic: https://review.openstack.org/#/q/topic:bug/1671145 - status as of 15 Jan 2018: - Nova request was accepted as a bug for now: https://bugs.launchpad.net/nova/+bug/1739440 - we will upgrade it to a blueprint if it starts looking like a feature; no spec is probably needed - TODO: - easier access to versions in ironicclient - see https://etherpad.openstack.org/p/ironic-api-version-negotiation - discussion of various ways to implement it happened at the midcycle - dtantsur wants to have an API-SIG guideline on consuming versions in SDKs - ready for review https://review.openstack.org/532814 - patches for ironicclient by TheJulia: - expose negotiated latest: https://review.openstack.org/531029 - accept list of versions: https://review.openstack.org/#/c/531271/ - establish foundation for using version negotiation in nova External project authentication rework (pas-ha, TheJulia) --------------------------------------------------------- - gerrit topic: https://review.openstack.org/#/q/topic:bug/1699547 - status as of 15 Jan 2018: - Ironic Done - 1 inspector patch left - https://review.openstack.org/#/c/515786/ MERGED - https://review.openstack.org/#/c/515787 Classic drivers deprecation (dtantsur) -------------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html - status as of 15 Jan 2018: - dev documentation for hardware types: TODO - switch documentation to hardware types: - status https://etherpad.openstack.org/p/ironic-switch-to-hardware-types - admin guide update (minus vendor bits): https://review.openstack.org/#/c/528337/ MERGED - needs help from vendors updating their pages - migration of classic drivers to hardware types, in discussion...
- http://lists.openstack.org/pipermail/openstack-dev/2017-November/124509.html - spec update: https://review.openstack.org/#/c/528308/ MERGED Traits support planning (mgoddard, johnthetubaguy, dtantsur) ------------------------------------------------------------ - http://specs.openstack.org/openstack/ironic-specs/specs/approved/node-traits.html - Nova patches: https://review.openstack.org/#/q/topic:bp/ironic-driver-traits+(status:open+OR+status:merged) - status as of 8 Jan 2018: - deploy templates spec: https://review.openstack.org/504952 needs reviews - depends on deploy-steps spec: https://review.openstack.org/#/c/412523 - patches for traits API - https://review.openstack.org/#/c/528238/ - https://review.openstack.org/#/c/530723 (WIP) - johnthetubaguy is picking the ironic side of traits up now, mgoddard is taking a look at the nova virt driver side - If we don't land this code (at least the API) this week, it is highly unlikely the nova part will land before next week's FF. Reference architecture guide (dtantsur, sambetts) ------------------------------------------------- - status as of 15 Jan 2018: - dtantsur needs volunteers to help move this forward - list of cases from https://etherpad.openstack.org/p/ironic-queens-ptg-open-discussion - Admin-only provisioner - small and/or rare: TODO - large and/or frequent: TODO - Bare metal cloud for end users - smaller single-site: TODO - larger single-site: TODO - larger multi-site: TODO High Priorities =============== Neutron event processing (vdrok, vsaienk0, sambetts) ---------------------------------------------------- - status as of 27 Sep 2017: - spec at https://review.openstack.org/343684, ready for reviews, replies from authors - WIP code at https://review.openstack.org/440778 Routed network support (sambetts, vsaienk0, bfournie, hjensas) -------------------------------------------------------------- - status as of 15 Jan 2018: - hjensas has taken over as main contributor from sambetts - There are challenges with integration to Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment, adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates.
I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates: - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg - https://review.openstack.org/#/c/529135/ (Spec) - Patches: - https://review.openstack.org/456235 Add baremetal neutron agent - https://review.openstack.org/#/c/533707/ start_flag = True, only first time, or conf change - https://review.openstack.org/524709 Make the agent distributed using hashring and notifications (WIP) - https://review.openstack.org/521838 Switch from MechanismDriver to SimpleAgentMechanismDriverBase - https://review.openstack.org/#/c/532349/7 Add support to bind type vlan networks - CI Patches: - https://review.openstack.org/#/c/531275/ Devstack - use neutron segments (routed provider networks) - https://review.openstack.org/#/c/531637/ Wait for ironic-neutron-agent to report state - https://review.openstack.org/#/c/530117/ Devstack - Add ironic-neutron-agent - https://review.openstack.org/#/c/530409/ Add dsvm job Rescue mode (rloo, stendulker, aparnav) --------------------------------------- - Status as of 15 Jan 2018 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open - ironic side: - All patches are up-to-date, being actively reviewed and updated - Tempest tests based on standalone ironic are WIP. - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: approved for Queens; waiting for ironic part to be done first. Queens feature freeze is week of Jan 22. - To get the nova patch merged, we need: - release new python-ironicclient - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirements (this patch needs to be posted manually) - code patch: https://review.openstack.org/#/c/416487/ - If we don't land this code (at least the API) this week, it is highly unlikely the nova part will land before next week's FF. - CI is needed for nova part to land Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 9 Jan 2018: - patch https://review.openstack.org/524433 ready for reviews Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo) ------------------------------------------------------- - etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking - cleaning up/centralizing job descriptions (eg 'irrelevant-files'): DONE - Next TODO is to convert jobs on master to proper ansible. NOT a high priority though. - (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/ Graphical console interface (pas-ha, vdrok, rpioso) --------------------------------------------------- - status as of 8 Jan 2018: - spec on review: https://review.openstack.org/#/c/306074/ - there is a nova part here, which has to be approved too - dtantsur is worried by the absence of progress here - (TheJulia) I think for rocky, it might be worth making it a prime focus, or making it a background goal.
BIOS config framework (dtantsur, yolanda, rpioso) ------------------------------------------------- - status as of 8 Jan 2018: - spec under active review: https://review.openstack.org/#/c/496481/ Ansible deploy interface (pas-ha) --------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ansible-deploy-driver.html - status as of 15 Jan 2018: - code merged - TODO - CI job - https://review.openstack.org/529640 MERGED - https://review.openstack.org/#/c/529383/ MERGED - done? - docs: https://review.openstack.org/#/c/525501/ OpenStack Priorities ==================== Python 3.5 compatibility (Nisha, Ankit) --------------------------------------- - Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases - this includes all projects, not only ironic - please tag all reviews with topic "goal-python35" - TODO submit the python3 job for IPA - for ironic and ironic-inspector the job is enabled by disabling swift, as swift is still lacking py3.5 support. - anupn to update the python3 job to build tinyipa with python3 - (anupn): Talked with swift folks and there is a bug opened upstream https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not a priority for them. - Right now the patch passes all gate jobs except the agent_- drivers. - we need to make the ironic job voting eventually. But we need to check that nova, glance and neutron already have voting python 3 jobs, otherwise they may break us. - nova seems to have python 3 jobs voting, here are our patches: - ironic https://review.openstack.org/#/c/531398/ - ironic-inspector https://review.openstack.org/#/c/531400/ MERGED Deploying with Apache and WSGI in CI (pas-ha, vsaienk0) ------------------------------------------------------- - ironic is mostly finished - (pas-ha) needs to be rewritten for uWSGI, patches on review: - https://review.openstack.org/#/c/507011/ +A - https://review.openstack.org/#/c/507067 Needs revision - inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218 - may be delayed to after Queens, as the HA work seems to take a different direction Split away the tempest plugin (jlvillal) ---------------------------------------- - https://etherpad.openstack.org/p/ironic-tempest-plugin-migration - Current (8-Jan-2018) (jlvillal): All projects now using tempest plugin code from openstack/ironic-tempest-plugin - Need to remove plugin code from master branch of openstack/ironic and openstack/ironic-inspector - Plugin code will NOT be removed from the stable branches of openstack/ironic and openstack/ironic-inspector - (jlvillal) 3rd Party CI has had over 3 weeks to prepare for removal. We should now move forward - README, setup.cfg and docs cleanup: https://review.openstack.org/#/c/529538/ MERGED - ironic-tempest-plugin 1.0.0 released Subprojects =========== Inspector (dtantsur) -------------------- - trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :D https://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202 - follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up) Bifrost (TheJulia) ------------------ - It also seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. the `openstack` command does not work. - TheJulia will try to look at this this week.
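- A quick way to narrow this down is to check whether clouds.yaml parsing itself is at fault, independent of bifrost. A minimal sketch, assuming os-client-config is installed and that 'mycloud' is a placeholder for a cloud name actually defined in your clouds.yaml:

    # prints the keystoneauth1 auth plugin built from clouds.yaml;
    # a traceback here points at config parsing rather than at bifrost
    python -c "import os_client_config; print(os_client_config.OpenStackConfig().get_one_cloud('mycloud').get_auth())"

- If that one-liner fails with the same error as the `openstack` command, the regression is in the clouds.yaml/keystoneauth1 layer rather than in bifrost itself.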
Drivers: -------- DRAC (rpioso, dtantsur) ~~~~~~~~~~~~~~~~~~~~~~~ - Dell Ironic CI is being rebuilt, it's back and running now (10/17/2017) OneView (ricardoas, nicodemos, gmonteiro) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Re-submitting reverted patches for migration from python-oneviewclient to python-hpOneView + python-ilorest-library - Check weekly priorities for the most important patch to review Cisco UCS (sambetts) ~~~~~~~~~~~~~~~~~~~~ - Currently rebuilding third party CI from the ground up after it bit the dust - Patches for updating the UCS python SDKs are in the works and should be posted soon ......... Until next week, --Rama [0] https://etherpad.openstack.org/p/IronicWhiteBoard -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Jan 16 18:36:17 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 16 Jan 2018 18:36:17 +0000 Subject: [openstack-dev] PTL Election Season In-Reply-To: References: Message-ID: Thanks for the reminder Luke! I will update my patch to remove Security from the directories setup. -Kendall (diablo_rojo) On Mon, Jan 15, 2018 at 9:11 AM Luke Hinds wrote: > On Mon, Jan 15, 2018 at 5:04 PM, Kendall Nelson > wrote: > >> Election details: https://governance.openstack.org/election/ >> >> Please read the stipulations and timelines for candidates and electorate >> contained in this governance documentation. >> >> Be aware, in the PTL elections if the program only has one candidate, >> that candidate is acclaimed and there will be no poll. There will only be a >> poll if there is more than one candidate stepping forward for a program's >> PTL position. >> >> There will be further announcements posted to the mailing list as action >> is required from the electorate or candidates. This email is for >> information purposes only. >> >> If you have any questions which you feel affect others, please reply to >> this email thread. >> >> If you have any questions that you wish to discuss in private, please >> email any of the election judges[1] so that we may address your concerns. >> >> Thank you, >> >> -Kendall Nelson (diablo_rojo) >> >> [1] https://governance.openstack.org/election/#election-officials >> > > Keep in mind there will be no Security PTL election for rocky as we will > be changing to a SIG and will no longer be a project. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Tue Jan 16 19:00:02 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 16 Jan 2018 13:00:02 -0600 Subject: [openstack-dev] [keystone] adding Gage Hugo to keystone core Message-ID: Hey folks, In today's keystone meeting we made the announcement to add Gage Hugo (gagehugo) as a keystone core reviewer [0]! Gage has been actively involved in keystone over the last several cycles. Not only does he provide thorough reviews, but he's really stepped up to help move the project forward by keeping a handle on bugs, fielding questions in the channel, and being diligent about documentation (especially during in-person meet ups). Thanks for all the hard work, Gage!
[0] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-01-16-18.00.log.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From cdent+os at anticdent.org Tue Jan 16 19:00:39 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 16 Jan 2018 19:00:39 +0000 (GMT) Subject: [openstack-dev] [tc] [all] TC Report 18-03 Message-ID: (Blog version: ) If a common theme exists in TC activity in the past week, it is the cloudy nature of leadership and governance and how this relates to what the TC should be doing, as a body, and how TC members, as individuals, should identify what they are doing ("I'm doing this with my TC hat on", "I am not doing this with my TC hat on"). It's a bit of a strange business, to me, because I think much of what a TC member can do is related to the relative freedom being elected allows them to achieve. I feel I can budget the time to write this newsletter because I'm a TC member, but I would be doing a _bad thing_ if I declared that this document was an official utterance of OpenStack governance™. Other TC members probably have a much different experience. ## Entropy and Governance The theme started with [a discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-11.log.html#t2018-01-11T13:27:51) about driving some cleanup of stale repos on and whether that was an activity that should be associated with the TC role. It is clear there are some conflicts: * Because many repositories on `git.openstack.org` are not _official_ OpenStack projects, it would be inappropriate to manage them out of existence. In this case, using OpenStack infra does not indicate volunteering oneself to be governed by the TC. Only being _official_ does that. * On the other hand, if being on the TC represents a kind of leadership and presents a form of freedom-to-do, then such cleanups represent an opportunity to, as [Sean put it](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-11.log.html#t2018-01-11T13:37:29), improve things: "Governed or not, I care about OpenStack and would like to see it not weighed down by entropy." In some sense, the role of the TC is to exercise that caring for OpenStack and what that caring is is context-dependent. These issues are further complicated by the changing shape of the OpenStack Foundation where there will be things which are officially part of the Foundation (such as Kata), and may use OpenStack infra, but have little to no relationship with the TC. Expect this to get more complicated before it gets less. That was before office-hours. By the time office hours started, the conversation abstracted (as it often does) into more of a discussion about the [role of the TC](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-11.log.html#t2018-01-11T15:21:03) with me saying: > What I'm upset (mildly) about is our continued effort to sort [of] > _not_ have the TC fill the leadership void that I think exists in > OpenStack. The details of this particular case are a stimulus for > that conversation, but not necessar[il]y relevant. (I did get a bit cranky in the discussion, my apologies to those who were there. This is one of the issues that I'm most passionate about in OpenStack and I let myself run away a bit.
My personal feeling has always been that we need an activist _and_ responsive TC if we expect to steward an environment that improves and adapts to change.) The log is worth reading if this is a topic of interest to you. We delineated some problems that have been left on the floor in the past, some meta-problems with how we identify problems, and even had some agreement on things to try. ## OpenStack-wide Goals Mixed in with the above discussions—and a good example of where the TC does provide some coordination and leadership to help guide all the boats in a similar direction—were efforts to establish sufficient proposals for [OpenStack-wide goals](https://governance.openstack.org/tc/goals/index.html) to make a fair choice. There are now four reviews pending: * [Pagination Links](https://review.openstack.org/#/c/532627/) * [Asserting Cold Upgrade Capabilities](https://review.openstack.org/#/c/533544/) * [Migrating to StoryBoard](https://review.openstack.org/#/c/513875/) * [Removing mox](https://review.openstack.org/#/c/532361/) It's quite likely that the StoryBoard goal will move to later, to get increased experience with it (such as by using it for [tracking rocky goals](http://lists.openstack.org/pipermail/openstack-dev/2018-January/126189.html)). That leaves the other three. They provide a nice balance between improving the user experience, improving the operator experience, and dealing with some technical debt. If you have thoughts on these goals you should comment on the reviews. There is also a [mailing list thread](http://lists.openstack.org/pipermail/openstack-dev/2018-January/126090.html) in progress. Late in the day today, there was discussion of perhaps [limiting the number of goals](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-16.log.html#t2018-01-16T13:05:31). Some projects are still trying to complete queens goals and were delayed for various reasons, including greater than expected time required to adapt to zuulv3. ## Interop Testing Colleen provided a useful [summary](http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html) of the situation with the [location of tests for the interop program](https://review.openstack.org/#/c/521602). This discussion ground to a bit of a halt but needs to be resolved. ## Project Boundaries in Expanded Foundation [Qinling has applied to be official](https://review.openstack.org/#/c/533827/). It is a project to do function as a service. This caused [some conversation](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-16.log.html#t2018-01-16T09:27:32) this morning on what impact the expansion of the Foundation will have on the evaluation of candidate projects. ## S Cycle Voting Also [this morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-16.log.html#t2018-01-16T09:01:37) Thierry started the process of making it official that the naming poll for the S cycle will [be public](https://review.openstack.org/#/c/534226/). If you have reason to believe this is a bad idea, please comment on the review. -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From lbragstad at gmail.com Tue Jan 16 19:00:49 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 16 Jan 2018 13:00:49 -0600 Subject: [openstack-dev] [keystone] core adjustments Message-ID: <46a12de7-cb0b-e4fa-bc8b-11667f112f27@gmail.com> Hey all, I've been in touch with a few folks from our core team that are no longer as active as they used to be. 
We've made a mutual decision to remove Steve Martinelli and Brant Knudson from keystone core and Brad Topol from keystone specification core. I'd like to express my gratitude for all the work they have done to make keystone better. It's been an absolute pleasure working with each of them and if they do see their involvement in keystone increase, we can expedite their path to core if they choose to pursue it. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From emilien at redhat.com Tue Jan 16 19:23:21 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 16 Jan 2018 11:23:21 -0800 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> References: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> Message-ID: On Tue, Jan 16, 2018 at 8:29 AM, Jeremy Stanley wrote: > On 2018-01-16 06:55:03 -0800 (-0800), Emilien Macchi wrote: > [...] >> I created a dev instance of storyboard and imported all bugs from >> TripleO so we could have a look at how it would be if we were using >> the tool: > [...] > > Awesome! We do also have a https://storyboard-dev.openstack.org/ > deployment we can do test migrations into if you'd prefer something > more central with which to play around. well - I wanted root access to hack myself. If I need the -dev instance I'll ask ok. > >> - how do we deal milestones in stories and also how can we have a >> dashboard with an overview per milestone (useful for PTL + TripleO >> release managers). > > So far, the general suggestion for stuff like this is to settle on a > consistent set of story tags to apply. It really depends on whether > you're trying to track this at a story or task level (there is no > per-task tagging implemented yet at any rate). I could imagine, for > example, setting something like tripleo-r2 as a tag on stories whose > TripleO deliverable tasks are targeting Rocky milestone #2, and then > you could have an automatic board with stories matching that tag and > lanes based on the story status. Does this kind of board exist already? Something like https://launchpad.net/tripleo/+milestone/queens-3 maybe. If the answer is "no but we can do it", fine but keep in mind this is a blocker for us now. I created a story: https://storyboard.openstack.org/#!/story/2001479 >> - how to update old Launchpad bugs with the new link in storyboard >> (probably by hacking the migration script during the import). > > We've debated this... unfortunately mass bug updates are a challenge > with LP due to the slowness and instability of its API. We might be > able to get away with leaving a comment on each open LP bug for a > project with a link to its corresponding story, but it would take a > long time and may need many retries for some bugs with large numbers > of subscribers. Switching the status of all the LP bugtasks en masse > is almost guaranteed to be a dead end since bugtask status changes > trigger API timeouts far more often based on our prior experience > with LP integration, though I suppose we could just live with the > idea that some of them might be uncloseable and ignore that > fraction. If the migration script is to do any of this, it will also > need to be extended to support LP authentication (since it currently > only performs anonymous queries it doesn't need to authenticate). 
> Further, that tool is currently designed to support being rerun > against the same set of projects for iterative imports in the case > of failure or to pick up newer comments/bugs so would need to know > to filter out its own comments for purposes of sanity. I'm fine with not updating closed bugs, but we should update ongoing bugs before closing them. We don't want to leave our users in a situation where their bugs are closed and they don't know what to do. Not everyone is reading this mailing-list, and having a nice message posted on Launchpad will be a requirement. I created a story: https://storyboard.openstack.org/#!/story/2001480 >> - how do we want the import to be like: >> * all bugs into a single project? > > Remember that the model for StoryBoard is like LP in that stories > (analogous to bugs in LP) are themselves projectless. It's their > tasks (similar to bugtasks in LP) which map to specific projects and > a story can have tasks related to multiple projects. In our > deployment of SB we create an SB project for each Git repository so > over time you would expect the distribution of tasks to cover many > "projects" (repositories) maintained by your team. The piece you may > be missing here is that you can also define SB projects as belonging > to one or more project groups, and in most cases by convention > we've defined groups corresponding to official project teams (the > governance concept of a "project") for ease of management. ack >> * closing launchpad/tripleo/bugs access? if so we lose web search >> on popular bugs > > They don't disappear from bugs.launchpad.net, and in fact you can't > really even prevent people from updating those bugs or adding > bugtasks for your project to other bug reports. What you have > control over is disabling the ability to file new bugs and list > existing bugs from your project page in LP. I would also recommend > updating the project description on LP to prominently feature the > URL to a closely corresponding project or group in SB. ack > Separately, I notice https://storyboard.openstack.org/robots.txt has > been disallowing indexing by search engines... I think this is > probably an oversight we should correct ASAP and I've just now added > it to the agenda to discuss at today's Infra team meeting. It would be great. >> - (more a todo) update the elastic-recheck bugs > > This should hopefully be more of a (trivial?) feature add to ER, > since the imported stories keep the same story numbers as the bugs > from which they originated. Yeah, trivial but worth noting (more for our todo). >> - investigate our TripleO Alerts (probably will have to use Storyboard >> API instead of Launchpad). > [...] > > Thankfully, SB was designed from the very beginning in an API-first > manner with the WebUI merely one possible API client (there are also > other clients like the boartty console client and a . In theory > pretty much anything you can do through the WebUI can also be done > through the API, as opposed to LP where the API is sort of > bolted-on. I'm not concerned about that one, the API will indeed help us here. Thanks Jeremy for your help, -- Emilien Macchi From rmascena at redhat.com Tue Jan 16 19:24:35 2018 From: rmascena at redhat.com (Raildo Mascena de Sousa Filho) Date: Tue, 16 Jan 2018 19:24:35 +0000 Subject: [openstack-dev] [keystone] adding Gage Hugo to keystone core In-Reply-To: References: Message-ID: +1 Congrats Gage, very well deserved!
Cheers, On Tue, Jan 16, 2018 at 4:02 PM Lance Bragstad wrote: > Hey folks, > > In today's keystone meeting we made the announcement to add Gage Hugo > (gagehugo) as a keystone core reviewer [0]! Gage has been actively > involved in keystone over the last several cycles. Not only does he > provide thorough reviews, but he's really stepped up to help move the > project forward by keeping a handle on bugs, fielding questions in the > channel, and being diligent about documentation (especially during > in-person meet ups). > > Thanks for all the hard work, Gage! > > [0] > > http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-01-16-18.00.log.html > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Raildo mascena Software Engineer, Identity Managment Red Hat TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Jan 16 19:26:51 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 16 Jan 2018 11:26:51 -0800 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: <87zi5d4u2a.fsf@meyer.lemoncheese.net> References: <743c66ea-07d4-45ba-8284-8122e25135a4@Spark> <87zi5d4u2a.fsf@meyer.lemoncheese.net> Message-ID: On Tue, Jan 16, 2018 at 9:01 AM, James E. Blair wrote: [...] > We're currently using tags like "zuulv3.0" and "zuulv3.1" to make this > automatic board: > > https://storyboard.openstack.org/#!/board/53 Yeah I guess using worklists & tags would probably be our best bet now. Thanks, -- Emilien Macchi From hrybacki at redhat.com Tue Jan 16 21:16:09 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Tue, 16 Jan 2018 16:16:09 -0500 Subject: [openstack-dev] [keystone] adding Gage Hugo to keystone core In-Reply-To: References: Message-ID: +100 -- congratulations, Gage! On Tue, Jan 16, 2018 at 2:24 PM, Raildo Mascena de Sousa Filho < rmascena at redhat.com> wrote: > +1 > > Congrats Gage, very well deserved! > > Cheers, > > On Tue, Jan 16, 2018 at 4:02 PM Lance Bragstad > wrote: > >> Hey folks, >> >> In today's keystone meeting we made the announcement to add Gage Hugo >> (gagehugo) as a keystone core reviewer [0]! Gage has been actively >> involved in keystone over the last several cycles. Not only does he >> provide thorough reviews, but he's really stepped up to help move the >> project forward by keeping a handle on bugs, fielding questions in the >> channel, and being diligent about documentation (especially during >> in-person meet ups). >> >> Thanks for all the hard work, Gage! >> >> [0] >> http://eavesdrop.openstack.org/meetings/keystone/2018/ >> keystone.2018-01-16-18.00.log.html >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > > Raildo mascena > > Software Engineer, Identity Managment > > Red Hat > > > > TRIED. TESTED. TRUSTED. 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Tue Jan 16 21:24:12 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 16 Jan 2018 13:24:12 -0800 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata Message-ID: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> Hello Stackers, This is a heads up to any of you using the AggregateCoreFilter, AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler. These filters have effectively allowed operators to set overcommit ratios per aggregate rather than per compute node in <= Newton. Beginning in Ocata, there is a behavior change where aggregate-based overcommit ratios will no longer be honored during scheduling. Instead, overcommit values must be set on a per compute node basis in nova.conf. Details: as of Ocata, instead of considering all compute nodes at the start of scheduler filtering, an optimization has been added to query resource capacity from placement and prune the compute node list with the result *before* any filters are applied. Placement tracks resource capacity and usage and does *not* track aggregate metadata [1]. Because of this, placement cannot consider aggregate-based overcommit and will exclude compute nodes that do not have capacity based on per compute node overcommit. How to prepare: if you have been relying on per aggregate overcommit, during your upgrade to Ocata, you must change to using per compute node overcommit ratios in order for your scheduling behavior to stay consistent. Otherwise, you may notice increased NoValidHost scheduling failures as the aggregate-based overcommit is no longer being considered. You can safely remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter from your enabled_filters and you do not need to replace them with any other core/ram/disk filters. The placement query takes care of the core/ram/disk filtering instead, so CoreFilter, RamFilter, and DiskFilter are redundant. Thanks, -melanie [1] Placement has been a clean slate for resource management and prior to placement, there were conflicts between the different methods for setting overcommit ratios that were never addressed, such as, "which value to take if a compute node has overcommit set AND the aggregate has it set? Which takes precedence?" And, "if a compute node is in more than one aggregate, which overcommit value should be taken?" So, the ambiguities were not something that was desirable to bring forward into placement. From doug at doughellmann.com Tue Jan 16 21:32:48 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 16 Jan 2018 16:32:48 -0500 Subject: [openstack-dev] [tc][ptl][goals][storyboard] tracking the rocky goals with storyboard In-Reply-To: <1515789340-sup-6629@lrrr.local> References: <1515789340-sup-6629@lrrr.local> Message-ID: <1516138313-sup-951@lrrr.local> Excerpts from Doug Hellmann's message of 2018-01-12 15:37:42 -0500: > Since we are discussing goals for the Rocky cycle, I would like to > propose a change to the way we track progress on the goals.
> > We've started to see lots and lots of changes to the goal documents, > more than anticipated when we designed the system originally. That > leads to code review churn within the governance repo, and it means > the goal champions have to wait for the TC to review changes before > they have complete tracking information published somewhere. We've > talked about moving the tracking out of git and using an etherpad > or a wiki page, but I propose that we use storyboard. > > Specifically, I think we should create 1 story for each goal, and > one task for each project within the goal. We can then use a board > to track progress, with lanes like "New", "Acknowledged", "In > Progress", "Completed", and "Not Applicable". It would be the > responsibility of the goal champion to create the board, story, and > tasks and provide links to the board and story in the goal document > (so we only need 1 edit after the goal is approved). From that point > on, teams and goal champions could collaborate on keeping the board > up to date. > > Not all projects are registered in storyboard, yet. Since that > migration is itself a goal under discussion, I think for now we can > just associate all tasks with the governance repository. > > It doesn't look like changes to a board trigger any sort of > notifications for the tasks or stories involved, but that's probably > OK. If we really want notifications we can look at adding them as > a feature of Storyboard at the board level. > > How does this sound as an approach? Does anyone have any reservations > about using storyboard this way? > > Doug Since the feedback has been positive, I wrote up the policy changes to go along with this. Please continue any discussion of the idea over there. https://review.openstack.org/534443 Doug From emilien at redhat.com Tue Jan 16 23:12:26 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 16 Jan 2018 15:12:26 -0800 Subject: [openstack-dev] [tripleo] Rocky PTG planning Message-ID: Hey, I kicked-off an etherpad for PTG planning: https://etherpad.openstack.org/p/tripleo-ptg-rocky It's basic now but feel free to add your ideas of topics. We'll work on the agenda during the following weeks. Thanks! -- Emilien Macchi From corvus at inaugust.com Tue Jan 16 23:39:21 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 16 Jan 2018 15:39:21 -0800 Subject: [openstack-dev] Merging feature/zuulv3 into master Message-ID: <87po69tlue.fsf@meyer.lemoncheese.net> Hi, On Thursday, January 18, 2018, we will merge the feature/zuulv3 branches of both Zuul and Nodepool into master. If you continuously deploy Zuul or Nodepool from master, you should make sure you are prepared for this. The current version of the single_node_ci pattern in puppet-openstackci should, by default, install the latest released versions of Zuul and Nodepool. However, if you are running Zuul continuously deployed from a version of puppet-openstackci which is not continuously deployed, or using some other method, you may find that your system has automatically been upgraded if you have not taken action before the branch is merged. Regardless of how you deploy Zuul, if you find that your system has been upgraded, simply re-install the most current releases of Zuul and Nodepool, either from PyPI or from a git tag. They are: Nodepool: 0.5.0 Zuul: 2.6.0 Note that the final version of Zuul v3 has not been released yet. We hope to do so soon, but until we do, our recommendation is to continue using the current releases. 
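For example, pinning back to those releases could look something like this (a minimal sketch, assuming a pip-based install; the checkout path /opt/zuul is just a placeholder):

    # reinstall the current releases from PyPI
    pip install 'nodepool==0.5.0' 'zuul==2.6.0'

    # or, from an existing git checkout, install from the release tag
    git -C /opt/zuul checkout 2.6.0
    pip install /opt/zuul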
Finally, if you find this message relevant, please subscribe to the new zuul-announce at lists.zuul-ci.org mailing list: http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-announce Thanks, Jim From zhengzhenyulixi at gmail.com Wed Jan 17 01:19:50 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Wed, 17 Jan 2018 09:19:50 +0800 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> Message-ID: Thanks for the info, so it seems we are not going to implement aggregate overcommit ratio in placement at least in the near future? On Wed, Jan 17, 2018 at 5:24 AM, melanie witt wrote: > Hello Stackers, > > This is a heads up to any of you using the AggregateCoreFilter, > AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler. > These filters have effectively allowed operators to set overcommit ratios > per aggregate rather than per compute node in <= Newton. > > Beginning in Ocata, there is a behavior change where aggregate-based > overcommit ratios will no longer be honored during scheduling. Instead, > overcommit values must be set on a per compute node basis in nova.conf. > > Details: as of Ocata, instead of considering all compute nodes at the > start of scheduler filtering, an optimization has been added to query > resource capacity from placement and prune the compute node list with the > result *before* any filters are applied. Placement tracks resource capacity > and usage and does *not* track aggregate metadata [1]. Because of this, > placement cannot consider aggregate-based overcommit and will exclude > compute nodes that do not have capacity based on per compute node > overcommit. > > How to prepare: if you have been relying on per aggregate overcommit, > during your upgrade to Ocata, you must change to using per compute node > overcommit ratios in order for your scheduling behavior to stay consistent. > Otherwise, you may notice increased NoValidHost scheduling failures as the > aggregate-based overcommit is no longer being considered. You can safely > remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter > from your enabled_filters and you do not need to replace them with any > other core/ram/disk filters. The placement query takes care of the > core/ram/disk filtering instead, so CoreFilter, RamFilter, and DiskFilter > are redundant. > > Thanks, > -melanie > > [1] Placement has been a new slate for resource management and prior to > placement, there were conflicts between the different methods for setting > overcommit ratios that were never addressed, such as, "which value to take > if a compute node has overcommit set AND the aggregate has it set? Which > takes precedence?" And, "if a compute node is in more than one aggregate, > which overcommit value should be taken?" So, the ambiguities were not > something that was desirable to bring forward into placement. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ed at leafe.com Wed Jan 17 02:46:51 2018 From: ed at leafe.com (Ed Leafe) Date: Tue, 16 Jan 2018 20:46:51 -0600 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> Message-ID: On Jan 16, 2018, at 7:21 PM, Zhenyu Zheng wrote: > Thanks for the info, so it seems we are not going to implement aggregate overcommit ratio in placement at least in the near future? I would go so far as to say that we are not going to implement aggregate overcommit ratio in placement at all. Placement has the concept of a Resource Provider as its base unit, and aggregates really don't fit in this model at all. If you need that sort of grouping, perhaps a tool that would assign a single ratio to all the members of an aggregate would be a good way to convert to the new paradigm. -- Ed Leafe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From glongwave at gmail.com Wed Jan 17 03:34:10 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Wed, 17 Jan 2018 11:34:10 +0800 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky In-Reply-To: References: <397BB99F-D7B2-47B3-9724-E8B628EFD5C2@cern.ch> <76c4df1e-2e82-96c6-a983-36040855a42d@gmail.com> Message-ID: 2018-01-17 0:29 GMT+08:00 Emilien Macchi : > Here's an update so we can hopefully, as a community, take a decision > in the next days or so. > > > * Migration to StoryBoard > > Champion: Kendall Nelson > https://review.openstack.org/#/c/513875/ > Some projects already migrated, some projects will migrate soon but > there is still a gap of things that prevents some projects to not > migrate. > See https://storyboard.openstack.org/#!/search?tags=blocking-storyboard-migration > For that reason, we are postponing this goal to later but work needs > to keep going to make that happen one day. > > > * Remove mox > > Champion: Sean McGinnis (unless someone else steps up) > https://review.openstack.org/#/c/532361/ > This goal is to clean some technical debt in the code. > It remains a good candidate for Queens. > > > * Ensure pagination links > > Champion: Monty Taylor > https://review.openstack.org/#/c/532627/ > This one would improve API users experience. > It remains a good candidate for Queens. > > > * Enable mutable configuration > Champion: ChangBo Guo > Nothing was proposed in governance so far and we have enough proposals > now, I guess it could be a candidate for a future cycle though. This > one would make happy our operators. > > This is the review in governance https://review.openstack.org/534605 This change really benefits users; hope this can be finished in Rocky. > > * Cold upgrades capabilities > Champion: Masayuki Igawa > https://review.openstack.org/#/c/533544/ > This one would be appreciated by our operators who always need > improvements on upgrades experience - I believe it would be a good > candidate. > > > Note: some projects requested about having less goals so they have > more time to work on their backlogs. While I agree with that, I would > like to know who asked exactly, and if they would be affected by the > goals or not. > It will help us to decide which ones we take. > > So now, it's really a good time to speak-up and say if: > - your project could commit to 2 of these goals or not (and why? backlog?
> etc) > - which ones you couldn't commit to > - the ones you prefer > > We need to take a decision as a community, not just TC members, so > please bring feedback. > > Thanks, > > > On Fri, Jan 12, 2018 at 2:19 PM, Lance Bragstad > wrote: > > > > > > On 01/12/2018 11:09 AM, Tim Bell wrote: > >> I was reading a tweet from Jean-Daniel and wondering if there would be > an appropriate community goal regarding support of some of the later API > versions or whether this would be more of a per-project goal. > >> > >> https://twitter.com/pilgrimstack/status/951860289141641217 > >> > >> Interesting numbers about customers tools used to talk to our > @OpenStack APIs and the Keystone v3 compatibility: > >> - 10% are not KeystoneV3 compatible > >> - 16% are compatible > >> - for the rest, the tools documentation has no info > >> > >> I think Keystone V3 and Glance V2 are the ones with APIs which have > moved on significantly from the initial implementations and not all > projects have been keeping up. > > Yeah, I'm super interested in this, too. I'll be honest I'm not quite > > sure where to start. If the tools are open source we can start > > contributing to them directly. > >> > >> Tim > >> > >> -----Original Message----- > >> From: Emilien Macchi > >> Reply-To: "OpenStack Development Mailing List (not for usage > questions)" > >> Date: Friday, 12 January 2018 at 16:51 > >> To: OpenStack Development Mailing List openstack.org> > >> Subject: Re: [openstack-dev] [all] [tc] Community Goals for Rocky > >> > >> Here's a quick update before the weekend: > >> > >> 2 goals were proposed to governance: > >> > >> Remove mox > >> https://review.openstack.org/#/c/532361/ > >> Champion: Sean McGinnis (unless someone else steps up) > >> > >> Ensure pagination links > >> https://review.openstack.org/#/c/532627/ > >> Champion: Monty Taylor > >> > >> 2 more goals are about to be proposed: > >> > >> Enable mutable configuration > >> Champion: ChangBo Guo > >> > >> Cold upgrades capabilities > >> Champion: Masayuki Igawa > >> > >> > >> Thanks everyone for your participation, > >> We hope to make a vote within the next 2 weeks so we can prepare the > >> PTG accordingly. > >> > >> On Tue, Jan 9, 2018 at 10:37 AM, Emilien Macchi > wrote: > >> > As promised, let's continue the discussion and move things > forward. > >> > > >> > This morning Thierry brought the discussion during the TC office > hour > >> > (that I couldn't attend due to timezone): > >> > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ > latest.log.html#t2018-01-09T09:18:33 > >> > > >> > Some outputs: > >> > > >> > - One goal has been proposed so far. > >> > > >> > Right now, we only have one goal proposal: Storyboard Migration. > There > >> > are some concerns about the ability to achieve this goal in 6 > months. > >> > At that point, we think it would be great to postpone the goal to > S > >> > cycle, continue the progress (kudos to Kendall) and fine other > goals > >> > for Rocky. > >> > > >> > > >> > - We still have a good backlog of goals, we're just missing > champions. > >> > > >> > https://etherpad.openstack.org/p/community-goals > >> > > >> > Chris brought up "pagination links in collection resources" in > api-wg > >> > guidelines theme. He said in the past this goal was more a > "should" > >> > than a "must". > >> > Thierry mentioned privsep migration (done in Nova and Zun). > (action, > >> > ping mikal about it). > >> > Thierry also brought up the version discovery (proposed by Monty). 
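For reference, the local pre-review checks mentioned above boil down to something like this (a minimal sketch, assuming tox is installed and the commands are run from the root of the os-vif repository):

    # style/lint checks, the same ones the gate runs
    tox -e pep8

    # unit tests under python 2.7
    tox -e py27

Running both before pushing a new patch set catches these errors locally instead of waiting for the gerrit check results.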
> >> > Flavio proposed mutable configuration, which might be very useful > for operators. > >> > He also mentioned that IPv6 support goal shouldn't be that far > from > >> > done, but we're currently lacking in CI jobs that test IPv6 > >> > deployments (question for infra/QA, can we maybe document the gap > so > >> > we can run some gate jobs on ipv6 ?) > >> > (personal note on that one, since TripleO & Puppet OpenStack CI > >> > already have IPv6 jobs, we can indeed be confident that it > shouldn't > >> > be that hard to complete this goal in 6 months, I guess the work > needs > >> > to happen in the projects layouts). > >> > Another interesting goal proposed by Thierry, also useful for > >> > operators, is to move more projects to assert:supports-upgrade > tag. > >> > Thierry said we are probably not that far from this goal, but the > >> > major lack is in testing. > >> > Finally, another "simple" goal is to remove mox/mox3 (Flavio said > most > >> > of projects don't use it anymore already). > >> > > >> > With that said, let's continue the discussion on these goals, see > >> > which ones can be actionable and find champions. > >> > > >> > - Flavio asked how would it be perceived if one cycle wouldn't > have at > >> > least one community goal. > >> > > >> > Thierry said we could introduce multi-cycle goals (Storyboard > might be > >> > a good candidate). > >> > Chris and Thierry thought that it would be a bad sign for our > >> > community to not have community goals during a cycle, "loss of > >> > momentum" eventually. > >> > > >> > > >> > Thanks for reading so far, > >> > > >> > On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi < > emilien at redhat.com> wrote: > >> >> On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi < > emilien at redhat.com> wrote: > >> >> [...] > >> >>> Suggestions are welcome: > >> >>> - on the mailing-list, in a new thread per goal [all] [tc] > Proposing > >> >>> goal XYZ for Rocky > >> >>> - on Gerrit in openstack/governance like Kendall did. > >> >> > >> >> Just a fresh reminder about Rocky goals. > >> >> A few questions that we can ask ourselves: > >> >> > >> >> 1) What common challenges do we have? > >> >> > >> >> e.g. Some projects don't have mutable configuration or some > projects > >> >> aren't tested against IPv6 clouds, etc. > >> >> > >> >> 2) Who is willing to drive a community goal (a.k.a. Champion)? > >> >> > >> >> note: a Champion is someone who volunteer to drive the goal, but > >> >> doesn't commit to write the code necessarily. The Champion will > >> >> communicate with projects PTLs about the goal, and make the > liaison if > >> >> needed. > >> >> > >> >> The list of ideas for Community Goals is documented here: > >> >> https://etherpad.openstack.org/p/community-goals > >> >> > >> >> Please be involved and propose some ideas, I'm sure our > community has > >> >> some common goals, right ? :-) > >> >> Thanks, and happy holidays. I'll follow-up in January of next > year. 
> >> >> --
> >> >> Emilien Macchi
> >> >
> >> > --
> >> > Emilien Macchi
> >>
> >> --
> >> Emilien Macchi

--
ChangBo Guo(gcb)
Community Director @EasyStack

From sriharsha.basavapatna at broadcom.com Wed Jan 17 04:14:00 2018
From: sriharsha.basavapatna at broadcom.com (Sriharsha Basavapatna)
Date: Wed, 17 Jan 2018 09:44:00 +0530
Subject: [openstack-dev] [os-vif]
Message-ID:

On Wed, Jan 10, 2018 at 11:24 AM, Sriharsha Basavapatna < sriharsha.basavapatna at broadcom.com> wrote:
> On Tue, Jan 9, 2018 at 10:46 PM, Stephen Finucane wrote:
> > On Tue, 2018-01-09 at 16:30 +0530, Sriharsha Basavapatna wrote:
> >> On Tue, Jan 9, 2018 at 2:20 PM, Sriharsha Basavapatna wrote:
> >> > Hi Andreas,
> >> >
> >> > On Tue, Jan 9, 2018 at 12:04 PM, Andreas Jaeger wrote:
> >> > > On 2018-01-09 07:00, Sriharsha Basavapatna wrote:
> >> > > > Hi,
> >> > > >
> >> > > > I've uploaded a patch for review:
> >> > > > https://review.openstack.org/#/c/531674/
> >> > > >
> >> > > > This is the first time I'm submitting a patch on openstack. I'd
> >> > > > like
> >> > >
> >> > > Welcome to OpenStack, Harsha.
> >> >
> >> > Thank you.
> >> >
> >> > > Please read
> >> > > https://docs.openstack.org/infra/manual/developers.html if you
> >> > > haven't.
> >> >
> >> > Ok, i'll read it.
> >> >
> >> > > I see that your change fails the basic tests, you can run these
> >> > > locally as follows to check that your fixes will pass:
> >> > >
> >> > > tox -e pep8
> >> > > tox -e py27
> >> >
> >> > I was wondering if there's a way to catch these errors without having
> >> > to submit it for gerrit review. I fixed the ones that were reported
> >> > in patch-set-1; looks like there's some new ones in the second
> >> > patch-set. I'll run the above commands to verify the fix locally.
> >> >
> >> > Thanks,
> >> > -Harsha
> >>
> >> I installed python-pip and tox.
> >> But when I run "tox -e pep8", I'm seeing some errors:
> >>
> >> building 'netifaces' extension
> >> gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall
> >> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
> >> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
> >> -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall
> >> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
> >> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
> >> -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DNETIFACES_VERSION=0.10.6
> >> -DHAVE_GETIFADDRS=1 -DHAVE_GETNAMEINFO=1 -DHAVE_NETASH_ASH_H=1
> >> -DHAVE_NETATALK_AT_H=1 -DHAVE_NETAX25_AX25_H=1
> >> -DHAVE_NETECONET_EC_H=1
> >> -DHAVE_NETIPX_IPX_H=1 -DHAVE_NETPACKET_PACKET_H=1
> >> -DHAVE_LINUX_IRDA_H=1 -DHAVE_LINUX_ATM_H=1 -DHAVE_LINUX_LLC_H=1
> >> -DHAVE_LINUX_TIPC_H=1 -DHAVE_LINUX_DN_H=1 -DHAVE_SOCKADDR_AT=1
> >> -DHAVE_SOCKADDR_AX25=1 -DHAVE_SOCKADDR_IN=1 -DHAVE_SOCKADDR_IN6=1
> >> -DHAVE_SOCKADDR_IPX=1 -DHAVE_SOCKADDR_UN=1 -DHAVE_SOCKADDR_ASH=1
> >> -DHAVE_SOCKADDR_EC=1 -DHAVE_SOCKADDR_LL=1 -DHAVE_SOCKADDR_ATMPVC=1
> >> -DHAVE_SOCKADDR_ATMSVC=1 -DHAVE_SOCKADDR_DN=1 -DHAVE_SOCKADDR_IRDA=1
> >> -DHAVE_SOCKADDR_LLC=1 -DHAVE_PF_NETLINK=1 -I/usr/include/python2.7 -c
> >> netifaces.c -o build/temp.linux-x86_64-2.7/netifaces.o
> >> netifaces.c:1:20: fatal error: Python.h: No such file or directory
> >> #include <Python.h>
> >> ^
> >> compilation terminated.
> >> error: command 'gcc' failed with exit status 1
> >>
> >> ----------------------------------------
> >> Command "/home/harshab/os-vif/.tox/pep8/bin/python2 -u -c "import
> >> setuptools, tokenize;__file__='/tmp/pip-build-OibnHO/netifaces/setup.py';f=getattr(tokenize,
> >> 'open', open)(__file__);code=f.read().replace('\r\n',
> >> '\n');f.close();exec(compile(code, __file__, 'exec'))" install
> >> --record /tmp/pip-3Hu__1-record/install-record.txt
> >> --single-version-externally-managed --compile --install-headers
> >> /home/harshab/os-vif/.tox/pep8/include/site/python2.7/netifaces"
> >> failed with error code 1 in /tmp/pip-build-OibnHO/netifaces/
> >>
> >> ERROR: could not install deps
> >> [-r/home/harshab/os-vif/requirements.txt,
> >> -r/home/harshab/os-vif/test-requirements.txt]; v =
> >> InvocationError('/home/harshab/os-vif/.tox/pep8/bin/pip install -U
> >> -r/home/harshab/os-vif/requirements.txt
> >> -r/home/harshab/os-vif/test-requirements.txt (see
> >> /home/harshab/os-vif/.tox/pep8/log/pep8-1.log)', 1)
> >> ___________________________________ summary ____________________________________
> >> ERROR: pep8: could not install deps
> >> [-r/home/harshab/os-vif/requirements.txt,
> >> -r/home/harshab/os-vif/test-requirements.txt]; v =
> >> InvocationError('/home/harshab/os-vif/.tox/pep8/bin/pip install -U
> >> -r/home/harshab/os-vif/requirements.txt
> >> -r/home/harshab/os-vif/test-requirements.txt (see
> >> /home/harshab/os-vif/.tox/pep8/log/pep8-1.log)', 1)
> >>
> >> Thanks,
> >> -Harsha
> >
> > That's happening because the 'pep8' target is installing all the
> > requirements for the project in a virtualenv, and one of them needs
> > Python development headers. What Linux distro are you using? On Fedora
> > you can fix this like so:
> >
> > sudo dnf install python-devel
>
> Thanks Stephen, I'm using RHEL and 'yum install python-devel' resolved it.
> -Harsha

I've resolved the test errors and addressed code review comments.
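In case anyone else trips over the same netifaces build failure: the root cause is simply that the CPython development headers were absent from the build host. A quick way to confirm that from Python itself is sketched below -- this is only an illustration (not part of the os-vif change), and the header path shown is just an example of what a typical Python 2.7 install reports:

    # check_python_headers.py - illustrative sketch, not part of os-vif.
    # Verifies that the CPython headers needed to build native extensions
    # (such as netifaces) are present on this machine.
    import os
    import sysconfig

    include_dir = sysconfig.get_paths()["include"]  # e.g. /usr/include/python2.7
    header = os.path.join(include_dir, "Python.h")
    if os.path.exists(header):
        print("Found %s; native extensions should build" % header)
    else:
        print("Missing %s; install python-devel (RHEL/Fedora) or "
              "python-dev (Debian/Ubuntu)" % header)

Running something like this before 'tox -e pep8' flags the problem without waiting for the gcc failure.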
The updated patch for review is here:
https://review.openstack.org/#/c/531674/

legacy-tempest-dsvm-nova-os-vif is reporting an error, but I'm not sure whether it is related to the changes in this patch-set, since I couldn't find any relevant errors in the log files. Zuul has reported 'Verified+1' on this patch.

Thanks,
-Harsha

> > On Ubuntu, I think it's something like this:
> >
> > sudo apt-get install python-dev
> >
> > Stephen

From madhuri.kumari at intel.com Wed Jan 17 05:22:55 2018
From: madhuri.kumari at intel.com (Kumari, Madhuri)
Date: Wed, 17 Jan 2018 05:22:55 +0000
Subject: [openstack-dev] [Nova][Ironic][API] Service Management API Design
Message-ID: <0512CBBECA36994BAA14C7FEDE986CA6041D1A8D@BGSMSX102.gar.corp.intel.com>

Hi Nova Developers,

I am working on adding a service management API in Ironic [1][2]. The spec adds a new /conductors API to list and enable/disable an ironic-conductor service.

I am struggling to understand the difference between shutting a service down manually and disabling it. So my question is: what happens to the VMs, and to any operation in progress, on a nova-compute service that we disable? What is the difference between shutting the service down and disabling it? I understand that both actions stop scheduling requests to the compute service and that the workloads are taken over by other nova-compute services.

Please help me understand the design in Nova.

[1] https://review.openstack.org/#/c/471217/
[2] https://bugs.launchpad.net/ironic/+bug/1526759

Regards,
Madhuri

From prometheanfire at gentoo.org Wed Jan 17 07:25:10 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Wed, 17 Jan 2018 01:25:10 -0600
Subject: [openstack-dev] [all][requirements] Freeze is coming next week
Message-ID: <20180117072510.nxx5wc3oj2r5lw5j@gentoo.org>

So get your changes in or get left behind :D

--
Matthew Thode (prometheanfire)
From prometheanfire at gentoo.org Wed Jan 17 08:05:11 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Wed, 17 Jan 2018 02:05:11 -0600
Subject: [openstack-dev] [all][requirements] Freeze is coming next week
In-Reply-To: <20180117072510.nxx5wc3oj2r5lw5j@gentoo.org>
References: <20180117072510.nxx5wc3oj2r5lw5j@gentoo.org>
Message-ID: <20180117080511.75xvzebtq3asvbgy@gentoo.org>

On 18-01-17 01:25:10, Matthew Thode wrote:
> So get your changes in or get left behind :D

Just to be clear, the HARD deadline for the freeze will be Friday January 26th at 23:59:59 UTC.

--
Matthew Thode (prometheanfire)

From zhipengh512 at gmail.com Wed Jan 17 09:07:21 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Wed, 17 Jan 2018 17:07:21 +0800
Subject: [openstack-dev] [acceleration]Cyborg Team Weekly Meeting 2018.01.17
Message-ID:

Hi Team,

Our weekly meeting starts at UTC 1500 in #openstack-cyborg as usual. The main agenda today is to go over our current dev progress; Dutch Althoff from Xilinx will also introduce the latest development on SDAccel :)

--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

From hjensas at redhat.com Wed Jan 17 09:54:01 2018
From: hjensas at redhat.com (Harald Jensås)
Date: Wed, 17 Jan 2018 10:54:01 +0100
Subject: [openstack-dev] [ironic] FFE - Requesting FFE for Routed Networks support.
Message-ID: <1516182841.12010.13.camel@redhat.com>

Requesting FFE for Routed Network support in networking-baremetal.
-------------------------------------------------------------------

# Pros
------
With the patches up for review[7] we have a working ml2 agent (this depends on a neutron fix) and mechanism driver combination that enables binding ports on neutron routed networks.

Specifically, we report the bridge_mappings data to neutron, which enables the _find_candidate_subnets() method in neutron ipam[1] to succeed in finding a candidate subnet available to the ironic node when ports on routed segments are bound.

This will allow users to take advantage of the functionality added in the DHCP Agent[2], which enables the DHCP agent to service other subnets on the network via DHCP relay. For Ironic this means we can support deploying nodes on a remote L3 network, e.g. a different datacenter or a different rack/rack-row.

# Cons
------
Integration with placement does not currently work.

Neutron uses Nova host-aggregates in combination with Placement. Specifically, hosts are added to a host-aggregate for segments based on SEGMENT_HOST_MAPPING. Ironic nodes cannot currently be added to host-aggregates in Nova. Because of this, the following will appear in the neutron logs when the ironic-neutron agent is started:
   RESP BODY: {"itemNotFound": {"message": "Compute host <node-id> could not be found.", "code": 404}}

Also, the placement api cannot be used to find good candidate ironic nodes with a baremetal port on the correct segment. This will have to be worked around by the operator via capabilities and flavor properties, or manual additions to resource providers in placement.

Depending on the direction of other projects, neutron and nova, the way placement will finally work is not certain.

Either the nova work [3] and [4], or a neutron change to use placement only, or a fallback to placement in neutron, would be possible. In either case there should be no need to change the networking-baremetal agent or mechanism driver.

# Risks
-------
Unless this bug[5] is fixed we might break the current baremetal mechanism driver functionality. I have proposed a patch[6] to neutron that fixes the issue.
In case no fix lands for this neutron bug soon we should probably push these changes to Rocky. # Core reviewers ---------------- Julia Kreger, Sam Betts [1] https://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/ip am_backend_mixin.py#n697 [2] https://review.openstack.org/#/c/468744/ [3] https://review.openstack.org/#/c/421009/ [4] https://review.openstack.org/#/c/421011/ [5] https://bugs.launchpad.net/neutron/+bug/1743579 [6] https://review.openstack.org/#/c/534449/ [7] https://review.openstack.org/#/q/project:openstack/networking-barem etal -- |Harald Jensås        |hjensas:irc From aj at suse.com Wed Jan 17 09:58:25 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 17 Jan 2018 10:58:25 +0100 Subject: [openstack-dev] [all][docs] Fixing report a bug link for documentation Message-ID: <98bb4bf2-2f8d-21cf-e7af-829578cf52db@suse.com> I noticed that the report-a-bug feature was giving wrong information for the Neutron api-guide, it missed to give the giturl and SHA of latest commit. This comes from not fully initialising the openstackdocstheme. I fixed this for neutron-lib with change https://review.openstack.org/534666 . If you're interested in having correct information, check our setup, especially for api-ref and api-guide documents, and review the content of the report a bug link (click on the "Bug" icon in the upper right corner). Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From hjensas at redhat.com Wed Jan 17 10:02:42 2018 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Wed, 17 Jan 2018 11:02:42 +0100 Subject: [openstack-dev] [ironic] FFE - Requesting FFE for Routed Networks support. In-Reply-To: <1516182841.12010.13.camel@redhat.com> References: <1516182841.12010.13.camel@redhat.com> Message-ID: <1516183362.12010.15.camel@redhat.com> On Wed, 2018-01-17 at 10:54 +0100, Harald Jensås wrote: > Requesting FFE for Routed Network support in networking-baremetal. > ------------------------------------------------------------------- > > > # Pros > ------ > With the patches up for review[7] we have a working ml2 agent; > __depends on neutron fix__; and mechanism driver combination that > enables support to bind ports on neutron routed networks. > > Specifically we report the bridge_mappings data to neutron, which > enable the _find_candidate_subnets() method in neutron ipam[1] to > succeed in finding a candidate subnet available to the ironic node > when > ports on routed segments are bound. > > This functionality will allow users to take advantage of the > functionality added in DHCP Agent[2] which enables the DHCP agent to > service other subnets on the network via DHCP relay. For Ironic this > means we can support deploying nodes on a remote L3 network, e.g > different datacenter or different rack/rack-row. > > > > # Cons > ------ > Integration with placement does not currently work. > > Neutron uses Nova host-aggregates in combination with Placement. > Specifically hosts are added to a host-aggregate for segments based > on > SEGMENT_HOST_MAPPING. Ironic nodes cannot currently be added to host- > aggregates in Nova. 
Because of this the following will appear in the > neutron logs when ironic-neutron agent is started: >    RESP BODY: {"itemNotFound": {"message": "Compute host node- > id> could not be found.", "code": 404}} > > Also the placement api cannot be used to find good candidate ironic > nodes with a baremetal port on the correct segment. This will have to > be worked around by the operator via capabilities and flavor > properties or manual additions to resource providers in placement. > > Depending on the direction of other projects, neutron and nova, the > way > placement will finally work is not certain.  > > Either the nova work [3] and [4], or a neutron change to use > placement > only or a fallback to placement in neutron would be possible. In > either > case there should be no need to change the networking-baremetal agent > or mechanism driver. > > > # Risks > ------- > Unless this bug[5] is fixed we might break the current baremetal > mechanism driver functionality. I have proposed a patch[6] to neutron > that fix the issue. In case no fix lands for this neutron bug soon we > should probably push these changes to Rocky. > > > # Core reviewers > ---------------- > Julia Kreger, Sam Betts > > > > > [1] https://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/ > ip > am_backend_mixin.py#n697 > [2] https://review.openstack.org/#/c/468744/ > ooops .. got the wrong urls in the first mail. [3] https://review.openstack.org/#/c/526753/ [4] https://review.openstack.org/#/c/529135/ > [5] https://bugs.launchpad.net/neutron/+bug/1743579 > [6] https://review.openstack.org/#/c/534449/ > [7] https://review.openstack.org/#/q/project:openstack/networking-bar > em > etal > > -- |Harald Jensås        |hjensas at redhat.com   |  www.redhat.com |+46 (0)701 91 23 17  |  hjensas:irc From tobias at citynetwork.se Wed Jan 17 10:40:11 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Wed, 17 Jan 2018 11:40:11 +0100 Subject: [openstack-dev] [publiccloud-wg] Reminder for todays meeting Message-ID: Hi all, Time again for a meeting for the Public Cloud WG - today at 1400 UTC in #openstack-meeting-3 Agenda and etherpad at: https://etherpad.openstack.org/p/publiccloud-wg See you later! Tobias Rydberg -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From thierry at openstack.org Wed Jan 17 10:51:52 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 17 Jan 2018 11:51:52 +0100 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: References: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> Message-ID: <0e787b3e-22f2-6ffd-6c1b-b95c51349302@openstack.org> Emilien Macchi wrote: > On Tue, Jan 16, 2018 at 8:29 AM, Jeremy Stanley wrote: >>> - how do we deal milestones in stories and also how can we have a >>> dashboard with an overview per milestone (useful for PTL + TripleO >>> release managers). >> >> So far, the general suggestion for stuff like this is to settle on a >> consistent set of story tags to apply. It really depends on whether >> you're trying to track this at a story or task level (there is no >> per-task tagging implemented yet at any rate). I could imagine, for >> example, setting something like tripleo-r2 as a tag on stories whose >> TripleO deliverable tasks are targeting Rocky milestone #2, and then >> you could have an automatic board with stories matching that tag and >> lanes based on the story status. 
> > Does this kind of board exist already? Rather than using tags, you can make a Board itself your "milestone view". To make a task/story part of the milestone objectives, you just add it to your board. Then use various lanes on that board to track progress. See the "Zuul v3 Operational" board in https://storyboard-blog.sotk.co.uk/things-that-storyboard-does-differently.html for an example -- I think it's pretty close to what you need. I /think/ if you used a tag you'd miss a feature: the ability to specify a board lane as needing to automatically contain "all things that match a given criteria (like a tag match) but which would not already appear in one of the other lanes on this board". *And* allow to move things from that automatic lane to the other lanes. That way you can have a board that automatically contains all the things that match your tag (by default in the automatic lane), but still lets you move things around onto various lanes. I don't think that exists, which is why I'd use a Board directly as a "milestone tracker", rather than go through tagging. -- Thierry Carrez (ttx) From chkumar246 at gmail.com Wed Jan 17 11:57:10 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 17 Jan 2018 17:27:10 +0530 Subject: [openstack-dev] [qa][all] QA Office Hours on 18th Jan, 2018 Message-ID: Hello All, A kind reminder that tomorrow at 9:00 UTC we'll start office hours for the QA team in the #openstack-qa channel. Please join us with any question/comment you may have related to tempest plugin split community goal, tempest and others QA tools. We'll triage bugs for QA projects from the past 7 days and then extend the time frame if there is time left. Thanks, Chandan Kumar From ekuvaja at redhat.com Wed Jan 17 12:04:58 2018 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Wed, 17 Jan 2018 12:04:58 +0000 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky In-Reply-To: References: <397BB99F-D7B2-47B3-9724-E8B628EFD5C2@cern.ch> <76c4df1e-2e82-96c6-a983-36040855a42d@gmail.com> Message-ID: On Tue, Jan 16, 2018 at 4:29 PM, Emilien Macchi wrote: > Here's an update so we can hopefully, as a community, take a decision > in the next days or so. > > > * Migration to StoryBoard > > Champion: Kendall Nelson > https://review.openstack.org/#/c/513875/ > Some projects already migrated, some projects will migrate soon but > there is still a gap of things that prevents some projects to not > migrate. > See https://storyboard.openstack.org/#!/search?tags=blocking-storyboard-migration > For that reason, we are postponing this goal to later but work needs > to keep going to make that happen one day. > > > * Remove mox > > Champion: Sean McGinnis (unless someone else steps up) > https://review.openstack.org/#/c/532361/ > This goal is to clean some technical debt in the code. > It remains a good candidate for Queens. > > > * Ensure pagination links > > Champion: Monty Taylor > https://review.openstack.org/#/c/532627/ > This one would improve API users experience. > It remains a good candidate for Queens. > > > * Enable mutable configuration > Champion: ChangBo Guo > Nothing was proposed in governance so far and we have enough proposals > now, I guess it could be a candidate for a future cycle though. This > one would make happy our operators. > > > * Cold upgrades capabilities > Champion: Masayuki Igawa > https://review.openstack.org/#/c/533544/ > This one would be appreciated by our operators who always need > improvements on upgrades experience - I believe it would be a good > candidate. 
> > > Note: some projects requested about having less goals so they have > more time to work on their backlogs. While I agree with that, I would > like to know who asked exactly, and if they would be affected by the > goals or not. > It will help us to decide which ones we take. > > So now, it's really a good time to speak-up and say if: > - your project could commit to 2 of these goals or not (and why? backlog? etc) > - which ones you couldn't commit to > - the ones you prefer > Looking the current contributor base and momentum on Glance, I'd say we would fail to catch up with most of these. I think we've got rid of mox already and I'm not exactly sure how the mutable config goal aligns with the Glance's ability to reload configs in flight, so those two might be doable, based on the amount of bikeshedding needed for any API related change I'd say the pagination link would probably be least likely done before Unicorn release. - Jokke > We need to take a decision as a community, not just TC members, so > please bring feedback. > > Thanks, > > > On Fri, Jan 12, 2018 at 2:19 PM, Lance Bragstad wrote: >> >> >> On 01/12/2018 11:09 AM, Tim Bell wrote: >>> I was reading a tweet from Jean-Daniel and wondering if there would be an appropriate community goal regarding support of some of the later API versions or whether this would be more of a per-project goal. >>> >>> https://twitter.com/pilgrimstack/status/951860289141641217 >>> >>> Interesting numbers about customers tools used to talk to our @OpenStack APIs and the Keystone v3 compatibility: >>> - 10% are not KeystoneV3 compatible >>> - 16% are compatible >>> - for the rest, the tools documentation has no info >>> >>> I think Keystone V3 and Glance V2 are the ones with APIs which have moved on significantly from the initial implementations and not all projects have been keeping up. >> Yeah, I'm super interested in this, too. I'll be honest I'm not quite >> sure where to start. If the tools are open source we can start >> contributing to them directly. >>> >>> Tim >>> >>> -----Original Message----- >>> From: Emilien Macchi >>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" >>> Date: Friday, 12 January 2018 at 16:51 >>> To: OpenStack Development Mailing List >>> Subject: Re: [openstack-dev] [all] [tc] Community Goals for Rocky >>> >>> Here's a quick update before the weekend: >>> >>> 2 goals were proposed to governance: >>> >>> Remove mox >>> https://review.openstack.org/#/c/532361/ >>> Champion: Sean McGinnis (unless someone else steps up) >>> >>> Ensure pagination links >>> https://review.openstack.org/#/c/532627/ >>> Champion: Monty Taylor >>> >>> 2 more goals are about to be proposed: >>> >>> Enable mutable configuration >>> Champion: ChangBo Guo >>> >>> Cold upgrades capabilities >>> Champion: Masayuki Igawa >>> >>> >>> Thanks everyone for your participation, >>> We hope to make a vote within the next 2 weeks so we can prepare the >>> PTG accordingly. >>> >>> On Tue, Jan 9, 2018 at 10:37 AM, Emilien Macchi wrote: >>> > As promised, let's continue the discussion and move things forward. >>> > >>> > This morning Thierry brought the discussion during the TC office hour >>> > (that I couldn't attend due to timezone): >>> > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2018-01-09T09:18:33 >>> > >>> > Some outputs: >>> > >>> > - One goal has been proposed so far. >>> > >>> > Right now, we only have one goal proposal: Storyboard Migration. 
There >>> > are some concerns about the ability to achieve this goal in 6 months. >>> > At that point, we think it would be great to postpone the goal to S >>> > cycle, continue the progress (kudos to Kendall) and fine other goals >>> > for Rocky. >>> > >>> > >>> > - We still have a good backlog of goals, we're just missing champions. >>> > >>> > https://etherpad.openstack.org/p/community-goals >>> > >>> > Chris brought up "pagination links in collection resources" in api-wg >>> > guidelines theme. He said in the past this goal was more a "should" >>> > than a "must". >>> > Thierry mentioned privsep migration (done in Nova and Zun). (action, >>> > ping mikal about it). >>> > Thierry also brought up the version discovery (proposed by Monty). >>> > Flavio proposed mutable configuration, which might be very useful for operators. >>> > He also mentioned that IPv6 support goal shouldn't be that far from >>> > done, but we're currently lacking in CI jobs that test IPv6 >>> > deployments (question for infra/QA, can we maybe document the gap so >>> > we can run some gate jobs on ipv6 ?) >>> > (personal note on that one, since TripleO & Puppet OpenStack CI >>> > already have IPv6 jobs, we can indeed be confident that it shouldn't >>> > be that hard to complete this goal in 6 months, I guess the work needs >>> > to happen in the projects layouts). >>> > Another interesting goal proposed by Thierry, also useful for >>> > operators, is to move more projects to assert:supports-upgrade tag. >>> > Thierry said we are probably not that far from this goal, but the >>> > major lack is in testing. >>> > Finally, another "simple" goal is to remove mox/mox3 (Flavio said most >>> > of projects don't use it anymore already). >>> > >>> > With that said, let's continue the discussion on these goals, see >>> > which ones can be actionable and find champions. >>> > >>> > - Flavio asked how would it be perceived if one cycle wouldn't have at >>> > least one community goal. >>> > >>> > Thierry said we could introduce multi-cycle goals (Storyboard might be >>> > a good candidate). >>> > Chris and Thierry thought that it would be a bad sign for our >>> > community to not have community goals during a cycle, "loss of >>> > momentum" eventually. >>> > >>> > >>> > Thanks for reading so far, >>> > >>> > On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi wrote: >>> >> On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi wrote: >>> >> [...] >>> >>> Suggestions are welcome: >>> >>> - on the mailing-list, in a new thread per goal [all] [tc] Proposing >>> >>> goal XYZ for Rocky >>> >>> - on Gerrit in openstack/governance like Kendall did. >>> >> >>> >> Just a fresh reminder about Rocky goals. >>> >> A few questions that we can ask ourselves: >>> >> >>> >> 1) What common challenges do we have? >>> >> >>> >> e.g. Some projects don't have mutable configuration or some projects >>> >> aren't tested against IPv6 clouds, etc. >>> >> >>> >> 2) Who is willing to drive a community goal (a.k.a. Champion)? >>> >> >>> >> note: a Champion is someone who volunteer to drive the goal, but >>> >> doesn't commit to write the code necessarily. The Champion will >>> >> communicate with projects PTLs about the goal, and make the liaison if >>> >> needed. >>> >> >>> >> The list of ideas for Community Goals is documented here: >>> >> https://etherpad.openstack.org/p/community-goals >>> >> >>> >> Please be involved and propose some ideas, I'm sure our community has >>> >> some common goals, right ? :-) >>> >> Thanks, and happy holidays. 
I'll follow-up in January of next year. >>> >> -- >>> >> Emilien Macchi >>> > >>> > >>> > >>> > -- >>> > Emilien Macchi >>> >>> >>> >>> -- >>> Emilien Macchi >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From clint at fewbar.com Wed Jan 17 12:06:35 2018 From: clint at fewbar.com (Clint Byrum) Date: Wed, 17 Jan 2018 04:06:35 -0800 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: <0e787b3e-22f2-6ffd-6c1b-b95c51349302@openstack.org> References: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> <0e787b3e-22f2-6ffd-6c1b-b95c51349302@openstack.org> Message-ID: <1516189284-sup-1775@fewbar.com> Excerpts from Thierry Carrez's message of 2018-01-17 11:51:52 +0100: > Emilien Macchi wrote: > > On Tue, Jan 16, 2018 at 8:29 AM, Jeremy Stanley wrote: > >>> - how do we deal milestones in stories and also how can we have a > >>> dashboard with an overview per milestone (useful for PTL + TripleO > >>> release managers). > >> > >> So far, the general suggestion for stuff like this is to settle on a > >> consistent set of story tags to apply. It really depends on whether > >> you're trying to track this at a story or task level (there is no > >> per-task tagging implemented yet at any rate). I could imagine, for > >> example, setting something like tripleo-r2 as a tag on stories whose > >> TripleO deliverable tasks are targeting Rocky milestone #2, and then > >> you could have an automatic board with stories matching that tag and > >> lanes based on the story status. > > > > Does this kind of board exist already? > > Rather than using tags, you can make a Board itself your "milestone > view". To make a task/story part of the milestone objectives, you just > add it to your board. Then use various lanes on that board to track > progress. > > See the "Zuul v3 Operational" board in > https://storyboard-blog.sotk.co.uk/things-that-storyboard-does-differently.html > for an example -- I think it's pretty close to what you need. > > I /think/ if you used a tag you'd miss a feature: the ability to specify > a board lane as needing to automatically contain "all things that match > a given criteria (like a tag match) but which would not already appear > in one of the other lanes on this board". *And* allow to move things > from that automatic lane to the other lanes. 
That way you can have a > board that automatically contains all the things that match your tag (by > default in the automatic lane), but still lets you move things around > onto various lanes. > > I don't think that exists, which is why I'd use a Board directly as a > "milestone tracker", rather than go through tagging. > That particular example board was built from tasks semi-automatically, using a tag, by this script running on a cron job somewhere: https://git.openstack.org/cgit/openstack-infra/zuul/tree/tools/update-storyboard.py?h=feature/zuulv3 We did this so that we could have a rule "any task that is open with the zuulv3 tag must be on this board". Jim very astutely noticed that I was not very good at being a robot that did this and thus created the script to ease me into retirement from zuul project management. The script adds new things in New, and moves tasks automatically to In Progress, and then removes them when they are completed. We would periodically groom the "New" items into an appropriate lane with the hopes of building what you might call a rolling-sprint in Todo, and calling out blocked tasks in a regular meeting. Stories were added manually as a way to say "look in here and add tasks", and manually removed when the larger effort of the story was considered done. I rather like the semi-automatic nature of it, and would definitely suggest that something like this be included in Storyboard if other groups find the board building script useful. This made a cross-project effort between Nodepool and Zuul go more smoothly as we had some more casual contributors to both, and some more full-time. The best part about a tag is that it is readily visible on the story view. I think given that alone, tags are a pretty valid way to call out what milestone you think a story is aimed at. From thierry at openstack.org Wed Jan 17 12:33:29 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 17 Jan 2018 13:33:29 +0100 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: <1516189284-sup-1775@fewbar.com> References: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> <0e787b3e-22f2-6ffd-6c1b-b95c51349302@openstack.org> <1516189284-sup-1775@fewbar.com> Message-ID: Clint Byrum wrote: > [...] > That particular example board was built from tasks semi-automatically, > using a tag, by this script running on a cron job somewhere: > > https://git.openstack.org/cgit/openstack-infra/zuul/tree/tools/update-storyboard.py?h=feature/zuulv3 > > We did this so that we could have a rule "any task that is open with > the zuulv3 tag must be on this board". Jim very astutely noticed that > I was not very good at being a robot that did this and thus created the > script to ease me into retirement from zuul project management. > > The script adds new things in New, and moves tasks automatically to > In Progress, and then removes them when they are completed. We would > periodically groom the "New" items into an appropriate lane with the hopes > of building what you might call a rolling-sprint in Todo, and calling > out blocked tasks in a regular meeting. Stories were added manually as > a way to say "look in here and add tasks", and manually removed when > the larger effort of the story was considered done. > > I rather like the semi-automatic nature of it, and would definitely > suggest that something like this be included in Storyboard if other > groups find the board building script useful. 
This made a cross-project > effort between Nodepool and Zuul go more smoothly as we had some more > casual contributors to both, and some more full-time. That's a great example that illustrates StoryBoard design: rather than do too much upfront feature design, focus on primitives and expose them fully through a strong API, then let real-world usage dictate patterns that might result in future features. The downside of this approach is of course getting enough usage on a product that appears a bit "raw" in terms of features. But I think we are closing on getting that critical mass :) -- Thierry Carrez (ttx) From sathlang at redhat.com Wed Jan 17 12:46:13 2018 From: sathlang at redhat.com (Sofer Athlan-Guyot) Date: Wed, 17 Jan 2018 13:46:13 +0100 Subject: [openstack-dev] [tripleo] Ocata to Pike upgrade job is working as of today. Message-ID: <87zi5cejqi.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Hi, So join us (upgrade squad) to celebrate the working ocata->pike upgrade job[1], without any depends-on whatsoever. We would like it to be voting as soon as possible. It has been a rather consuming task to revive that forgotten but important jobs, and the only way for it to not drift into oblivion again is to have it voting. Eventually, let’s thanks rdo-cloud people for their support (especially David Manchado), James Slagle for Traas[2] and Alfredo Moralejo for his constant availability to answer our questions. Thanks, [1] https://review.openstack.org/#/c/532791/, look for «gate-tripleo-ci-centos-7-containers-multinode-upgrades-pike» [2] https://github.com/slagle/traas … the repo we use -> https://github.com/sathlan/traas (so many pull requests to make that it would be cool for it to be an openstack project … :)) -- Sofer Athlan From thierry at openstack.org Wed Jan 17 12:47:04 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 17 Jan 2018 13:47:04 +0100 Subject: [openstack-dev] [ptg] Dublin PTG: list of etherpads Message-ID: <81c5f577-c620-bb7b-178e-d985532c0780@openstack.org> Hi everyone, As teams start planning for their meetings in Dublin during the PTG, please add links to your planning documents (etherpads or others) on the reference wiki page at: https://wiki.openstack.org/wiki/PTG/Rocky/Etherpads NB: We are still working on the room / day / track assignment based on each team's requirements and hope to be able to publish a strawman proposal by the end of the week. -- Thierry Carrez (ttx) From jaypipes at gmail.com Wed Jan 17 13:22:29 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 17 Jan 2018 08:22:29 -0500 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> Message-ID: <43f5884e-39d2-23c8-7606-5940f33251bd@gmail.com> On 01/16/2018 08:19 PM, Zhenyu Zheng wrote: > Thanks for the info, so it seems we are not going to implement aggregate > overcommit ratio in placement at least in the near future? As @edleafe alluded to, we will not be adding functionality to the placement service to associate an overcommit ratio with an aggregate. This was/is buggy functionality that we do not wish to bring forward into the placement modeling system. 
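(For operators following along: the supported replacement is to set the ratios explicitly on each nova-compute node and let configuration management keep them consistent. A minimal nova.conf sketch -- the values are examples only, and the [filter_scheduler] section name assumes an Ocata-or-newer scheduler:

    # On each nova-compute node: explicit per-node overcommit ratios.
    [DEFAULT]
    cpu_allocation_ratio = 16.0
    ram_allocation_ratio = 1.5
    disk_allocation_ratio = 1.0

    # On the scheduler: the Aggregate[Core|Ram|Disk]Filter entries can simply
    # be dropped from the enabled filters; CoreFilter/RamFilter/DiskFilter are
    # redundant with the placement query as well. The filter list here is an
    # illustrative example, not a recommendation.
    [filter_scheduler]
    enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter

A config management tool pushing the same template to every host in what used to be the "aggregate" gives you the old behavior without the ambiguities below.)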
Reasons the current functionality is poorly architected and buggy (mentioned in @melwitt's footnote): 1) If a nova-compute service's CONF.cpu_allocation_ratio is different from the host aggregate's cpu_allocation_ratio metadata value, which value should be considered by the AggregateCoreFilter filter? 2) If a nova-compute service is associated with multiple host aggregates, and those aggregates contain different values for their cpu_allocation_ratio metadata value, which one should be used by the AggregateCoreFilter? The bottom line for me is that the AggregateCoreFilter has been used as a crutch to solve a **configuration management problem**. Instead of the configuration management system (Puppet, etc) setting nova-compute service CONF.cpu_allocation_ratio options *correctly*, having the admin set the HostAggregate metadata cpu_allocation_ratio value is error-prone for the reasons listed above. Incidentally, this same design flaw is the reason that availability zones are so poorly defined in Nova. There is actually no such thing as an availability zone in Nova. Instead, an AZ is merely a metadata tag (or a CONF option! :( ) that may or may not exist against a host aggregate. There's lots of spaghetti in Nova due to the decision to use host aggregate metadata for availability zone information, which should have always been the domain of a **configuration management system** to set. [*] In the Placement service, we have the concept of aggregates, too. However, in Placement, an aggregate (note: not "host aggregate") is merely a grouping mechanism for resource providers. Placement aggregates do not have any attributes themselves -- they merely represent the relationship between resource providers. Placement aggregates suffer from neither of the above listed design flaws because they are not buckets for metadata. ok . Best, -jay [*] Note the assumption on line 97 here: https://github.com/openstack/nova/blob/master/nova/availability_zones.py#L96-L100 > On Wed, Jan 17, 2018 at 5:24 AM, melanie witt > wrote: > > Hello Stackers, > > This is a heads up to any of you using the AggregateCoreFilter, > AggregateRamFilter, and/or AggregateDiskFilter in the filter > scheduler. These filters have effectively allowed operators to set > overcommit ratios per aggregate rather than per compute node in <= > Newton. > > Beginning in Ocata, there is a behavior change where aggregate-based > overcommit ratios will no longer be honored during scheduling. > Instead, overcommit values must be set on a per compute node basis > in nova.conf. > > Details: as of Ocata, instead of considering all compute nodes at > the start of scheduler filtering, an optimization has been added to > query resource capacity from placement and prune the compute node > list with the result *before* any filters are applied. Placement > tracks resource capacity and usage and does *not* track aggregate > metadata [1]. Because of this, placement cannot consider > aggregate-based overcommit and will exclude compute nodes that do > not have capacity based on per compute node overcommit. > > How to prepare: if you have been relying on per aggregate > overcommit, during your upgrade to Ocata, you must change to using > per compute node overcommit ratios in order for your scheduling > behavior to stay consistent. Otherwise, you may notice increased > NoValidHost scheduling failures as the aggregate-based overcommit is > no longer being considered. 
You can safely remove the > AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter > from your enabled_filters and you do not need to replace them with > any other core/ram/disk filters. The placement query takes care of > the core/ram/disk filtering instead, so CoreFilter, RamFilter, and > DiskFilter are redundant. > > Thanks, > -melanie > > [1] Placement has been a new slate for resource management and prior > to placement, there were conflicts between the different methods for > setting overcommit ratios that were never addressed, such as, "which > value to take if a compute node has overcommit set AND the aggregate > has it set? Which takes precedence?" And, "if a compute node is in > more than one aggregate, which overcommit value should be taken?" > So, the ambiguities were not something that was desirable to bring > forward into placement. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Wed Jan 17 13:39:13 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 17 Jan 2018 05:39:13 -0800 Subject: [openstack-dev] [tripleo] Ocata to Pike upgrade job is working as of today. In-Reply-To: <87zi5cejqi.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> References: <87zi5cejqi.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: Nice work Sofer! I have a few questions, see inline. On Wed, Jan 17, 2018 at 4:46 AM, Sofer Athlan-Guyot wrote: > Hi, > > So join us (upgrade squad) to celebrate the working ocata->pike upgrade > job[1], without any depends-on whatsoever. w00t, really good work! > We would like it to be voting as soon as possible. It has been a > rather consuming task to revive that forgotten but important jobs, and > the only way for it to not drift into oblivion again is to have it > voting. The last time I asked how we could make RDO jobs voting, it was something in Software Factory to enable, but I'm not sure about the details. I'm ok to make them voting, as long as: - they don't timeout or reach the timeout limit (which isn't the case now, 2h27 is really good) - they proved to be stable during some time - they're part of the promotion pipeline so gate can't break easily BTW the process is documented here: https://github.com/openstack/tripleo-specs/blob/master/specs/policy/adding-ci-jobs.rst > Eventually, let’s thanks rdo-cloud people for their support (especially > David Manchado), James Slagle for Traas[2] and Alfredo Moralejo for his > constant availability to answer our questions. ++ > > Thanks, > > [1] https://review.openstack.org/#/c/532791/, look for «gate-tripleo-ci-centos-7-containers-multinode-upgrades-pike» > [2] https://github.com/slagle/traas … the repo we use -> https://github.com/sathlan/traas (so many pull requests to make that it would be cool for it to be an openstack project … :)) Please rebase https://github.com/slagle/traas/pull/9 and address James's comments, so we can avoid using a fork. Also, any plan to run tempest instead of pingtest? 
Thanks,
--
Emilien Macchi

From sbauza at redhat.com Wed Jan 17 13:57:37 2018
From: sbauza at redhat.com (Sylvain Bauza)
Date: Wed, 17 Jan 2018 14:57:37 +0100
Subject: Re: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata
In-Reply-To: <43f5884e-39d2-23c8-7606-5940f33251bd@gmail.com>
References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> <43f5884e-39d2-23c8-7606-5940f33251bd@gmail.com>
Message-ID:

On Wed, Jan 17, 2018 at 2:22 PM, Jay Pipes wrote:
> On 01/16/2018 08:19 PM, Zhenyu Zheng wrote:
>> Thanks for the info, so it seems we are not going to implement aggregate
>> overcommit ratio in placement at least in the near future?
>
> As @edleafe alluded to, we will not be adding functionality to the
> placement service to associate an overcommit ratio with an aggregate. This
> was/is buggy functionality that we do not wish to bring forward into the
> placement modeling system.
>
> Reasons the current functionality is poorly architected and buggy
> (mentioned in @melwitt's footnote):
>
> 1) If a nova-compute service's CONF.cpu_allocation_ratio is different from
> the host aggregate's cpu_allocation_ratio metadata value, which value
> should be considered by the AggregateCoreFilter filter?
>
> 2) If a nova-compute service is associated with multiple host aggregates,
> and those aggregates contain different values for their
> cpu_allocation_ratio metadata value, which one should be used by the
> AggregateCoreFilter?
>
> The bottom line for me is that the AggregateCoreFilter has been used as a
> crutch to solve a **configuration management problem**.
>
> Instead of the configuration management system (Puppet, etc) setting
> nova-compute service CONF.cpu_allocation_ratio options *correctly*, having
> the admin set the HostAggregate metadata cpu_allocation_ratio value is
> error-prone for the reasons listed above.
>

Well, the main reason people started using AggregateCoreFilter and the others is that, pre-Newton, it was literally impossible to assign different allocation ratios to individual computes, unless you grouped them into aggregates and used those filters. Now that the ratios are per-compute, there is no need to keep those filters, except if you never touch the computes' nova.conf so that they default to the scheduler's values. The remaining corner case would be "I have 1000+ computes and I just want to apply specific ratios to only one or two", but then I'd second Jay and say "Config management is the solution to your problem".

> Incidentally, this same design flaw is the reason that availability zones
> are so poorly defined in Nova. There is actually no such thing as an
> availability zone in Nova. Instead, an AZ is merely a metadata tag (or a
> CONF option! :( ) that may or may not exist against a host aggregate.
> There's lots of spaghetti in Nova due to the decision to use host aggregate
> metadata for availability zone information, which should have always been
> the domain of a **configuration management system** to set. [*]
>

IMHO, that's not exactly the root cause of the spaghetti code we have for AZs. I rather like the idea of seeing an availability zone as just a user-visible aggregate, because it makes things simple to understand. The spaghetti code is rather due to the transitive relationship between an aggregate, a compute and an instance being misunderstood: we introduced the notion of an "instance AZ", which is a fallacy.
Instances shouldn't have a field saying "here is my AZ", it should rather be a flag saying "what the user wanted as AZ ? (None being a choice) " In the Placement service, we have the concept of aggregates, too. However, > in Placement, an aggregate (note: not "host aggregate") is merely a > grouping mechanism for resource providers. Placement aggregates do not have > any attributes themselves -- they merely represent the relationship between > resource providers. Placement aggregates suffer from neither of the above > listed design flaws because they are not buckets for metadata. > > ok . > > Best, > -jay > > [*] Note the assumption on line 97 here: > > https://github.com/openstack/nova/blob/master/nova/availabil > ity_zones.py#L96-L100 > > On Wed, Jan 17, 2018 at 5:24 AM, melanie witt > melwittt at gmail.com>> wrote: >> >> Hello Stackers, >> >> This is a heads up to any of you using the AggregateCoreFilter, >> AggregateRamFilter, and/or AggregateDiskFilter in the filter >> scheduler. These filters have effectively allowed operators to set >> overcommit ratios per aggregate rather than per compute node in <= >> Newton. >> >> Beginning in Ocata, there is a behavior change where aggregate-based >> overcommit ratios will no longer be honored during scheduling. >> Instead, overcommit values must be set on a per compute node basis >> in nova.conf. >> >> Details: as of Ocata, instead of considering all compute nodes at >> the start of scheduler filtering, an optimization has been added to >> query resource capacity from placement and prune the compute node >> list with the result *before* any filters are applied. Placement >> tracks resource capacity and usage and does *not* track aggregate >> metadata [1]. Because of this, placement cannot consider >> aggregate-based overcommit and will exclude compute nodes that do >> not have capacity based on per compute node overcommit. >> >> How to prepare: if you have been relying on per aggregate >> overcommit, during your upgrade to Ocata, you must change to using >> per compute node overcommit ratios in order for your scheduling >> behavior to stay consistent. Otherwise, you may notice increased >> NoValidHost scheduling failures as the aggregate-based overcommit is >> no longer being considered. You can safely remove the >> AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter >> from your enabled_filters and you do not need to replace them with >> any other core/ram/disk filters. The placement query takes care of >> the core/ram/disk filtering instead, so CoreFilter, RamFilter, and >> DiskFilter are redundant. >> >> Thanks, >> -melanie >> >> [1] Placement has been a new slate for resource management and prior >> to placement, there were conflicts between the different methods for >> setting overcommit ratios that were never addressed, such as, "which >> value to take if a compute node has overcommit set AND the aggregate >> has it set? Which takes precedence?" And, "if a compute node is in >> more than one aggregate, which overcommit value should be taken?" >> So, the ambiguities were not something that was desirable to bring >> forward into placement. 
>> >> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From dtantsur at redhat.com Wed Jan 17 15:05:15 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 17 Jan 2018 16:05:15 +0100 Subject: [openstack-dev] [ironic] FFE - Requesting FFE for Routed Networks support. In-Reply-To: <1516182841.12010.13.camel@redhat.com> References: <1516182841.12010.13.camel@redhat.com> Message-ID: <9b4d2edd-e718-09f3-13f0-638d5f4351a6@redhat.com>

Hi! I'm essentially +1 on granting this FFE, as it's low-risk work for a great feature. See one comment inline.

On 01/17/2018 10:54 AM, Harald Jensås wrote:
> Requesting FFE for Routed Network support in networking-baremetal.
> -------------------------------------------------------------------
>
> # Pros
> ------
> With the patches up for review[7] we have a working ml2 agent; > __depends on neutron fix__; and mechanism driver combination that > enables support to bind ports on neutron routed networks.
>
> Specifically we report the bridge_mappings data to neutron, which > enable the _find_candidate_subnets() method in neutron ipam[1] to > succeed in finding a candidate subnet available to the ironic node when > ports on routed segments are bound.
>
> This functionality will allow users to take advantage of the > functionality added in DHCP Agent[2] which enables the DHCP agent to > service other subnets on the network via DHCP relay. For Ironic this > means we can support deploying nodes on a remote L3 network, e.g > different datacenter or different rack/rack-row.
>
> # Cons
> ------
> Integration with placement does not currently work.
>
> Neutron uses Nova host-aggregates in combination with Placement. > Specifically hosts are added to a host-aggregate for segments based on > SEGMENT_HOST_MAPPING. Ironic nodes cannot currently be added to > host-aggregates in Nova. Because of this the following will appear in the > neutron logs when the ironic-neutron agent is started:
> RESP BODY: {"itemNotFound": {"message": "Compute host <host id> could not be found.", "code": 404}}
>
> Also the placement api cannot be used to find good candidate ironic > nodes with a baremetal port on the correct segment. This will have to > be worked around by the operator via capabilities and flavor properties > or manual additions to resource providers in placement.
>
> Depending on the direction of other projects, neutron and nova, the way > placement will finally work is not certain.
>
> Either the nova work [3] and [4], or a neutron change to use placement > only or a fallback to placement in neutron would be possible.
> In either > case there should be no need to change the networking-baremetal agent > or mechanism driver.
>
> # Risks
> -------
> Unless this bug[5] is fixed we might break the current baremetal > mechanism driver functionality. I have proposed a patch[6] to neutron > that fix the issue. In case no fix lands for this neutron bug soon we > should probably push these changes to Rocky.

Let's add Depends-On to the first patch in the chain to make sure your patches don't merge until the fix is merged.

>
> # Core reviewers
> ----------------
> Julia Kreger, Sam Betts
>
> [1] https://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/ipam_backend_mixin.py#n697
> [2] https://review.openstack.org/#/c/468744/
> [3] https://review.openstack.org/#/c/421009/
> [4] https://review.openstack.org/#/c/421011/
> [5] https://bugs.launchpad.net/neutron/+bug/1743579
> [6] https://review.openstack.org/#/c/534449/
> [7] https://review.openstack.org/#/q/project:openstack/networking-baremetal
>

From tobias at citynetwork.se Wed Jan 17 15:05:10 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Wed, 17 Jan 2018 16:05:10 +0100 Subject: [openstack-dev] [publiccloud-wg] Missing features work session Message-ID: <707b7cc0-8393-6a2c-2539-cc6abd71f7dd@citynetwork.se>

Hi everyone, We had a good session last week working on the list we call "Missing features", getting it up to date and finding contact persons and authors for each item. We now plan to have 2 more work sessions for that, listed below. This time we change the time of day to 0800 UTC.

Links: https://etherpad.openstack.org/p/publiccloud-wg https://launchpad.net/openstack-publiccloud-wg

Where: #openstack-publiccloud When: Thursday 18th January 0800 UTC
Where: #openstack-publiccloud When: Wednesday 24th January 0800 UTC

Hope to see you there!

Regards, Tobias Rydberg Chair Public Cloud WG
-------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL:

From sathlang at redhat.com Wed Jan 17 15:52:46 2018 From: sathlang at redhat.com (Sofer Athlan-Guyot) Date: Wed, 17 Jan 2018 16:52:46 +0100 Subject: [openstack-dev] [tripleo] Ocata to Pike upgrade job is working as of today. In-Reply-To: References: <87zi5cejqi.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: <87wp0geb3l.fsf@work.i-did-not-set--mail-host-address--so-tickle-me>

Emilien Macchi writes:
> Nice work Sofer!
> I have a few questions, see inline.
>
> On Wed, Jan 17, 2018 at 4:46 AM, Sofer Athlan-Guyot wrote:
>> Hi,
>>
>> So join us (upgrade squad) to celebrate the working ocata->pike upgrade >> job[1], without any depends-on whatsoever.
>
> w00t, really good work!
>
>> We would like it to be voting as soon as possible. It has been a >> rather consuming task to revive that forgotten but important jobs, and >> the only way for it to not drift into oblivion again is to have it >> voting.
>
> The last time I asked how we could make RDO jobs voting, it was > something in Software Factory to enable, but I'm not sure about the > details.
> I'm ok to make them voting, as long as:
>
> - they don't timeout or reach the timeout limit (which isn't the case > now, 2h27 is really good)
> - they proved to be stable during some time
> - they're part of the promotion pipeline so gate can't break easily
>
> BTW the process is documented here:
> https://github.com/openstack/tripleo-specs/blob/master/specs/policy/adding-ci-jobs.rst

Oki reading that, thanks for the pointer.

>> Eventually, let’s thanks rdo-cloud people for their support (especially >> David Manchado), James Slagle for Traas[2] and Alfredo Moralejo for his >> constant availability to answer our questions.
>
> ++
>
>> Thanks,
>>
>> [1] https://review.openstack.org/#/c/532791/, look for «gate-tripleo-ci-centos-7-containers-multinode-upgrades-pike»
>> [2] https://github.com/slagle/traas … the repo we use -> https://github.com/sathlan/traas (so many pull requests to make that it would be cool for it to be an openstack project … :))
>
> Please rebase https://github.com/slagle/traas/pull/9 and address > James's comments, so we can avoid using a fork.

So yeah, the situation with traas is not so good, as we have a 70-patch difference with the main repo. The pull/9 may be abandoned (jfrancoa will see if it’s still required) in favor of what’s in https://github.com/sathlan/traas . We’re going to work toward pulling the main patches into slagle/traas to avoid a fork.

> Also, any plan to run tempest instead of pingtest?

Not yet. We could cycle back to that at a later point in time, unless it’s a requirement. Currently I’ve added it to our backlog.

A last point is that we may add ceph in the mix based on what is done in scenario 001 and 004.

>
> Thanks,
> --
> Emilien Macchi
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Chem

From dtantsur at redhat.com Wed Jan 17 17:11:05 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 17 Jan 2018 18:11:05 +0100 Subject: [openstack-dev] [ironic] Rocky PTG planning Message-ID: <87289806-6352-557d-683d-8bb31b3e8c71@redhat.com>

Hi all! The PTG is slowly approaching. Make sure to do your visa paperwork (those unfortunate of us who need it) and let's start planning! Drop your ideas on the etherpad: https://etherpad.openstack.org/p/ironic-rocky-ptg

Please do check the rules there before proposing, and please add your attendance information at the bottom.

Finally, let me know if you can help with organizing a social event in Dublin. Thanks!

From mordred at inaugust.com Wed Jan 17 19:23:58 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 17 Jan 2018 13:23:58 -0600 Subject: [openstack-dev] [tc][masakari][api] masakari service-type, docs, api-ref and releasenotes Message-ID:

Hey everybody, I noticed today while preparing patches to projects that are using openstacksdk that masakari is not listed in service-types-authority. [0]

I pushed up a patch to fix that [1] as well as to add api-ref, docs and releasenotes jobs to the masakari repo so that each of those will be published appropriately.

As part of doing this, it came up that 'ha' is a pretty broad service-type and that perhaps it should be 'compute-ha' or 'instance-ha'.

The service-type is a unique key for identifying a service in the catalog, so the same service-type cannot be shared amongst openstack services. It is also used for api-ref publication (to https://developer.openstack.org/api-ref/{service-type} ) - and in openstacksdk as the name used for the service attribute on the Connection object. (So the service-type 'ha' would result in having conn.ha on an openstack.connection.Connection)

We do support specifying historical aliases. Since masakari has been using ha up until now, we'll need to list it in the aliases at the very least.

Do we want to change it? What should we change it to?
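To make the SDK side concrete, here's a rough sketch of what the service-type surfaces as on a Connection (illustrative only, not merged code; it assumes a current openstacksdk with clouds.yaml support and a hypothetical cloud entry named 'mycloud'):

    import openstack.connection

    # The proxy attribute on the Connection is derived from the
    # service-type registered in the catalog.
    conn = openstack.connection.Connection(cloud='mycloud')
    print(conn.ha)  # today, with service-type 'ha'
    # With service-type 'instance-ha' (dashes become underscores), the
    # attribute would instead be conn.instance_ha, with 'ha' still
    # resolvable as a historical alias:
    # print(conn.instance_ha)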
Thanks!
Monty

[0] http://git.openstack.org/cgit/openstack/service-types-authority
[1] https://review.openstack.org/#/c/534875/
[2] https://review.openstack.org/#/c/534878/

From ihrachys at redhat.com Wed Jan 17 19:26:54 2018 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 17 Jan 2018 11:26:54 -0800 Subject: [openstack-dev] [gate][devstack][neutron][qa][release] Switch to lib/neutron in gate Message-ID:

Hi all,

tl;dr I propose to switch to lib/neutron devstack library in Queens. I ask for buy-in to the plan from release and QA teams, something that infra asked me to do.

===

Last several cycles we were working on getting lib/neutron - the new in-tree devstack library to deploy neutron services - ready to deploy configurations we may need in our gates. Some pieces of the work involved can be found in: https://review.openstack.org/#/q/topic:new-neutron-devstack-in-gate

I am happy to announce that the work finally got to the point where we can consistently pass both devstack-gate and neutron gates: (devstack-gate) https://review.openstack.org/436798 (neutron) https://review.openstack.org/441579

One major difference between the old lib/neutron-legacy library and the new lib/neutron one is that service names for neutron are different. For example, q-svc is now neutron-api, q-dhcp is now neutron-dhcp, etc. (In case you wonder, this q- prefix links us back to times when Neutron was called Quantum.) The way lib/neutron is designed is that whenever a single q-* service name is present in ENABLED_SERVICES, the old lib/neutron-legacy code is triggered to deploy services. Service name changes are a large part of the work.
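To illustrate in local.conf terms (a sketch only; the q-svc and q-dhcp mappings are from this mail, while the rest of the service list is my shorthand, so check lib/neutron itself for the authoritative names):

    # old, lib/neutron-legacy service names:
    #   ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta
    # new, lib/neutron service names:
    ENABLED_SERVICES+=,neutron-api,neutron-agent,neutron-dhcp,neutron-l3,neutron-metadata

Any single remaining q-* name in ENABLED_SERVICES flips the deployment back to the legacy code path, as described above.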
The way the devstack-gate change linked above is designed is that it changes names for deployed neutron services starting from Queens (current master), so old branches and grenade jobs are not affected by the change.

While we validated the change switching to new names against both devstack-gate and neutron gates, which should cover 90% of our neutron configurations, and followed up with several projects that - we deduced - may be affected by the change, there is always a chance that some job in some project gate would fail because of it, and we would need to push a (probably rather simple) follow-up to unbreak the affected job. Due to the nature of the work, the span of impact, and the fact that infra repos are not easily gated against with Depends-On links, we may need to live with the risk.

Of course, there are several aspects of the project life involved, including QA and release delivery efforts. I was advised to reach out to both of those teams to get a buy-in to proceed with the move. If we have support for the switch now, as per Clark, infra is ready to support the switch.

Note that the effort spanned several cycles, partially due to low review velocity in several affected repos (devstack, devstack-gate), partially because new changes in all affected repos were pulling us back from the end goal. This is one of the reasons why I would like us to do the switch sooner rather than later, since chasing this moving goalpost became rather burdensome.

What are the QA and release team thoughts on the switch? Are we ready to do it in the next weeks?

Thanks for attention, Ihar

From Louie.Kwan at windriver.com Wed Jan 17 22:01:13 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Wed, 17 Jan 2018 22:01:13 +0000 Subject: [openstack-dev] [Zuul] requirements-check FAILURE Message-ID: <47EFB32CD8770A4D9590812EE28C977E961DC346@ALA-MBC.corp.ad.wrs.com>

Would like to add the following module to openstack.masakari project

https://github.com/pytransitions/transitions

Got the following error with zuul requirements-check

Requirement set([Requirement(package=u'transitions', location='', specifiers='>=0.6.4', markers=u'', comment='', extras=frozenset([]))]) not in openstack/requirements

http://logs.openstack.org/88/534888/3/check/requirements-check/edec7bf/ara/

Any tip or insight to fix it? Thanks. Louie.Kwan at windriver.com

From mtreinish at kortar.org Wed Jan 17 22:16:19 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 17 Jan 2018 17:16:19 -0500 Subject: [openstack-dev] [Zuul] requirements-check FAILURE In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E961DC346@ALA-MBC.corp.ad.wrs.com> References: <47EFB32CD8770A4D9590812EE28C977E961DC346@ALA-MBC.corp.ad.wrs.com> Message-ID: <20180117221619.GA27084@zeong.kortar.org>

On Wed, Jan 17, 2018 at 10:01:13PM +0000, Kwan, Louie wrote:
> Would like to add the following module to openstack.masakari project
>
> https://github.com/pytransitions/transitions
>
> Got the following error with zuul requirements-check
>
> Requirement set([Requirement(package=u'transitions', location='', specifiers='>=0.6.4', markers=u'', comment='', extras=frozenset([]))]) not in openstack/requirements
>
> http://logs.openstack.org/88/534888/3/check/requirements-check/edec7bf/ara/
>
> Any tip or insight to fix it?

That error is caused by the dependency you're adding not being tracked in global requirements. To add it to the masakari project you first have to add it to the openstack/requirements project.

The process for doing that is documented in:

https://docs.openstack.org/requirements/latest/

That link also explains the reasoning behind why we handle adding dependencies centrally like this.
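Concretely, the first step would be a change in the openstack/requirements repo along these lines (a sketch only; the version pin is copied from your failing check, and the license comment follows the usual global-requirements convention):

    # global-requirements.txt
    transitions>=0.6.4  # MIT

    # upper-constraints.txt
    transitions===0.6.4

Once that change merges, the requirements-check job on the masakari patch should pass.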
-Matt Treinish
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL:

From doug at doughellmann.com Wed Jan 17 23:01:10 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 17 Jan 2018 18:01:10 -0500 Subject: [openstack-dev] [Zuul] requirements-check FAILURE In-Reply-To: <20180117221619.GA27084@zeong.kortar.org> References: <47EFB32CD8770A4D9590812EE28C977E961DC346@ALA-MBC.corp.ad.wrs.com> <20180117221619.GA27084@zeong.kortar.org> Message-ID: <1516229973-sup-5662@lrrr.local>

Excerpts from Matthew Treinish's message of 2018-01-17 17:16:19 -0500:
> On Wed, Jan 17, 2018 at 10:01:13PM +0000, Kwan, Louie wrote:
> > Would like to add the following module to openstack.masakari project
> >
> > https://github.com/pytransitions/transitions
> >
> > Got the following error with zuul requirements-check
> >
> > Requirement set([Requirement(package=u'transitions', location='', specifiers='>=0.6.4', markers=u'', comment='', extras=frozenset([]))]) not in openstack/requirements
> >
> > http://logs.openstack.org/88/534888/3/check/requirements-check/edec7bf/ara/
> >
> > Any tip or insight to fix it?
>
> That error is caused by the dependency you're adding not being tracked in
> global requirements. To add it to the masakari project you first have to
> add it to the openstack/requirements project.
>
> The process for doing that is documented in:
>
> https://docs.openstack.org/requirements/latest/
>
> That link also explains the reasoning behind why we handle adding dependencies
> centrally like this.
>
> -Matt Treinish

Please take a little time to look through the list of dependencies in that repository to see if we have a finite state machine library in the list already. If so, see if you can use that one instead of adding a new dependency to the system.

Doug

From emilien at redhat.com Thu Jan 18 01:43:16 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 17 Jan 2018 17:43:16 -0800 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky In-Reply-To: References: <397BB99F-D7B2-47B3-9724-E8B628EFD5C2@cern.ch> <76c4df1e-2e82-96c6-a983-36040855a42d@gmail.com> Message-ID:

On Wed, Jan 17, 2018 at 4:04 AM, Erno Kuvaja wrote:
[...]
> Looking the current contributor base and momentum on Glance, I'd say
> we would fail to catch up with most of these. I think we've got rid of
> mox already and I'm not exactly sure how the mutable config goal
> aligns with the Glance's ability to reload configs in flight, so those
> two might be doable, based on the amount of bikeshedding needed for
> any API related change I'd say the pagination link would probably be
> least likely done before Unicorn release.
>
> - Jokke

If mox is already done, consider also that you already have the cold upgrade tag, so you wouldn't have anything to do in the cycle (if we go with these 2 goals).
--
Emilien Macchi

From glongwave at gmail.com Thu Jan 18 03:13:08 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Thu, 18 Jan 2018 11:13:08 +0800 Subject: [openstack-dev] [glance][oslo][requirements] [ironic]oslo.serialization fails with glance Message-ID:

Adding the Ironic team to the loop. The revert patch got -1 from Ironic folks; for more details please see the comments in https://review.openstack.org/534736

The possible solution is to figure out why the change breaks Glance's unit tests, and which side should be fixed.

2018-01-17 20:14 GMT+08:00 ChangBo Guo :
> I dig a little. It shows success when updating constraint to 2.21.2 [1]
> but failure when updating constraint to 2.22.0 [2].
according to release > information [3]. > It means 2.21.1 works with glance test but 2.21.2 doesn't work well with > glance. The only issue patch is https://github.com/openstack/ > oslo.serialization/commit/c1a7079c26d27a2e46cca26963d3d9aa040bdbe8. > > > [1] https://review.openstack.org/514833 > [2] https://review.openstack.org/#/c/525136 > [3] https://github.com/openstack/releases/blob/master/ > deliverables/queens/oslo.serialization.yaml > > > Actions: > > Block oslo.serialization version 2.21.2, 2.22.0, 2. 23.0 in > https://review.openstack.org/534739 > Revert c1a7079c26d27a2e46cca26963d3d9aa040bdbe8 in > https://review.openstack.org/534736 > > > > > 2018-01-16 23:35 GMT+08:00 Matthew Thode : > >> On 18-01-16 19:12:16, ChangBo Guo wrote: >> > What's the issue for Glance, any bug link ? >> > >> > 2018-01-16 0:12 GMT+08:00 Matthew Thode : >> > >> > > On 18-01-13 00:41:28, Matthew Thode wrote: >> > > > https://review.openstack.org/531788 is the review we are seeing it >> in, >> > > > but 2.22.0 failed as well. >> > > > >> > > > I'm guessing it was introduced in either >> > > > >> > > > https://github.com/openstack/oslo.serialization/commit/ >> > > c1a7079c26d27a2e46cca26963d3d9aa040bdbe8 >> > > > or >> > > > https://github.com/openstack/oslo.serialization/commit/ >> > > cdb2f60d26e3b65b6370f87b2e9864045651c117 >> > > >> > > bamp >> > > >> >> The best bug for this is >> https://bugs.launchpad.net/oslo.serialization/+bug/1728368 and we are >> currently getting test fails in https://review.openstack.org/531788 >> >> -- >> Matthew Thode (prometheanfire) >> > > > > -- > ChangBo Guo(gcb) > Community Director @EasyStack > -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Jan 18 04:41:59 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 17 Jan 2018 23:41:59 -0500 Subject: [openstack-dev] [tripleo] Ocata to Pike upgrade job is working as of today. In-Reply-To: <87wp0geb3l.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> References: <87zi5cejqi.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> <87wp0geb3l.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: On Wed, Jan 17, 2018 at 10:52 AM, Sofer Athlan-Guyot wrote: > Emilien Macchi writes: > > > Nice work Sofer! > > I have a few questions, see inline. > > > > On Wed, Jan 17, 2018 at 4:46 AM, Sofer Athlan-Guyot > wrote: > >> Hi, > >> > >> So join us (upgrade squad) to celebrate the working ocata->pike upgrade > >> job[1], without any depends-on whatsoever. > > > > w00t, really good work! > > > >> We would like it to be voting as soon as possible. It has been a > >> rather consuming task to revive that forgotten but important jobs, and > >> the only way for it to not drift into oblivion again is to have it > >> voting. > > > > The last time I asked how we could make RDO jobs voting, it was > > something in Software Factory to enable, but I'm not sure about the > > details. > > I'm ok to make them voting, as long as: > > > > - they don't timeout or reach the timeout limit (which isn't the case > > now, 2h27 is really good) > > - they proved to be stable during some time > > - they're part of the promotion pipeline so gate can't break easily > > > > BTW the process is documented here: > > https://github.com/openstack/tripleo-specs/blob/master/ > specs/policy/adding-ci-jobs.rst > > Oki reading that, thanks for the pointer. 
> >> Eventually, let’s thanks rdo-cloud people for their support (especially
> >> David Manchado), James Slagle for Traas[2] and Alfredo Moralejo for his
> >> constant availability to answer our questions.
> >
> > ++
> >
> >> Thanks,
> >>
> >> [1] https://review.openstack.org/#/c/532791/, look for «gate-tripleo-ci-centos-7-containers-multinode-upgrades-pike»
> >> [2] https://github.com/slagle/traas … the repo we use -> https://github.com/sathlan/traas (so many pull requests to make that it would be cool for it to be an openstack project … :))
> >
> > Please rebase https://github.com/slagle/traas/pull/9 and address
> > James's comments, so we can avoid using a fork.
>
> So yeah, the situation with traas is not so good as we have a 70 patches
> difference with the main repo. The pull/9 may be abandoned (jfrancoa
> will see if it’s still required) in favor of the what’s in
> https://github.com/sathlan/traas . We’re going to work toward pulling
> in slage/traas the main patches to avoid a fork.

Sofer, it's probably worth everyone's time to review and test the reproducer script provided in the logs of the successful upgrade run, both to debug and to recreate upgrade jobs.
https://logs.rdoproject.org/91/532791/6/openstack-check/gate-tripleo-ci-centos-7-containers-multinode-upgrades-pike/Z0ac437d8cf634d4a8e1eca18b6dd0061/reproducer-quickstart.sh

Thanks

> > Also, any plan to run tempest instead of pingtest?
>
> Not yet. We could cycle back to that at a later point in time, unless
> it’s a requirement. Currently I’ve added it to our backlog.
>
> A last point is that we may add ceph in the mix based on what it’s done
> in scenario 001 and 004.
>
> >
> > Thanks,
> > --
> > Emilien Macchi
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> --
> Chem
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From openstack.org at sodarock.com Thu Jan 18 05:24:12 2018 From: openstack.org at sodarock.com (John Villalovos) Date: Wed, 17 Jan 2018 21:24:12 -0800 Subject: [openstack-dev] [glance][oslo][requirements] [ironic]oslo.serialization fails with glance In-Reply-To: References: Message-ID:

I have updated the bug with info I found out: https://bugs.launchpad.net/oslo.serialization/+bug/1728368

Also I did a test patch with a proposed change: https://review.openstack.org/#/c/535166/

This patch causes the unit tests to work.

As a note, there is a deprecation warning in the current code (without my patch) saying that in the future it will raise a ValueError(), as can be seen in a recently merged patch: http://logs.openstack.org/72/533872/6/check/openstack-tox-py27/4709e32/job-output.txt.gz#_2018-01-16_13_10_38_931593

The test patch gets rid of that deprecation warning for the exceptions.
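For reference, the warning path can be reproduced with nothing project-specific at all (a minimal sketch using only oslo.serialization):

    from oslo_serialization import jsonutils

    # An object that is not JSON-serializable: older releases silently
    # converted it to a string; the change under discussion emits the
    # deprecation warning and is slated to raise ValueError later on.
    jsonutils.to_primitive(object())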
Though I did see another warning about the "Response" object: http://logs.openstack.org/66/535166/2/check/openstack-tox-py35/33d0827/job-output.txt.gz#_2018-01-18_05_13_52_603162 But that is for someone else to figure out :) On Wed, Jan 17, 2018 at 7:13 PM, ChangBo Guo wrote: > add Ironic team in the loop > > the revert patch got -1 from ironic folks , more details please see the > comments in https://review.openstack.org/534736 > The possible solution is to figure out why the change break Glance's unit > test. which side should be fixed. > > > > 2018-01-17 20:14 GMT+08:00 ChangBo Guo : > >> I dig a little. It shows success when updating constraint to 2.21.2 [1] >> but failure when updating constraint to 2.22.0 [2]. according to release >> information [3]. >> It means 2.21.1 works with glance test but 2.21.2 doesn't work well with >> glance. The only issue patch is https://github.com/openstack/o >> slo.serialization/commit/c1a7079c26d27a2e46cca26963d3d9aa040bdbe8. >> >> >> [1] https://review.openstack.org/514833 >> [2] https://review.openstack.org/#/c/525136 >> [3] https://github.com/openstack/releases/blob/master/deliverabl >> es/queens/oslo.serialization.yaml >> >> >> Actions: >> >> Block oslo.serialization version 2.21.2, 2.22.0, 2. 23.0 in >> https://review.openstack.org/534739 >> Revert c1a7079c26d27a2e46cca26963d3d9aa040bdbe8 in >> https://review.openstack.org/534736 >> >> >> >> >> 2018-01-16 23:35 GMT+08:00 Matthew Thode : >> >>> On 18-01-16 19:12:16, ChangBo Guo wrote: >>> > What's the issue for Glance, any bug link ? >>> > >>> > 2018-01-16 0:12 GMT+08:00 Matthew Thode : >>> > >>> > > On 18-01-13 00:41:28, Matthew Thode wrote: >>> > > > https://review.openstack.org/531788 is the review we are seeing >>> it in, >>> > > > but 2.22.0 failed as well. >>> > > > >>> > > > I'm guessing it was introduced in either >>> > > > >>> > > > https://github.com/openstack/oslo.serialization/commit/ >>> > > c1a7079c26d27a2e46cca26963d3d9aa040bdbe8 >>> > > > or >>> > > > https://github.com/openstack/oslo.serialization/commit/ >>> > > cdb2f60d26e3b65b6370f87b2e9864045651c117 >>> > > >>> > > bamp >>> > > >>> >>> The best bug for this is >>> https://bugs.launchpad.net/oslo.serialization/+bug/1728368 and we are >>> currently getting test fails in https://review.openstack.org/531788 >>> >>> -- >>> Matthew Thode (prometheanfire) >>> >> >> >> >> -- >> ChangBo Guo(gcb) >> Community Director @EasyStack >> > > > > -- > ChangBo Guo(gcb) > Community Director @EasyStack > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Thu Jan 18 06:18:56 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 18 Jan 2018 06:18:56 +0000 Subject: [openstack-dev] [tripleo] Ocata to Pike upgrade job is working as of today. In-Reply-To: References: <87zi5cejqi.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: <1516255651.3fia5le7l9.tristanC@fedora> On January 17, 2018 1:39 pm, Emilien Macchi wrote: [snip] > The last time I asked how we could make RDO jobs voting, it was > something in Software Factory to enable, but I'm not sure about the > details. 
It doesn't seem like there is anything to change in review.rdoproject.org; third-party CI can vote on review.openstack.org through the Gerrit configuration. For example, here is how Nova does it: http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack/nova.config#n4

The RDO account is rdothirdparty (id: 23181).

-Tristan
-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL:

From muroi.masahito at lab.ntt.co.jp Thu Jan 18 06:30:36 2018 From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI) Date: Thu, 18 Jan 2018 15:30:36 +0900 Subject: [openstack-dev] [Blazar] Rocky PTG planning Message-ID: <30ea58ac-5bbd-1880-577f-0705f54b9fc8@lab.ntt.co.jp>

Hi all, The PTG is coming next month. It's a good time to start listing the topics we want to discuss. Feel free to write down your ideas on the etherpad[1].

1. https://etherpad.openstack.org/p/blazar-ptg-rocky

best regards, Masahito

From prometheanfire at gentoo.org Thu Jan 18 06:39:08 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 18 Jan 2018 00:39:08 -0600 Subject: [openstack-dev] [glance][oslo][requirements] [ironic]oslo.serialization fails with glance In-Reply-To: References: Message-ID: <20180118063908.rixbab6xyqz72frv@gentoo.org>

On 18-01-17 21:24:12, John Villalovos wrote:
> I have updated the bug with info I found out:
> https://bugs.launchpad.net/oslo.serialization/+bug/1728368
>
> Also I did a test patch with a proposed change:
> https://review.openstack.org/#/c/535166/
>
> This patch causes the unit tests to work.
>
> As a note there is a deprecation warning in the current code (without my
> patch) that says in the future it will raise a ValueError() as can be seen
> in a recently merged patch:
> http://logs.openstack.org/72/533872/6/check/openstack-tox-py27/4709e32/job-output.txt.gz#_2018-01-16_13_10_38_931593
>
> The test patch gets rid of that deprecation warning for the exceptions.
>
> Though I did see another warning about the "Response" object:
> http://logs.openstack.org/66/535166/2/check/openstack-tox-py35/33d0827/job-output.txt.gz#_2018-01-18_05_13_52_603162
>
> But that is for someone else to figure out :)
>
> On Wed, Jan 17, 2018 at 7:13 PM, ChangBo Guo wrote:
>
> > add Ironic team in the loop
> >
> > the revert patch got -1 from ironic folks , more details please see the
> > comments in https://review.openstack.org/534736
> > The possible solution is to figure out why the change break Glance's unit
> > test. which side should be fixed.
> >
> > 2018-01-17 20:14 GMT+08:00 ChangBo Guo :
> >
> >> I dig a little. It shows success when updating constraint to 2.21.2 [1]
> >> but failure when updating constraint to 2.22.0 [2]. according to release
> >> information [3].
> >> It means 2.21.1 works with glance test but 2.21.2 doesn't work well with
> >> glance. The only issue patch is https://github.com/openstack/oslo.serialization/commit/c1a7079c26d27a2e46cca26963d3d9aa040bdbe8.
> >>
> >> [1] https://review.openstack.org/514833
> >> [2] https://review.openstack.org/#/c/525136
> >> [3] https://github.com/openstack/releases/blob/master/deliverables/queens/oslo.serialization.yaml
> >>
> >> Actions:
> >>
> >> Block oslo.serialization version 2.21.2, 2.22.0, 2.
23.0 in > >> https://review.openstack.org/534739 > >> Revert c1a7079c26d27a2e46cca26963d3d9aa040bdbe8 in > >> https://review.openstack.org/534736 > >> > >> > >> > >> > >> 2018-01-16 23:35 GMT+08:00 Matthew Thode : > >> > >>> On 18-01-16 19:12:16, ChangBo Guo wrote: > >>> > What's the issue for Glance, any bug link ? > >>> > > >>> > 2018-01-16 0:12 GMT+08:00 Matthew Thode : > >>> > > >>> > > On 18-01-13 00:41:28, Matthew Thode wrote: > >>> > > > https://review.openstack.org/531788 is the review we are seeing > >>> it in, > >>> > > > but 2.22.0 failed as well. > >>> > > > > >>> > > > I'm guessing it was introduced in either > >>> > > > > >>> > > > https://github.com/openstack/oslo.serialization/commit/ > >>> > > c1a7079c26d27a2e46cca26963d3d9aa040bdbe8 > >>> > > > or > >>> > > > https://github.com/openstack/oslo.serialization/commit/ > >>> > > cdb2f60d26e3b65b6370f87b2e9864045651c117 > >>> > > > >>> > > bamp > >>> > > > >>> > >>> The best bug for this is > >>> https://bugs.launchpad.net/oslo.serialization/+bug/1728368 and we are > >>> currently getting test fails in https://review.openstack.org/531788 > >>> Thanks for that, will keep an eye on it. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From coolsvap at gmail.com Thu Jan 18 07:00:24 2018 From: coolsvap at gmail.com (Swapnil Kulkarni) Date: Thu, 18 Jan 2018 12:30:24 +0530 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky -- privsep In-Reply-To: References: Message-ID: On Thu, Jan 11, 2018 at 7:50 PM, Thierry Carrez wrote: > Emilien Macchi wrote: >> [...] >> Thierry mentioned privsep migration (done in Nova and Zun). (action, >> ping mikal about it). > > It's not "done" in Nova: Mikal planned to migrate all of nova-compute > (arguably the largest service using rootwrap) to privsep during Queens, > but AFAICT it's still work in progress. > > Other projects like cinder and neutron are using it. > > If support in Nova is almost there, it would make a great Queens goal to > get rid of the last rootwrap leftovers and deprecate it. > > Mikal: could you give us a quick update of where you are ? > Anyone interested in championing that as a goal? > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev I am interested in championing this as a goal for the next release. -- Best Regards, Swapnil irc : coolsvap From aj at suse.com Thu Jan 18 07:42:09 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 18 Jan 2018 08:42:09 +0100 Subject: [openstack-dev] [tc][masakari][api] masakari service-type, docs, api-ref and releasenotes In-Reply-To: References: Message-ID: On 2018-01-17 20:23, Monty Taylor wrote: > Hey everybody, > > I noticed today while preparing patches to projects that are using > openstacksdk that masakari is not listed in service-types-authority. [0] > > I pushed up a patch to fix that [1] as well as to add api-ref, docs and > releasenotes jobs to the masakari repo so that each of those will be > published appropriately. > > As part of doing this, it came up that 'ha' is a pretty broad > service-type and that perhaps it should be 'compute-ha' or 'instance-ha'. 
> > The service-type is a unique key for identifying a service in the > catalog, so the same service-type cannot be shared amongst openstack > services. It is also used for api-ref publication (to > https://developer.openstack.org/api-ref/{service-type} ) - and in > openstacksdk as the name used for the service attribute on the > Connection object. (So the service-type 'ha' would result in having > conn.ha on an openstack.connection.Connection) > > We do support specifying historical aliases. Since masakari has been > using ha up until now, we'll need to list is in the aliases at the very > least. > > Do we want to change it? What should we change it to? Yes, I would change it - instance-ha sounds best seeing that the governance file has: " service: Instances High Availability Service" Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From aj at suse.com Thu Jan 18 07:43:25 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 18 Jan 2018 08:43:25 +0100 Subject: [openstack-dev] [Zuul] requirements-check FAILURE In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E961DC346@ALA-MBC.corp.ad.wrs.com> References: <47EFB32CD8770A4D9590812EE28C977E961DC346@ALA-MBC.corp.ad.wrs.com> Message-ID: <12ddcf60-7f35-efa1-626f-5b36e3c7b527@suse.com> On 2018-01-17 23:01, Kwan, Louie wrote: > Would like to add the following module to openstack.masakari project > > https://github.com/pytransitions/transitions > > Got the following error with zuul requirements-check > > Requirement set([Requirement(package=u'transitions', location='', specifiers='>=0.6.4', markers=u'', comment='', extras=frozenset([]))]) not in openstack/requirements > > http://logs.openstack.org/88/534888/3/check/requirements-check/edec7bf/ara/ > > Any tip or insight to fix it? Yes, read on how to add it: https://docs.openstack.org/requirements/latest/ Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From saverio.proto at switch.ch Thu Jan 18 08:06:20 2018 From: saverio.proto at switch.ch (Saverio Proto) Date: Thu, 18 Jan 2018 09:06:20 +0100 Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> Message-ID: <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> Hello Sean, after the brief chat we had on IRC, do you think I should open a bug about this issue ? thank you Saverio On 13.01.18 07:06, Saverio Proto wrote: >> I don't think this is a configuration problem. >> >> Which version of the oslo.log library do you have installed? 
> > Hello Doug,
> > I use the Ubuntu packages, at the moment I have this version:
> > python-oslo.log 3.16.0-0ubuntu1~cloud0
> > thank you
> > Saverio
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
--
SWITCH Saverio Proto, Peta Solutions Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 15, direct +41 44 268 1573 saverio.proto at switch.ch, http://www.switch.ch http://www.switch.ch/stories

From sam47priya at gmail.com Thu Jan 18 08:11:51 2018 From: sam47priya at gmail.com (Sam P) Date: Thu, 18 Jan 2018 17:11:51 +0900 Subject: [openstack-dev] [tc][masakari][api] masakari service-type, docs, api-ref and releasenotes In-Reply-To: References: Message-ID:

Hi Monty, Thanks for the patches. Agree that 'ha' is a pretty broad service-type for Masakari. compute-ha and instance-ha are both OK; as Andreas proposed, I would like to change it to instance-ha. If there are no objections, I will fix the python-masakariclient, devstack plugin, etc. on the masakari side.

--- Regards, Sampath

On Thu, Jan 18, 2018 at 4:42 PM, Andreas Jaeger wrote:
> On 2018-01-17 20:23, Monty Taylor wrote:
>> Hey everybody,
>>
>> I noticed today while preparing patches to projects that are using
>> openstacksdk that masakari is not listed in service-types-authority. [0]
>>
>> I pushed up a patch to fix that [1] as well as to add api-ref, docs and
>> releasenotes jobs to the masakari repo so that each of those will be
>> published appropriately.
>>
>> As part of doing this, it came up that 'ha' is a pretty broad
>> service-type and that perhaps it should be 'compute-ha' or 'instance-ha'.
>>
>> The service-type is a unique key for identifying a service in the
>> catalog, so the same service-type cannot be shared amongst openstack
>> services. It is also used for api-ref publication (to
>> https://developer.openstack.org/api-ref/{service-type} ) - and in
>> openstacksdk as the name used for the service attribute on the
>> Connection object. (So the service-type 'ha' would result in having
>> conn.ha on an openstack.connection.Connection)
>>
>> We do support specifying historical aliases. Since masakari has been
>> using ha up until now, we'll need to list is in the aliases at the very
>> least.
>>
>> Do we want to change it? What should we change it to?
>
> Yes, I would change it - instance-ha sounds best seeing that the
> governance file has:
> " service: Instances High Availability Service"
>
> Andreas
> --
> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
>

From lijie at unitedstack.com Thu Jan 18 08:31:11 2018 From: lijie at unitedstack.com (李杰) Date: Thu, 18 Jan 2018 16:31:11 +0800 Subject: [openstack-dev] [nova] about rebuild instance booted from volume Message-ID:

Hi, all

This is the spec about rebuilding an instance booted from volume; anyone who is interested in boot-from-volume can help to review it. Any suggestion is welcome. The link is here.

Re: the rebuild spec: https://review.openstack.org/#/c/532407/

Best Regards
Lijie
-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From jpena at redhat.com Thu Jan 18 09:03:42 2018 From: jpena at redhat.com (Javier Pena) Date: Thu, 18 Jan 2018 04:03:42 -0500 (EST) Subject: [openstack-dev] [tripleo] Ocata to Pike upgrade job is working as of today. In-Reply-To: <87zi5cejqi.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> References: <87zi5cejqi.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: <426711021.761649.1516266222504.JavaMail.zimbra@redhat.com>

----- Original Message -----
> Hi,
>
> So join us (upgrade squad) to celebrate the working ocata->pike upgrade
> job[1], without any depends-on whatsoever.
>
> We would like it to be voting as soon as possible. It has been a
> rather consuming task to revive that forgotten but important jobs, and
> the only way for it to not drift into oblivion again is to have it
> voting.
>

I see the job is actually voting (it should have a -nv suffix to be non-voting).

Regards, Javier

> Eventually, let’s thanks rdo-cloud people for their support (especially
> David Manchado), James Slagle for Traas[2] and Alfredo Moralejo for his
> constant availability to answer our questions.
>
> Thanks,
>
> [1] https://review.openstack.org/#/c/532791/, look for
> «gate-tripleo-ci-centos-7-containers-multinode-upgrades-pike»
> [2] https://github.com/slagle/traas … the repo we use ->
> https://github.com/sathlan/traas (so many pull requests to make that it
> would be cool for it to be an openstack project … :))
> --
> Sofer Athlan
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From lhinds at redhat.com Thu Jan 18 10:05:19 2018 From: lhinds at redhat.com (Luke Hinds) Date: Thu, 18 Jan 2018 10:05:19 +0000 Subject: [openstack-dev] [security] Won't be able to make today's meeting Message-ID:

Hello, I won't be able to attend the security project meeting today, and as there are no hot topics I suggest we postpone until next week (if there are, then feel free to #startmeeting and I will catch up tomorrow through meetbot logs).

Cheers, Luke
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From tommylikehu at gmail.com Thu Jan 18 10:07:22 2018 From: tommylikehu at gmail.com (TommyLike Hu) Date: Thu, 18 Jan 2018 10:07:22 +0000 Subject: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url Message-ID:

Hey all, Recently we found an issue related to our OpenStack action APIs. We usually expose our OpenStack APIs by registering them to our API gateway (for instance Kong [1]), but it becomes very difficult when it comes to action APIs. We cannot register and control them separately, because they all share the same request URL, which is used as the identity in the gateway service, not to mention rate limiting and other advanced gateway features. Take a look at the basic resources in OpenStack:

1. *Server*: "/servers/{server_id}/action" 35+ APIs are included.
2. *Volume*: "/volumes/{volume_id}/action" 14 APIs are included.
3. Other resources.

We have tried to register different interfaces with the same upstream URL, such as:

* api gateway*: /version/resource_one/action/action1 =>* upstream*: /version/resource_one/action
* api gateway*: /version/resource_one/action/action2 =>* upstream*: /version/resource_one/action

But it's not secure enough, because we can still pass action2 in the request body while invoking /action/action1. Also, reading the full request body for routing is not supported by most API gateways (or their plugins) and would have a performance impact when proxying.

So my question is: do we have any solution or suggestion for this case? Could we support specifying the action name both in the request body and in the URL, such as:

*URL:*/volumes/{volume_id}/action *BODY:*{'extend':{}}

and:

*URL:*/volumes/{volume_id}/action/extend *BODY:* {'extend':{}}
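To make the proposal concrete, here is a rough sketch (hypothetical names, not an actual Cinder or Nova patch) of how an action controller could accept the action name from the URL while still validating the body:

    # Sketch only -- illustrative, assuming a webob-based WSGI controller.
    from webob import exc

    def action(req, volume_id, body, action_name=None):
        """Dispatch /volumes/{id}/action and /volumes/{id}/action/{action_name}."""
        body_action = next(iter(body))  # e.g. 'extend' from {'extend': {...}}
        if action_name is not None and action_name != body_action:
            # The URL names one action but the body names another: reject it.
            raise exc.HTTPBadRequest(explanation='action name mismatch')
        return body_action, body[body_action]

With something like that, a gateway could route and rate-limit on the /action/{action_name} form while the backend guarantees the body cannot smuggle in a different action.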
Thanks
Tommy

[1]: https://github.com/Kong/kong
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From thierry at openstack.org Thu Jan 18 10:20:21 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 18 Jan 2018 11:20:21 +0100 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule Message-ID: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org>

Hi everyone,

Here is the proposed pre-allocated track schedule for the Dublin PTG: https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true

The proposed allocation takes into account the estimated group size and number of days that were communicated to Kendall and me by the team PTLs. We'd like to publish this schedule on the event website ASAP, so please check that it still matches your needs (number of days, room size vs. expected attendance) and does not create too many conflicts. There are lots of constraints, so we can't promise we'll accommodate your remarks, but we'll do our best.

If your team is not listed, that's probably because you haven't confirmed that your team intends to meet at the PTG yet. Let us know ASAP if the situation changed, and we'll see if we can add extra space to host you.

You'll also notice that some teams (in orange below the table in above link) do not have pre-allocated slots. One key difference this time around is that we set aside a larger number of rooms and meeting spots for dynamically scheduling tracks. The idea is to avoid pre-allocating smaller tracks to a specific time slot that might or might not create conflicts, and let that team book a space at a time that makes the most sense for them, while the event happens. This dynamic booking will be done through the PTGbot. So the unscheduled teams (in orange) are expected to take advantage of this flexibility and schedule themselves during the event.

This flexibility is not limited to those orange teams: other teams may want to meet for more than their pre-allocated time slots, and can book extra space as well. For example if you are on the First Contact SIG and realize on Tuesday afternoon that you would like to continue the discussions on Friday morning, it's easy to extend your track to a time slot there.

Note that this system also replaces the old ethercalc-scheduled "reservable rooms". If a group of people forms around a specific issue and wants to discuss it, they can ask for their own additional "track" and schedule it dynamically as well.
Finally, you'll notice we have extra space set aside on Monday-Tuesday to discuss "hot" last-minute cross-project issues -- if you have ideas of topics that we need to discuss in-person, please let us know.

--
Thierry Carrez (ttx)

From andrea.frittoli at gmail.com Thu Jan 18 10:28:42 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Thu, 18 Jan 2018 10:28:42 +0000 Subject: [openstack-dev] [QA] [all] QA Rocky PTG Planning Message-ID:

Dear all,

I started the etherpad for planning the QA work in Dublin. Please add your ideas / proposals for sessions and your intention of attending. We have a room for the QA team for three full days Wed-Fri.

This time I also included a "Request for Sessions" - if anyone would like to discuss a QA related topic with the QA team or do a hands-on / sprint on something, feel free to add it in there. We can also handle them in a less structured format during the PTG - but if there are a few requests on similar topics we can set up a session on Mon/Tue for everyone interested to attend.

Andrea Frittoli (andreaf)
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From andrea.frittoli at gmail.com Thu Jan 18 10:32:45 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Thu, 18 Jan 2018 10:32:45 +0000 Subject: [openstack-dev] [QA] [all] QA Rocky PTG Planning In-Reply-To: References: Message-ID:

and the link [1]

[1] https://etherpad.openstack.org/p/qa-rocky-ptg

On Thu, Jan 18, 2018 at 10:28 AM Andrea Frittoli wrote:
> Dear all,
>
> I started the etherpad for planning the QA work in Dublin.
> Please add your ideas / proposals for sessions and your intention of attending.
> We have a room for the QA team for three full days Wed-Fri.
>
> This time I also included a "Request for Sessions" - if anyone would like
> to discuss a QA related topic with the QA team or do a hands-on / sprint on
> something, feel free to add it in there. We can also handle them in a less
> structured format during the PTG - but if there are a few requests on
> similar topics we can set up a session on Mon/Tue for everyone interested to
> attend.
>
> Andrea Frittoli (andreaf)
>
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From rnoriega at redhat.com Thu Jan 18 11:18:27 2018 From: rnoriega at redhat.com (Ricardo Noriega De Soto) Date: Thu, 18 Jan 2018 12:18:27 +0100 Subject: [openstack-dev] [neutron][l2gw] Preventing DB out-of-sync In-Reply-To: References: Message-ID:

Just for awareness on this subject. Peng has proposed an initial patch to tackle this issue:

https://review.openstack.org/#/c/529009/6

On Tue, Dec 12, 2017 at 11:20 AM, Ricardo Noriega De Soto <rnoriega at redhat.com> wrote:
> Peng, I think you are right. We should have a common behavior among the
> drivers, and move the implementation to the proper methods like
> post-commits, do the validation on the pre-commits, etc, etc.
>
> Second phase to tackle the out-of-sync could be the "revision number"
> approach from networking-ovn.
>
> Cheers
>
> On Mon, Dec 11, 2017 at 4:32 PM, Peng Liu wrote:
>> Hi Michael,
>>
>> Yes, it's a similar issue but different aspect. Actually, the case for
>> l2gw is worse, considering we have to deal with 2 existing back-end driver
>> which have different understanding for the interfaces. But I think the
>> proposed approach for networking-ovn is inspiring and helpful for l2gw.
>> >> Thanks, >> >> On Fri, Dec 8, 2017 at 11:59 PM, Michael Bayer wrote: >> >>> On Wed, Dec 6, 2017 at 3:46 AM, Peng Liu wrote: >>> > Hi, >>> > >>> > During working on this patch[0], I encounter some DB out-of-sync >>> problem. I >>> > think maybe the design can be improved. Here is my thought, all >>> comments are >>> > welcome. >>> >>> >>> see also https://review.openstack.org/#/c/490834/ which I think is >>> dealing with a similar (if not the same) issue. >>> >>> > >>> > In plugin code, I found all the resource operations follow the pattern >>> in >>> > [1]. It is a very misleading design compare to [2]. >>> > >>> > "For every action that can be taken on a resource, the mechanism driver >>> > exposes two methods - ACTION_RESOURCE_precommit, which is called >>> within the >>> > database transaction context, and ACTION_RESOURCE_postcommit, called >>> after >>> > the database transaction is complete." >>> > >>> > In result, if we focus on the out-of-sync between plugin DB and >>> > driver/backend DB: >>> > >>> > 1) In RPC driver, only methods Action_Resource and are implemented. >>> Which >>> > means the action is token before it was written in plugin DB. In case >>> of >>> > action partial succeed (especially for update case) or plugin DB >>> operation >>> > failure, it will cause DB out-of-sync. >>> > 2) In ODL driver, only methods Action_Resource_postcommit are >>> implemented, >>> > which means there is no validation in ODL level before the record is >>> written >>> > in the plugin DB. In case of, ODL side failure, there is no rollback >>> for >>> > plugin DB. >>> > >>> > So, to fix this issue is costly. Both plugin and driver sides need to >>> be >>> > altered. >>> > >>> > The other side of this issue is a period db monitor mechanism between >>> plugin >>> > and drivers, and it is another story. >>> > >>> > >>> > [0] https://review.openstack.org/#/c/516256 >>> > [1] >>> > ... >>> > def Action_Resource >>> > self.validate_Resource_for_Action >>> > self.driver.Action_Resource >>> > with context.session.begin(subtransactions=True): >>> > super.Action_Resource >>> > self.driver.Action_Resource_precommit >>> > try: >>> > self.driver.Action_Resource_postcommit >>> > ... >>> > [2] https://wiki.openstack.org/wiki/Neutron/ML2 >>> > >>> > -- >>> > Peng Liu >>> > >>> > ____________________________________________________________ >>> ______________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> -- >> Peng Liu | Senior Software Engineer >> >> Tel: +86 10 62608046 (direct) >> Mobile: +86 13801193245 <+86%20138%200119%203245> >> >> Red Hat Software (Beijing) Co., Ltd. 
>> 9/F, North Tower C,
>> Raycom Infotech Park,
>> No.2 Kexueyuan Nanlu, Haidian District,
>> Beijing, China, POC 100190
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> Ricardo Noriega
>
> Senior Software Engineer - NFV Partner Engineer | Office of Technology |
> Red Hat
> irc: rnoriega @freenode
>

--
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology | Red Hat
irc: rnoriega @freenode
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sean at dague.net Thu Jan 18 11:37:28 2018 From: sean at dague.net (Sean Dague) Date: Thu, 18 Jan 2018 06:37:28 -0500 Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> Message-ID:

A bug would be fine. I'm not sure how many people are keeping an eye on oslo.log bugs at this point, so be realistic about when it might get looked at.

On 01/18/2018 03:06 AM, Saverio Proto wrote:
> Hello Sean,
> after the brief chat we had on IRC, do you think I should open a bug
> about this issue ?
>
> thank you
>
> Saverio

--
Sean Dague
http://dague.net

From saverio.proto at switch.ch Thu Jan 18 13:49:21 2018 From: saverio.proto at switch.ch (Saverio Proto) Date: Thu, 18 Jan 2018 14:49:21 +0100 Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> Message-ID:

Hello all, well, this oslo.log library looks like a core thing that is used by multiple projects. I feel scared hearing that bugs opened on that project are probably just ignored.

Should I reach out to the current PTL of Oslo?
https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2580

ChangBo Guo, are you reading this thread? Do you think this is a bug or a missing feature? And moreover, is really nobody looking at these oslo.log bugs?

thanks

Saverio

On 18.01.18 12:37, Sean Dague wrote:
> A bug would be fine. I'm not sure how many people are keeping an eye on
> oslo.log bugs at this point, so be realistic about when it might get looked at.
> > On 01/18/2018 03:06 AM, Saverio Proto wrote: >> Hello Sean, >> after the brief chat we had on IRC, do you think I should open a bug >> about this issue ? >> >> thank you >> >> Saverio >> >> >> On 13.01.18 07:06, Saverio Proto wrote: >>>> I don't think this is a configuration problem. >>>> >>>> Which version of the oslo.log library do you have installed? >>> >>> Hello Doug, >>> >>> I use the Ubuntu packages, at the moment I have this version: >>> >>> python-oslo.log 3.16.0-0ubuntu1~cloud0 >>> >>> thank you >>> >>> Saverio >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> > > -- SWITCH Saverio Proto, Peta Solutions Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 15, direct +41 44 268 1573 saverio.proto at switch.ch, http://www.switch.ch http://www.switch.ch/stories From aj at suse.com Thu Jan 18 14:07:40 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 18 Jan 2018 15:07:40 +0100 Subject: [openstack-dev] [craton] Retirement of craton and python-cratonclient Message-ID: Craton development has been frozen since around March 2017. I've discussed this with Sulochan and will start the retirement process now, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From rodrigodsousa at gmail.com Thu Jan 18 14:25:03 2018 From: rodrigodsousa at gmail.com (Rodrigo Duarte) Date: Thu, 18 Jan 2018 11:25:03 -0300 Subject: [openstack-dev] [keystone] adding Gage Hugo to keystone core In-Reply-To: References: Message-ID: Congrats, Gage. Well deserved! On Tue, Jan 16, 2018 at 6:16 PM, Harry Rybacki wrote: > +100 -- congratulations, Gage! > > > On Tue, Jan 16, 2018 at 2:24 PM, Raildo Mascena de Sousa Filho < > rmascena at redhat.com> wrote: > >> +1 >> >> Congrats Gage, very well deserved! >> >> Cheers, >> >> On Tue, Jan 16, 2018 at 4:02 PM Lance Bragstad >> wrote: >> >>> Hey folks, >>> >>> In today's keystone meeting we made the announcement to add Gage Hugo >>> (gagehugo) as a keystone core reviewer [0]! Gage has been actively >>> involved in keystone over the last several cycles. Not only does he >>> provide thorough reviews, but he's really stepped up to help move the >>> project forward by keeping a handle on bugs, fielding questions in the >>> channel, and being diligent about documentation (especially during >>> in-person meet ups). >>> >>> Thanks for all the hard work, Gage! >>> >>> [0] >>> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-01-16-18.00.log.html >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> -- >> >> Raildo mascena >> >> Software Engineer, Identity Management >> >> Red Hat >> >> >> >> TRIED. TESTED. TRUSTED.
>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Rodrigo http://rodrigods.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From oidgar at redhat.com Thu Jan 18 14:34:22 2018 From: oidgar at redhat.com (Or Idgar) Date: Thu, 18 Jan 2018 16:34:22 +0200 Subject: [openstack-dev] Many timeouts in zuul gates for TripleO Message-ID: Hi, we're encountering many timeouts for zuul gates in TripleO. For example, see http://logs.openstack.org/95/508195/28/check-tripleo/tripleo-ci-centos-7-ovb-ha-oooq/c85fcb7/. rechecks won't help, and sometimes a specific gate ends successfully and sometimes not. The problem is that after a recheck it's not always the same gate that fails. Is there someone who has access to the server load to see what causes this? Alternatively, is there something we can do in order to reduce the running time for each gate? Thanks in advance. -- Best regards, Or Idgar -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Jan 18 14:39:15 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 18 Jan 2018 14:39:15 +0000 Subject: [openstack-dev] [security] Won't be able to make today's meeting In-Reply-To: References: Message-ID: <20180118143914.xkdrsbhfaff54q4f@yuggoth.org> On 2018-01-18 10:05:19 +0000 (+0000), Luke Hinds wrote: > I won't be able to attend the security project meeting today, and as there > are no hot topics I suggest we postpone until next week (if there are, then > feel free to #startmeeting and I will catch up tomorrow through meetbot > logs). Sounds fine to me, I didn't have anything new to bring up (also, I'll be indisposed/travelling next week and unable to attend then as well, just FYI, though other members of the VMT will likely be around some if we need to be reached on an urgent matter). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gr at ham.ie Thu Jan 18 14:46:44 2018 From: gr at ham.ie (Graham Hayes) Date: Thu, 18 Jan 2018 14:46:44 +0000 Subject: [openstack-dev] [designate] Meeting Next Week + Other Highlights Message-ID: Hi All, I am going to be on vacation next week, and will not be able to run the weekly IRC meeting. We can either skip it, or if someone steps up to run it we can go ahead. I will be around enough to do the q3 release. Also a reminder that the etherpad for planning the PTG in Dublin is available [1]. We have 2 full days + I am sure we can find a free room / hallway for the Friday if we run over, so please fill it with ideas. Thanks! - Graham 1 - https://etherpad.openstack.org/p/DUB-designate-PTG-planning -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From gr at ham.ie Thu Jan 18 14:48:30 2018 From: gr at ham.ie (Graham Hayes) Date: Thu, 18 Jan 2018 14:48:30 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: <13089710.Tc0ZGb15F4@whitebase.usersys.redhat.com> References: <13089710.Tc0ZGb15F4@whitebase.usersys.redhat.com> Message-ID: On 12/01/18 09:49, Luigi Toscano wrote: > On Thursday, 11 January 2018 23:52:00 CET Matt Riedemann wrote: >> On 1/11/2018 10:36 AM, Colleen Murphy wrote: >>> 1) All trademark-related tests should go in the tempest repo, in >>> accordance >>> >>> with the original resolution. This would mean that even projects that >>> have >>> never had tests in tempest would now have to add at least some of >>> their >>> black-box tests to tempest. >>> >>> The value of this option is that centralizes tests used for the Interop >>> program in a location where interop-minded folks from the QA team can >>> control them. The downside is that projects that so far have avoided >>> having a dependency on tempest will now lose some control over the >>> black-box tests that they use for functional and integration that would >>> now also be used for trademark certification. >>> There's also concern for the review bandwidth of the QA team - we can't >>> expect the QA team to be continually responsible for an ever-growing list >>> of projects and their trademark tests. >> >> How many tests are we talking about for designate and heat? Half a >> dozen? A dozen? More? >> >> If it's just a couple of tests per project it doesn't seem terrible to >> have them live in Tempest so you get the "interop eye" on reviews, as >> noted in your email. If it's a considerable amount, then option 2 seems >> the best for the majority of parties. > > I would argue that it does not scale; what if some test is taken out from the > interoperability, and others are added? It would mean moving tests from one > repository to another, with change of paths. I think that the solution 2, > where the repository where a test belong and the functionality of a test are > not linked, is better. > > Ciao > How is this different from the 6 programs already in tempest? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From sean.mcginnis at gmx.com Thu Jan 18 14:56:55 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 18 Jan 2018 08:56:55 -0600 Subject: [openstack-dev] [release] Release countdown for week R-5, January 20 - 26 Message-ID: <20180118145654.GA26416@sm-xps> Development Focus ----------------- Teams should be focused on implementing planned work. Work should be wrapping up on client libraries to meet the client lib deadline Thursday, the 25th. General Information ------------------- Next Thursday is the final client library release. Releases will only be allowed for critical fixes in libraries after this point as we stabilize requirements and give time for any unforeseen impacts from lib changes to trickle through. When requesting these library releases, you should also include the stable branching request with the review (as an example, see the "branches" section here: http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike/os-brick.yaml#n2) Speaking of branching... 
for projects following the cycle-with-milestones release model, please check membership of your $project-release group. This group should be limited to those aware of the final release activity for the project to make sure only important things are allowed to be backported into the stable/queens branch leading up to the final release. For new projects this cycle, you may need to request the infra team create this group for you. Upcoming Deadlines & Dates -------------------------- Final client library release deadline: January 25 Queens-3 Milestone: January 25 Start of Rocky PTL nominations: January 29 Start of Rocky PTL election: February 7 OpenStack Summit Vancouver CFP deadline: February 8 Rocky PTG in Dublin: Week of February 26, 2018 -- Sean McGinnis (smcginnis) From doug at doughellmann.com Thu Jan 18 15:15:16 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 18 Jan 2018 10:15:16 -0500 Subject: [openstack-dev] [cliff][osc][barbican][oslo][sdk][all] avoiding option name conflicts with cliff and OSC plugins Message-ID: <1516287510-sup-5069@lrrr.local> We've been working this week to resolve an issue between cliff and barbicanclient due to a collision in the option namespace [0]. Barbicanclient was using the -s option, and then cliff's lister command base class added that option as an alias for --sort-columns. The first attempt at resolving the conflict was to set the conflict handler in argparse to 'resolve' [1]. Several reviewers pointed out that this would have the unwanted side-effect of making some OSC commands support -s as an alias for --sort-columns while the barbicanclient commands would use it for a different purpose. For now we have removed the -s alias from within cliff. However, we want to avoid this problem coming up in the future, so we want a better solution. The OSC project has a policy that its command plugins do not use short options (single letter). There are relatively few of them, and it's easy to introduce collisions. Therefore, they are seen as reserved for more common "global" options such as those provided by the base classes in OSC and cliff. I propose that for Rocky we update cliff to change the way options are registered so that conflicting options added by command subclasses are ignored. This would effectively let cliff "own" the short option namespace, and require command classes to use long option names. The implementation details need to be worked out a bit, but I think we can do that by subclassing ArgumentParser and adding a new conflict handler method similar to the existing _handle_conflict_error() and _handle_conflict_resolve(). This is going to introduce backwards-incompatible changes in the commands derived from cliff, so before we take any action I wanted to solicit input on the plan. Please let me know what you think, Doug [0] https://bugs.launchpad.net/python-barbicanclient/+bug/1743578 [1] https://docs.python.org/3.5/library/argparse.html#conflict-handler
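P.S. To make the proposed parser change a bit more concrete, here is a rough sketch of the direction. This is untested, and the 'ignore' handler name and its exact cleanup behavior are placeholders for whatever we settle on in review, not an existing argparse or cliff API -- the only real contract used here is argparse's conflict_handler lookup:

    import argparse
    import warnings


    class _ArgumentParser(argparse.ArgumentParser):

        def __init__(self, *args, **kwargs):
            # argparse resolves conflicts by calling the method named
            # _handle_conflict_<conflict_handler>, so setting the value
            # to 'ignore' dispatches to the handler defined below.
            kwargs.setdefault('conflict_handler', 'ignore')
            super(_ArgumentParser, self).__init__(*args, **kwargs)

        def _handle_conflict_ignore(self, action, conflicting_actions):
            # Keep the option registered first (by the cliff/OSC base
            # classes) and strip the conflicting option string from the
            # action the command subclass is trying to add, warning so
            # plugin authors notice the collision.
            for option_string, _existing in conflicting_actions:
                warnings.warn(
                    'Ignoring option string %s: it conflicts with an '
                    'option defined by a base class' % option_string)
                if option_string in action.option_strings:
                    action.option_strings.remove(option_string)

The cliff command base classes would then build their parsers with this subclass instead of argparse.ArgumentParser directly, which is what would make the behavior change visible (and backwards-incompatible) for command plugins.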
From gr at ham.ie Thu Jan 18 15:33:12 2018 From: gr at ham.ie (Graham Hayes) Date: Thu, 18 Jan 2018 15:33:12 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: Message-ID: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> On 11/01/18 16:36, Colleen Murphy wrote: > Hi everyone, > > We have a governance review under debate[1] that we need the community's help on. > The debate is over what recommendation the TC should make to the Interop team > on where the tests it uses for the OpenStack trademark program should be > located, specifically those for the new add-on program being introduced. Let me > badly summarize: > > A couple of years ago we issued a resolution[2] officially recommending that > the Interop team use solely tempest as its source of tests for capability > verification. The Interop team has always had the view that the developers, > being the people closest to the project they're creating, are the best people > to write tests verifying correct functionality, and so the Interop team doesn't > maintain its own test suite, instead selecting tests from those written in > coordination between the QA team and the other project teams. These tests are > used to validate clouds applying for the OpenStack Powered tag, and since all > of the projects included in the OpenStack Powered program already had tests in > tempest, this was a natural fit. When we consider adding new trademark programs > comprising other projects, the test source is less obvious. Two examples are > designate, which has never had tests in the tempest repo, and heat, which > recently had its tests removed from the tempest repo. > > > As I'm not deeply steeped in the history of either the Interop or QA teams I am > sure I've misrepresented some details here, I'm sorry about that. But we'd like > to get this resolution moving forward and we're currently stuck, so this thread > is intended to gather enough community input to get unstuck and avoid letting > this proposal become stale. Please respond to this thread or comment on the > resolution proposal[1] if you have any thoughts. > > Colleen > > [1] https://review.openstack.org/#/c/521602 > [2] https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html > [3] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html > I had hoped for more of a discussion on this before I jumped back into this debate - but it seems to be stalled still, so here it goes. I proposed this initially as we were unclear on where the tests should go - we had a resolution that said all tests go into openstack/tempest (with a list of reasons why), and the guidance and discussion that has been had in various summits was that "add-ons" should stay in plugins. So right now, we (by the governance rules) should be pushing tests to tempest for the new programs. In the resolution that placed the tests in tempest there were a few reasons proposed: For example, API and behavioral changes must be carefully managed, as must mundane aspects such as test and module naming and location within the test suite. Even changes that leave tests functionally equivalent may cause unexpected consequences for their use in DefCore processes and validation. Placing the tests in a central repository will make it easier to maintain consistency and avoid breaking the trademark enforcement tool. This still applies, and even more so for teams that traditionally do not have a strong QE contributor / reviewer base (aka projects not in "core"). Centralizing the tests also makes it easier for anyone running the validation tool against their cloud or cloud distribution to use the tests. It is easier to install the test suite and its dependencies, and it is easier to read and understand a set of tests following a consistent implementation pattern.
Apparently users do not need central tests anymore, feedback from RefStack is that people who run these tests are comfortable dealing with extra python packages. The point about a single set of tests, in a single location and style still stands. Finally, having the tests in a central location makes it easier to ensure that all members of the community have equal input into what the tests do and how they are implemented and maintained. Seems like a good value to strive for. One of the items that has been used to push back against adding "add-ons" to tempest has been that tempest has a defined scope, and neither of the current add-ons fits in that scope. Can someone clarify what the set of criteria is? I think it will help this discussion. Another push back is the "scaling" issue - adding more tests will overload the QA team. Right now, DNS adds 10 tests, Orchestration adds 22, to a current suite of 353. I do not think there are many other add-ons proposed yet, and the new Vertical programs will probably mainly be re-using tests in the openstack/tempest repos as is. This is not a big tent-esque influx of programs - the only projects that can be added to the trademarks are programs in tc-approved-release [4], so I do not see scaling as a big issue, especially as these tests are such base concepts that if they need to be changed there is a completely new API, so the only overhead will be ensuring that nothing in tempest breaks the new tests (which is a good thing for trademark tests). Personally, I like option 3. I did not initially add it, as I knew it would cause endless bikeshedding, but I do think it fits both a technical and social model. I see 2 immediate routes forward: - Option 1, and we start adding these tests asap - Pseudo Option 2, where we delete the resolution at [2] as it clearly does not apply anymore, and abandon the review at [1]. Finally - do not conflate my actions with those of the Designate team. I have seen people talking about how this resolution was the leverage the team needed to move our tests in tree. This is definitely *not* true. Having our tests in a plugin is useful to us, and if the above resolution passed, I cannot see a situation where we would try to move any tests that were not listed in the interop standards. This is something I have done as an individual in the community, not something the designate team have pushed for. [4] - https://governance.openstack.org/tc/reference/tags/tc_approved-release.html > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From emilien at redhat.com Thu Jan 18 15:45:18 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 18 Jan 2018 07:45:18 -0800 Subject: [openstack-dev] Many timeouts in zuul gates for TripleO In-Reply-To: References: Message-ID: On Thu, Jan 18, 2018 at 6:34 AM, Or Idgar wrote: > Hi, > we're encountering many timeouts for zuul gates in TripleO. > For example, see > http://logs.openstack.org/95/508195/28/check-tripleo/tripleo-ci-centos-7-ovb-ha-oooq/c85fcb7/. > > rechecks won't help, and sometimes a specific gate ends successfully and > sometimes not.
> The problem is that after a recheck it's not always the same gate that > fails. > > Is there someone who has access to the server load to see what causes this? > Alternatively, is there something we can do in order to reduce the running > time for each gate? We're migrating to RDO Cloud for OVB jobs: https://review.openstack.org/#/c/526481/ It's a work in progress but will help a lot with OVB timeouts on RH1. I'll let the CI folks comment on that topic. -- Emilien Macchi From sean.mcginnis at gmx.com Thu Jan 18 15:47:57 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 18 Jan 2018 09:47:57 -0600 Subject: [openstack-dev] [release] Release countdown for week R-5, January 20 - 26 In-Reply-To: <20180118145654.GA26416@sm-xps> References: <20180118145654.GA26416@sm-xps> Message-ID: <20180118154757.GA27794@sm-xps> On Thu, Jan 18, 2018 at 08:56:55AM -0600, Sean McGinnis wrote: > Development Focus > ----------------- > > Teams should be focused on implementing planned work. Work should be wrapping > up on client libraries to meet the client lib deadline Thursday, the 25th. > > General Information > ------------------- > > Next Thursday is the final client library release. Releases will only be > allowed for critical fixes in libraries after this point as we stabilize > requirements and give time for any unforeseen impacts from lib changes to > trickle through. > > When requesting these library releases, you should also include the stable > branching request with the review (as an example, see the "branches" section > here: > http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike/os-brick.yaml#n2) > > Speaking of branching... for projects following the cycle-with-milestones > release model, please check membership of your $project-release group. This > group should be limited to those aware of the final release activity for the > project to make sure only important things are allowed to be backported into > the stable/queens branch leading up to the final release. For new projects this > cycle, you may need to request the infra team create this group for you. > > Upcoming Deadlines & Dates > -------------------------- > > Final client library release deadline: January 25 > Queens-3 Milestone: January 25 > Start of Rocky PTL nominations: January 29 > Start of Rocky PTL election: February 7 > OpenStack Summit Vancouver CFP deadline: February 8 > Rocky PTG in Dublin: Week of February 26, 2018 > > -- > Sean McGinnis (smcginnis) > I got so focused on the client lib freeze that I forgot to mention a few significant things. Please be aware that the 25th (that's a Thursday, Jay) is also the Queens-3 deadline. With that, we are also in Feature Freeze. Only bug fixes and wrap-up work should be accepted after this point without explicit approval from the PTL and some mention on the mailing list. With these final library releases, the requirements repo is also now locked down to allow projects to stabilize before RC. And in order to help the I18n team have a chance of getting any translations done, we also enter String Freeze. Translatable strings (anything in _() in exceptions) should not be changed unless absolutely necessary.
From prometheanfire at gentoo.org Thu Jan 18 16:06:48 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 18 Jan 2018 10:06:48 -0600 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> Message-ID: <20180118160648.edpsvmhwzzni3i7s@gentoo.org> On 18-01-18 11:20:21, Thierry Carrez wrote: > Hi everyone, > > Here is the proposed pre-allocated track schedule for the Dublin PTG: > > https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true > > You'll also notice that some teams (in orange below the table in above > link) do not have pre-allocated slots. One key difference this time > around is that we set aside a larger number of rooms and meeting spots > for dynamically scheduling tracks. The idea is to avoid pre-allocating > smaller tracks to a specific time slot that might or might not create > conflicts, and let that team book a space at a time that makes the most > sense for them, while the event happens. This dynamic booking will be > done through the PTGbot. > > So the unscheduled teams (in orange) are expected to take advantage of > this flexibility and schedule themselves during the event. This > flexibility is not limited to those orange teams: other teams may want > to meet for more than their pre-allocated time slots, and can book extra > space as well. For example if you are on the First Contact SIG and > realize on Tuesday afternoon that you would like to continue the > discussions on Friday morning, it's easy to extend your track to a time > slot there. > > Note that this system also replaces the old ethercalc-scheduled > "reservable rooms". If a group of people forms around a specific issue > and wants to discuss it, they can ask for their own additional "track" > and schedule it dynamically as well. > > Finally, you'll notice we have extra space set aside on Monday-Tuesday > to discuss "hot" last-minute cross-project issues -- if you have ideas > of topics that we need to discuss in-person, please let us know. > As one of the teams in orange, what specific steps, if any, do we need to take to schedule a specific time/place for a meeting? -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Thu Jan 18 16:11:41 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 18 Jan 2018 11:11:41 -0500 Subject: [openstack-dev] [glance] priorities for the coming week (18 Jan - 24 Jan) Message-ID: As discussed at today's Glance meeting, the Q-3 milestone is next week. Please focus on the following: (1) image metadata injection https://review.openstack.org/#/c/527635/ (2) interoperable image import https://review.openstack.org/532502 https://review.openstack.org/532501 may be some more, watch the ML (3) use only E-M-C strategy in glance-manage tool not sure the patch is up yet, will leave a note on https://review.openstack.org/#/c/433934 Have a good week! 
brian From doug at doughellmann.com Thu Jan 18 16:25:39 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 18 Jan 2018 11:25:39 -0500 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> References: Message-ID: <1516292546-sup-5080@lrrr.local> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +0000: > I had hoped for more of a discussion on this before I jumped back into > this debate - but it seems to be stalled still, so here it goes. > > I proposed this initially as we were unclear on where the tests should > go - we had a resolution that said all tests go into openstack/tempest > (with a list of reasons why), and the guidance and discussion that has > been had in various summits was that "add-ons" should stay in plugins. > > So right now, we (by the governance rules) should be pushing tests to > tempest for the new programs. > > In the resolution that placed the tests in tempest there were a few > reasons proposed: > > For example, API and behavioral changes must be carefully managed, as > must mundane aspects such as test and module naming and location > within the test suite. Even changes that leave tests functionally > equivalent may cause unexpected consequences for their use in DefCore > processes and validation. Placing the tests in a central repository > will make it easier to maintain consistency and avoid breaking the > trademark enforcement tool. > > This still applies, and even more so for teams that traditionally do not > have a strong QE contributor / reviewer base (aka projects not in > "core"). > > Centralizing the tests also makes it easier for anyone running the > validation tool against their cloud or cloud distribution to use the > tests. It is easier to install the test suite and its dependencies, > and it is easier to read and understand a set of tests following a > consistent implementation pattern. > > Apparently users do not need central tests anymore, feedback from > RefStack is that people who run these tests are comfortable dealing > with extra python packages. > > The point about a single set of tests, in a single location and style > still stands. > > Finally, having the tests in a central location makes it easier to > ensure that all members of the community have equal input into what > the tests do and how they are implemented and maintained. > > Seems like a good value to strive for. > > One of the items that has been used to push back against adding > "add-ons" to tempest has been that tempest has a defined scope, and > neither of the current add-ons fits in that scope. > > Can someone clarify what the set of criteria is? I think it will help > this discussion. > > Another push back is the "scaling" issue - adding more tests will > overload the QA team. In the past the QA team agreed to accept trademark-related tests from all projects in the tempest repo. Has that changed? > > Right now, DNS adds 10 tests, Orchestration adds 22, to a current suite > of 353. > > I do not think there are many other add-ons proposed yet, and the new > Vertical programs will probably mainly be re-using tests in the > openstack/tempest repos as is.
> > This is not a big tent-esque influx of programs - the only projects > that can be added to the trademarks are programs in tc-approved-release > [4], so I do not see scaling as a big issue, especially as these tests > are such base concepts that if they need to be changed there is a > completely new API, so the only overhead will be ensuring that nothing > in tempest breaks the new tests (which is a good thing for trademark > tests). > > Personally, I like option 3. I did not initially add it, as I > knew it would cause endless bikeshedding, but I do think it fits both > a technical and social model. > > I see 2 immediate routes forward: > > - Option 1, and we start adding these tests asap > - Pseudo Option 2, where we delete the resolution at [2] as it clearly > does not apply anymore, and abandon the review at [1]. > > Finally - do not conflate my actions with those of the Designate team. > I have seen people talking about how this resolution was the leverage the > team needed to move our tests in tree. This is definitely *not* true. > Having our tests in a plugin is useful to us, and if the above > resolution passed, I cannot see a situation where we would try to > move any tests that were not listed in the interop standards. > > This is something I have done as an individual in the community, not > something the designate team have pushed for. Thanks for pushing for a clear resolution to this, Graham. > > [4] - > https://governance.openstack.org/tc/reference/tags/tc_approved-release.html > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From ed at leafe.com Thu Jan 18 16:31:17 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 18 Jan 2018 10:31:17 -0600 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, A very quiet meeting today [7], which you would expect with the absence of cdent and elmiko. The main discussion was about the guideline on exposing microversions in SDKs [8] by dtantsur. The focus of the discussion was how to handle the distinction between what he calls a "low-level SDK" (such as novaclient, ironicclient, etc.), and a "high-level SDK" (such as Shade, jclouds, or OpenStack.NET). We agreed to continue the discussion next week when we can have additional points of view available to come up with more clarity. Oh, and we merged the improvement to the guideline on pagination. Thanks, mordred! As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * Expand note about rfc5988 link header https://review.openstack.org/#/c/531914/ # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None this week.
# Guidelines Currently Under Review [3] * Add guideline on exposing microversions in SDKs https://review.openstack.org/#/c/532814/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 * WIP: Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] http://eavesdrop.openstack.org/meetings/api_sig/2018/api_sig.2018-01-18-16.00.log.html [8] https://review.openstack.org/#/c/532814/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Ed Leafe From johnsomor at gmail.com Thu Jan 18 16:33:07 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 18 Jan 2018 08:33:07 -0800 Subject: [openstack-dev] [gate][devstack][neutron][qa][release] Switch to lib/neutron in gate In-Reply-To: References: Message-ID: This sounds great Ihar! Let us know when we should make the changes to the neutron-lbaas projects. Michael On Wed, Jan 17, 2018 at 11:26 AM, Ihar Hrachyshka wrote: > Hi all, > > tl;dr I propose to switch to lib/neutron devstack library in Queens. I > ask for buy-in to the plan from release and QA teams, something that > infra asked me to do. > > === > > Last several cycles we were working on getting lib/neutron - the new > in-tree devstack library to deploy neutron services - ready to deploy > configurations we may need in our gates. Some pieces of the work > involved can be found in: > > https://review.openstack.org/#/q/topic:new-neutron-devstack-in-gate > > I am happy to announce that the work finally got to the point where we > can consistently pass both devstack-gate and neutron gates: > > (devstack-gate) https://review.openstack.org/436798 > (neutron) https://review.openstack.org/441579 > > One major difference between the old lib/neutron-legacy library and > the new lib/neutron one is that service names for neutron are > different. For example, q-svc is now neutron-api, q-dhcp is now > neutron-dhcp, etc. (In case you wonder, this q- prefix links us back > to times when Neutron was called Quantum.) The way lib/neutron is > designed is that whenever a single q-* service name is present in > ENABLED_SERVICES, the old lib/neutron-legacy code is triggered to > deploy services. > > Service name changes are a large part of the work. 
The way the > devstack-gate change linked above is designed is that it changes names > for deployed neutron services starting from Queens (current master), > so old branches and grenade jobs are not affected by the change. > > While we validated the change switching to new names against both > devstack-gate and neutron gates that should cover 90% of our neutron > configurations, and followed up with several projects that - we > induced - may be affected by the change - there is always a chance > that some job in some project gate would fail because of it, and we > would need to push a (probably rather simple) follow-up to unbreak the > affected job. Due to the nature of the work, the span of impact, and > the fact that infra repos are not easily gated against with Depends-On > links, we may need to live with the risk. > > Of course, there are several aspects of the project life involved, > including QA and release delivery efforts. I was advised to reach out > to both of those teams to get a buy-in to proceed with the move. If we > have support for the switch now, as per Clark, infra is ready to > support the switch. > > Note that the effort span several cycles, partially due to low review > velocity in several affected repos (devstack, devstack-gate), > partially because new changes in all affected repos were pulling us > back from the end goal. This is one of the reasons why I would like us > to do the switch sooner rather than later, since chasing this moving > goalpost became rather burdensome. > > What are QA and release team thoughts on the switch? Are we ready to > do it in next weeks? > > Thanks for attention, > Ihar > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Thu Jan 18 16:45:28 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 18 Jan 2018 11:45:28 -0500 Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> Message-ID: <1516293565-sup-9123@lrrr.local> Excerpts from Saverio Proto's message of 2018-01-18 14:49:21 +0100: > Hello all, > > well this oslo.log library looks like a core thing that is used by > multiple projects. I feel scared hearing that bugs opened on that > project are probably just ignored. > > should I reach out to the current PTL of OSLO ? > https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2580 > > ChangBo Guo are you reading this thread ? Do you think this is a bug or > a missing feature ? And moreover is really nobody looking at these > oslo.log bugs ? The Oslo team is small, but we do pay attention to bug reports. I don't think this issue is going to rise to the level of "drop what you're doing and help because the world is on fire", so I think Sean is just encouraging you to have a little patience. Please do go ahead and open a bug and attach (or paste into the description) an example of what the log output for your service looks like. 
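For reference, in case it is not already enabled on your side, a minimal logging config that routes a service's output through the oslo.log JSON formatter looks something like the following -- the file paths and the neutron-specific names here are just examples, adjust them for whichever service you are debugging:

    # /etc/neutron/logging.conf, activated by setting
    #   log_config_append = /etc/neutron/logging.conf
    # in the [DEFAULT] section of neutron.conf
    [loggers]
    keys = root

    [handlers]
    keys = file

    [formatters]
    keys = json

    [logger_root]
    level = INFO
    handlers = file

    [handler_file]
    class = FileHandler
    args = ('/var/log/neutron/neutron-server.json.log',)
    formatter = json

    [formatter_json]
    class = oslo_log.formatters.JSONFormatter

A few lines of the resulting output pasted into the bug report should be enough to show whether the request context (and therefore the request id) is being attached to the records.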
Doug From daniel.mellado.es at ieee.org Thu Jan 18 16:49:08 2018 From: daniel.mellado.es at ieee.org (Daniel Mellado) Date: Thu, 18 Jan 2018 17:49:08 +0100 Subject: [openstack-dev] [kuryr] Rocky PTG planning Message-ID: Hi everyone! Unlike winter, PTG is coming! I've created an etherpad to track the topics and attendees, so please add your attendance information in there. Besides work items, maybe we can also use it to try to organize some kind of social event in Dublin. Looking forward to seeing you all there! -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From daniel.mellado.es at ieee.org Thu Jan 18 16:51:39 2018 From: daniel.mellado.es at ieee.org (Daniel Mellado) Date: Thu, 18 Jan 2018 17:51:39 +0100 Subject: [openstack-dev] [kuryr] Rocky PTG planning In-Reply-To: References: Message-ID: On 01/18/2018 05:49 PM, Daniel Mellado wrote: > Hi everyone! > > Unlike winter, PTG is coming! I've created an etherpad to track the > topics and attendees, so please add your attendance information in > there. > > Besides work items, maybe we can also use it to try to organize some > kind of social event in Dublin. > > Looking forward to seeing you all there! > Forgot to put the link xD! https://etherpad.openstack.org/p/kuryr-ptg-rocky Best! -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From emilien at redhat.com Thu Jan 18 17:00:19 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 18 Jan 2018 09:00:19 -0800 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> Message-ID: On Thu, Jan 18, 2018 at 2:20 AM, Thierry Carrez wrote: [...] > We'd like to publish this schedule on the event website ASAP, so please > check that it still matches your needs (number of days, room size vs. > expected attendance) and does not create too many conflicts. [...] ack & works for us (TripleO). -- Emilien Macchi From doug at doughellmann.com Thu Jan 18 17:14:13 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 18 Jan 2018 12:14:13 -0500 Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: <1516293565-sup-9123@lrrr.local> References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> <1516293565-sup-9123@lrrr.local> Message-ID: <1516295114-sup-7111@lrrr.local> Excerpts from Doug Hellmann's message of 2018-01-18 11:45:28 -0500: > Excerpts from Saverio Proto's message of 2018-01-18 14:49:21 +0100: > > Hello all, > > > > well this oslo.log library looks like a core thing that is used by > > multiple projects. I feel scared hearing that bugs opened on that > > project are probably just ignored. > > > > should I reach out to the current PTL of OSLO ? > > https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2580 > > > > ChangBo Guo are you reading this thread ? Do you think this is a bug or > > a missing feature ? And moreover is really nobody looking at these > > oslo.log bugs ?
> > The Oslo team is small, but we do pay attention to bug reports. I > don't think this issue is going to rise to the level of "drop what > you're doing and help because the world is on fire", so I think > Sean is just encouraging you to have a little patience. > > Please do go ahead and open a bug and attach (or paste into the > description) an example of what the log output for your service looks > like. > > Doug Earlier in the thread you mentioned running the newton versions of neutron and oslo.log. The newton release has been marked end-of-life and is not supported by the community any longer. You may find support from your vendor, but if you're deploying on your own we'll have to work something else out. If we determine that this is a bug in the newton version of the library I won't have any way to give you a new release because the branch is closed. It should be possible for you to update just oslo.log to a more recent (and supported), although to do so you would have to get the package separately or build your own and that may complicate your deployment. More recent versions of the JSON formatter change the structure of the data to include the entire context (including the request id) in a separate key. Are you updating to newton as part of upgrading further than that? If so, we probably want to wait to debug this until you hit the latest supported version you're planning to deploy, in case the problem is already fixed there. Doug From gr at ham.ie Thu Jan 18 17:52:39 2018 From: gr at ham.ie (Graham Hayes) Date: Thu, 18 Jan 2018 17:52:39 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: <1516292546-sup-5080@lrrr.local> References: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> <1516292546-sup-5080@lrrr.local> Message-ID: <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> On 18/01/18 16:25, Doug Hellmann wrote: > Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +0000: > > In the past the QA team agreed to accept trademark-related tests from > all projects in the tempest repo. Has that changed? > There has not been an explict rejection but in all conversations the response has been "non core projects are outside the scope of tempest". Honestly, everytime we have tried to do something to core tempest we have had major pushback, and I want to clarify this before I or someone else put in the work of porting the base clients, getting CI configured*, and proposing the tests to tempest. - Graham * With zuulv3 this is *much* easier, so not as big a deal as it once was -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From doug at doughellmann.com Thu Jan 18 18:52:17 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 18 Jan 2018 13:52:17 -0500 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> References: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> <1516292546-sup-5080@lrrr.local> <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> Message-ID: <1516300734-sup-8558@lrrr.local> Excerpts from Graham Hayes's message of 2018-01-18 17:52:39 +0000: > On 18/01/18 16:25, Doug Hellmann wrote: > > Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +0000: > > > > > > > In the past the QA team agreed to accept trademark-related tests from > > all projects in the tempest repo. Has that changed? 
> > > > There has not been an explict rejection but in all conversations the > response has been "non core projects are outside the scope of tempest". > > Honestly, everytime we have tried to do something to core tempest > we have had major pushback, and I want to clarify this before I or > someone else put in the work of porting the base clients, getting CI > configured*, and proposing the tests to tempest. OK. The current policy doesn't say anything about "core" or different trademark programs or any other criteria. The TC therefore encourages the DefCore committee to consider it an indication of future technical direction that we do not want tests outside of the Tempest repository used for trademark enforcement, and that any new or existing tests that cover capabilities they want to consider for trademark enforcement should be placed in Tempest. That all seems very clear to me (setting aside some specific word choices like "future technical direction" that tie the resolution to language in the bylaws). Regardless of technical reasons why it may not be necessary, we still have many social justifications for doing it the way we originally set out to do it. Tests related to trademark enforcement need to go into the tempest repository. The way I think this should work (and the way I remember us describing it at the time the policy was established) is the Interop WG (previously DefCore) should identify capabilities and tests, then ask project teams to reproduce those tests in the tempest repo. When the tests land, they can be used by the trademark program. Teams can also, at their leisure, decide whether to remove the original versions of the tests from whatever repo they existed in to begin with. Graham, you've proposed a new resolution with several options for where to put tests for "add-on programs." I don't think we need that resolution if we want the tests to continue to live in tempest. The existing resolution doesn't qualify which tests, beyond "for trademark enforcement" and more words won't make that more clear, IMO. Now if you *do* want to change the policy, we should talk about that. But I can't tell whether you want to change it, you're worried the policy is unclear, or it is not being followed. Can you clarify which it is? Doug From gr at ham.ie Thu Jan 18 19:25:02 2018 From: gr at ham.ie (Graham Hayes) Date: Thu, 18 Jan 2018 19:25:02 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: <1516300734-sup-8558@lrrr.local> References: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> <1516292546-sup-5080@lrrr.local> <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> <1516300734-sup-8558@lrrr.local> Message-ID: <52aafa3f-ace9-26d9-2e17-8344d65f5081@ham.ie> On 18/01/18 18:52, Doug Hellmann wrote: > Excerpts from Graham Hayes's message of 2018-01-18 17:52:39 +0000: >> On 18/01/18 16:25, Doug Hellmann wrote: >>> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +0000: >> >> >> >>> >>> In the past the QA team agreed to accept trademark-related tests from >>> all projects in the tempest repo. Has that changed? >>> >> >> There has not been an explict rejection but in all conversations the >> response has been "non core projects are outside the scope of tempest". >> >> Honestly, everytime we have tried to do something to core tempest >> we have had major pushback, and I want to clarify this before I or >> someone else put in the work of porting the base clients, getting CI >> configured*, and proposing the tests to tempest. > > OK. 
> > The current policy doesn't say anything about "core" or different > trademark programs or any other criteria. > > The TC therefore encourages the DefCore committee to consider it an > indication of future technical direction that we do not want tests > outside of the Tempest repository used for trademark enforcement, and > that any new or existing tests that cover capabilities they want to > consider for trademark enforcement should be placed in Tempest. > > That all seems very clear to me (setting aside some specific word > choices like "future technical direction" that tie the resolution > to language in the bylaws). Regardless of technical reasons why > it may not be necessary, we still have many social justifications > for doing it the way we originally set out to do it. Tests related > to trademark enforcement need to go into the tempest repository. > > The way I think this should work (and the way I remember us describing > it at the time the policy was established) is the Interop WG > (previously DefCore) should identify capabilities and tests, then > ask project teams to reproduce those tests in the tempest repo. > When the tests land, they can be used by the trademark program. > Teams can also, at their leisure, decide whether to remove the > original versions of the tests from whatever repo they existed in > to begin with. > > Graham, you've proposed a new resolution with several options for > where to put tests for "add-on programs." I don't think we need > that resolution if we want the tests to continue to live in tempest. > The existing resolution doesn't qualify which tests, beyond "for > trademark enforcement" and more words won't make that more clear, > IMO. > > Now if you *do* want to change the policy, we should talk about > that. But I can't tell whether you want to change it, you're worried > the policy is unclear, or it is not being followed. Can you clarify > which it is? It is not being followed. I have brought this up at every forum session on these programs, and the people in the room from QA have *always* pushed back on it. And, for clarity (I saw this in a few logs) QA have *never* said that they will take the interop designated tests for the DNS project into openstack/tempest. To the point that the interop tooling was developed to support plugins (which would seem to be in breach of this resolution, but I am sure there is reasons for this.) I do want to have option 3 (interop-tempest-plugin), but right now I will settle for us either: A: Doing what we planned on before (Option 1) (Prefered) B: Documenting the fact that things have changed (Option 2), and articulate and record the reasoning for the change. I think Add Ons are going to the Board in Dublin for the change from Advisory, in the 2018.02 standard so we need to get clarity on this. - Graham > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From logan at protiumit.com Thu Jan 18 20:06:04 2018 From: logan at protiumit.com (Logan V.) 
Date: Thu, 18 Jan 2018 14:06:04 -0600 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> <43f5884e-39d2-23c8-7606-5940f33251bd@gmail.com> Message-ID: We have used aggregate based scheduler filters since deploying our cloud in Kilo. This explains the unpredictable scheduling we have seen since upgrading to Ocata. Before this post, was there some indication I missed that these filters can no longer be used? Even now reading the Ocata release notes[1] or checking the filter scheduler docs[2] I cannot find any indication that AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter are useless in Ocata+. If I missed something I'd like to know where it is so I can avoid that mistake again! Just to make sure I understand correctly, given this list of filters we used in Newton: AggregateInstanceExtraSpecsFilter,AggregateNumInstancesFilter,AggregateCoreFilter,AggregateRamFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter I should remove AggregateCoreFilter, AggregateRamFilter, and RamFilter from the list because they are no longer useful, and replace them with the appropriate nova.conf settings instead, correct? What about AggregateInstanceExtraSpecsFilter and AggregateNumInstancesFilter? Do these still work? Thanks Logan [1] https://docs.openstack.org/releasenotes/nova/ocata.html [2] https://docs.openstack.org/ocata/config-reference/compute/schedulers.html On Wed, Jan 17, 2018 at 7:57 AM, Sylvain Bauza wrote: > > > On Wed, Jan 17, 2018 at 2:22 PM, Jay Pipes wrote: >> >> On 01/16/2018 08:19 PM, Zhenyu Zheng wrote: >>> >>> Thanks for the info, so it seems we are not going to implement aggregate >>> overcommit ratio in placement at least in the near future? >> >> >> As @edleafe alluded to, we will not be adding functionality to the >> placement service to associate an overcommit ratio with an aggregate. This >> was/is buggy functionality that we do not wish to bring forward into the >> placement modeling system. >> >> Reasons the current functionality is poorly architected and buggy >> (mentioned in @melwitt's footnote): >> >> 1) If a nova-compute service's CONF.cpu_allocation_ratio is different from >> the host aggregate's cpu_allocation_ratio metadata value, which value should >> be considered by the AggregateCoreFilter filter? >> >> 2) If a nova-compute service is associated with multiple host aggregates, >> and those aggregates contain different values for their cpu_allocation_ratio >> metadata value, which one should be used by the AggregateCoreFilter? >> >> The bottom line for me is that the AggregateCoreFilter has been used as a >> crutch to solve a **configuration management problem**. >> >> Instead of the configuration management system (Puppet, etc) setting >> nova-compute service CONF.cpu_allocation_ratio options *correctly*, having >> the admin set the HostAggregate metadata cpu_allocation_ratio value is >> error-prone for the reasons listed above. >> > > Well, the main cause why people started to use AggregateCoreFilter and > others is because pre-Newton, it was litterally impossible to assign > different allocation ratios in between computes except if you were grouping > them in aggregates and using those filters. 
> Now that ratios are per-compute, there is no need to keep those filters > except if you don't touch computes nova.conf's so that it defaults to the > scheduler ones. The crazy usecase would be like "I have 1000+ computes and I > just want to apply specific ratios to only one or two" but then, I'd second > Jay and say "Config management is the solution to your problem". > > >> >> Incidentally, this same design flaw is the reason that availability zones >> are so poorly defined in Nova. There is actually no such thing as an >> availability zone in Nova. Instead, an AZ is merely a metadata tag (or a >> CONF option! :( ) that may or may not exist against a host aggregate. >> There's lots of spaghetti in Nova due to the decision to use host aggregate >> metadata for availability zone information, which should have always been >> the domain of a **configuration management system** to set. [*] >> > > IMHO, not exactly the root cause why we have spaghetti code for AZs. I > rather like the idea to see an availability zone as just a user-visible > aggregate, because it makes things simple to understand. > What the spaghetti code is due to is because the transitive relationship > between an aggregate, a compute and an instance is misunderstood and we > introduced the notion of "instance AZ" which is a fool. Instances shouldn't > have a field saying "here is my AZ", it should rather be a flag saying "what > the user wanted as AZ ? (None being a choice) " > > >> In the Placement service, we have the concept of aggregates, too. However, >> in Placement, an aggregate (note: not "host aggregate") is merely a grouping >> mechanism for resource providers. Placement aggregates do not have any >> attributes themselves -- they merely represent the relationship between >> resource providers. Placement aggregates suffer from neither of the above >> listed design flaws because they are not buckets for metadata. >> >> ok . >> >> Best, >> -jay >> >> [*] Note the assumption on line 97 here: >> >> >> https://github.com/openstack/nova/blob/master/nova/availability_zones.py#L96-L100 >> >>> On Wed, Jan 17, 2018 at 5:24 AM, melanie witt >> > wrote: >>> >>> Hello Stackers, >>> >>> This is a heads up to any of you using the AggregateCoreFilter, >>> AggregateRamFilter, and/or AggregateDiskFilter in the filter >>> scheduler. These filters have effectively allowed operators to set >>> overcommit ratios per aggregate rather than per compute node in <= >>> Newton. >>> >>> Beginning in Ocata, there is a behavior change where aggregate-based >>> overcommit ratios will no longer be honored during scheduling. >>> Instead, overcommit values must be set on a per compute node basis >>> in nova.conf. >>> >>> Details: as of Ocata, instead of considering all compute nodes at >>> the start of scheduler filtering, an optimization has been added to >>> query resource capacity from placement and prune the compute node >>> list with the result *before* any filters are applied. Placement >>> tracks resource capacity and usage and does *not* track aggregate >>> metadata [1]. Because of this, placement cannot consider >>> aggregate-based overcommit and will exclude compute nodes that do >>> not have capacity based on per compute node overcommit. >>> >>> How to prepare: if you have been relying on per aggregate >>> overcommit, during your upgrade to Ocata, you must change to using >>> per compute node overcommit ratios in order for your scheduling >>> behavior to stay consistent. 
Otherwise, you may notice increased >>> NoValidHost scheduling failures as the aggregate-based overcommit is >>> no longer being considered. You can safely remove the >>> AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter >>> from your enabled_filters and you do not need to replace them with >>> any other core/ram/disk filters. The placement query takes care of >>> the core/ram/disk filtering instead, so CoreFilter, RamFilter, and >>> DiskFilter are redundant. >>> >>> Thanks, >>> -melanie >>> >>> [1] Placement has been a new slate for resource management and prior >>> to placement, there were conflicts between the different methods for >>> setting overcommit ratios that were never addressed, such as, "which >>> value to take if a compute node has overcommit set AND the aggregate >>> has it set? Which takes precedence?" And, "if a compute node is in >>> more than one aggregate, which overcommit value should be taken?" >>> So, the ambiguities were not something that was desirable to bring >>> forward into placement. >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Thu Jan 18 20:21:12 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 18 Jan 2018 15:21:12 -0500 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: <52aafa3f-ace9-26d9-2e17-8344d65f5081@ham.ie> References: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> <1516292546-sup-5080@lrrr.local> <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> <1516300734-sup-8558@lrrr.local> <52aafa3f-ace9-26d9-2e17-8344d65f5081@ham.ie> Message-ID: <1516306691-sup-5728@lrrr.local> Excerpts from Graham Hayes's message of 2018-01-18 19:25:02 +0000: > > On 18/01/18 18:52, Doug Hellmann wrote: > > Excerpts from Graham Hayes's message of 2018-01-18 17:52:39 +0000: > >> On 18/01/18 16:25, Doug Hellmann wrote: > >>> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +0000: > >> > >> > >> > >>> > >>> In the past the QA team agreed to accept trademark-related tests from > >>> all projects in the tempest repo. Has that changed? > >>> > >> > >> There has not been an explict rejection but in all conversations the > >> response has been "non core projects are outside the scope of tempest". 
> >> > >> Honestly, everytime we have tried to do something to core tempest > >> we have had major pushback, and I want to clarify this before I or > >> someone else put in the work of porting the base clients, getting CI > >> configured*, and proposing the tests to tempest. > > > > OK. > > > > The current policy doesn't say anything about "core" or different > > trademark programs or any other criteria. > > > > The TC therefore encourages the DefCore committee to consider it an > > indication of future technical direction that we do not want tests > > outside of the Tempest repository used for trademark enforcement, and > > that any new or existing tests that cover capabilities they want to > > consider for trademark enforcement should be placed in Tempest. > > > > That all seems very clear to me (setting aside some specific word > > choices like "future technical direction" that tie the resolution > > to language in the bylaws). Regardless of technical reasons why > > it may not be necessary, we still have many social justifications > > for doing it the way we originally set out to do it. Tests related > > to trademark enforcement need to go into the tempest repository. > > > > The way I think this should work (and the way I remember us describing > > it at the time the policy was established) is the Interop WG > > (previously DefCore) should identify capabilities and tests, then > > ask project teams to reproduce those tests in the tempest repo. > > When the tests land, they can be used by the trademark program. > > Teams can also, at their leisure, decide whether to remove the > > original versions of the tests from whatever repo they existed in > > to begin with. > > > > Graham, you've proposed a new resolution with several options for > > where to put tests for "add-on programs." I don't think we need > > that resolution if we want the tests to continue to live in tempest. > > The existing resolution doesn't qualify which tests, beyond "for > > trademark enforcement" and more words won't make that more clear, > > IMO. > > > > Now if you *do* want to change the policy, we should talk about > > that. But I can't tell whether you want to change it, you're worried > > the policy is unclear, or it is not being followed. Can you clarify > > which it is? > > It is not being followed. > > I have brought this up at every forum session on these programs, and the > people in the room from QA have *always* pushed back on it. OK, so that's a problem. I need to hear from the QA team why they've reversed that decision. > > And, for clarity (I saw this in a few logs) QA have *never* said that > they will take the interop designated tests for the DNS project into > openstack/tempest. When we approved the resolution that describes the current policy, the QA team agreed that they would take tests for trademark. There was no stipulation about which projects those apply to. > > To the point that the interop tooling was developed to support plugins > (which would seem to be in breach of this resolution, but I am sure > there is reasons for this.) I can see it being useful to support plugins for evaluating tests before they are accepted. > > I do want to have option 3 (interop-tempest-plugin), but right now I > will settle for us either: > > A: Doing what we planned on before (Option 1) (Prefered) > B: Documenting the fact that things have changed (Option 2), and > articulate and record the reasoning for the change. 
> > I think Add Ons are going to the Board in Dublin for the change from > Advisory, in the 2018.02 standard so we need to get clarity on this. I agree. Doug From doug at doughellmann.com Thu Jan 18 20:36:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 18 Jan 2018 15:36:11 -0500 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: <1516306691-sup-5728@lrrr.local> References: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> <1516292546-sup-5080@lrrr.local> <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> <1516300734-sup-8558@lrrr.local> <52aafa3f-ace9-26d9-2e17-8344d65f5081@ham.ie> <1516306691-sup-5728@lrrr.local> Message-ID: <1516307710-sup-8918@lrrr.local> Excerpts from Doug Hellmann's message of 2018-01-18 15:21:12 -0500: > Excerpts from Graham Hayes's message of 2018-01-18 19:25:02 +0000: > > > > On 18/01/18 18:52, Doug Hellmann wrote: > > > Excerpts from Graham Hayes's message of 2018-01-18 17:52:39 +0000: > > >> On 18/01/18 16:25, Doug Hellmann wrote: > > >>> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +0000: > > >> > > >> > > >> > > >>> > > >>> In the past the QA team agreed to accept trademark-related tests from > > >>> all projects in the tempest repo. Has that changed? > > >>> > > >> > > >> There has not been an explict rejection but in all conversations the > > >> response has been "non core projects are outside the scope of tempest". > > >> > > >> Honestly, everytime we have tried to do something to core tempest > > >> we have had major pushback, and I want to clarify this before I or > > >> someone else put in the work of porting the base clients, getting CI > > >> configured*, and proposing the tests to tempest. > > > > > > OK. > > > > > > The current policy doesn't say anything about "core" or different > > > trademark programs or any other criteria. > > > > > > The TC therefore encourages the DefCore committee to consider it an > > > indication of future technical direction that we do not want tests > > > outside of the Tempest repository used for trademark enforcement, and > > > that any new or existing tests that cover capabilities they want to > > > consider for trademark enforcement should be placed in Tempest. > > > > > > That all seems very clear to me (setting aside some specific word > > > choices like "future technical direction" that tie the resolution > > > to language in the bylaws). Regardless of technical reasons why > > > it may not be necessary, we still have many social justifications > > > for doing it the way we originally set out to do it. Tests related > > > to trademark enforcement need to go into the tempest repository. > > > > > > The way I think this should work (and the way I remember us describing > > > it at the time the policy was established) is the Interop WG > > > (previously DefCore) should identify capabilities and tests, then > > > ask project teams to reproduce those tests in the tempest repo. > > > When the tests land, they can be used by the trademark program. > > > Teams can also, at their leisure, decide whether to remove the > > > original versions of the tests from whatever repo they existed in > > > to begin with. > > > > > > Graham, you've proposed a new resolution with several options for > > > where to put tests for "add-on programs." I don't think we need > > > that resolution if we want the tests to continue to live in tempest. 
> > > The existing resolution doesn't qualify which tests, beyond "for > > > trademark enforcement" and more words won't make that more clear, > > > IMO. > > > > > > Now if you *do* want to change the policy, we should talk about > > > that. But I can't tell whether you want to change it, you're worried > > > the policy is unclear, or it is not being followed. Can you clarify > > > which it is? > > > > It is not being followed. > > > > I have brought this up at every forum session on these programs, and the > > people in the room from QA have *always* pushed back on it. > > OK, so that's a problem. I need to hear from the QA team why they've > reversed that decision. > > > > > And, for clarity (I saw this in a few logs) QA have *never* said that > > they will take the interop designated tests for the DNS project into > > openstack/tempest. > > When we approved the resolution that describes the current policy, the > QA team agreed that they would take tests for trademark. There was no > stipulation about which projects those apply to. I feel pretty sure that was discussed in a TC meeting, but I can't find that. I do find Matt and Ken'ichi voting +1 on the resolution itself. https://review.openstack.org/#/c/312718/. If I remember correctly, Ken'ichi was the PTL at the time. Doug From jaypipes at gmail.com Thu Jan 18 20:49:09 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 18 Jan 2018 15:49:09 -0500 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> <43f5884e-39d2-23c8-7606-5940f33251bd@gmail.com> Message-ID: <3ad94edd-fbbd-1257-a88e-0a97cc4b588b@gmail.com> On 01/18/2018 03:06 PM, Logan V. wrote: > We have used aggregate based scheduler filters since deploying our > cloud in Kilo. This explains the unpredictable scheduling we have seen > since upgrading to Ocata. Before this post, was there some indication > I missed that these filters can no longer be used? Even now reading > the Ocata release notes[1] or checking the filter scheduler docs[2] I > cannot find any indication that AggregateCoreFilter, > AggregateRamFilter, and AggregateDiskFilter are useless in Ocata+. If > I missed something I'd like to know where it is so I can avoid that > mistake again! We failed to provide a release note about it. :( That's our fault and I apologize. > Just to make sure I understand correctly, given this list of filters > we used in Newton: > AggregateInstanceExtraSpecsFilter,AggregateNumInstancesFilter,AggregateCoreFilter,AggregateRamFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter > > I should remove AggregateCoreFilter, AggregateRamFilter, and RamFilter > from the list because they are no longer useful, and replace them with > the appropriate nova.conf settings instead, correct? Yes, correct. > What about AggregateInstanceExtraSpecsFilter and > AggregateNumInstancesFilter? Do these still work? Yes. 
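To make that concrete, here is a minimal nova.conf sketch of the replacement setup (the ratio values are illustrative only, not recommendations). On each compute node:

[DEFAULT]
# These per-compute options replace what AggregateCoreFilter,
# AggregateRamFilter and AggregateDiskFilter used to read from
# aggregate metadata.
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0

and on the scheduler side, your Newton filter list trimmed of the now-redundant entries:

[filter_scheduler]
enabled_filters = AggregateInstanceExtraSpecsFilter,AggregateNumInstancesFilter,RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter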
Best, -jay > Thanks > Logan > > [1] https://docs.openstack.org/releasenotes/nova/ocata.html > [2] https://docs.openstack.org/ocata/config-reference/compute/schedulers.html > > On Wed, Jan 17, 2018 at 7:57 AM, Sylvain Bauza wrote: >> >> >> On Wed, Jan 17, 2018 at 2:22 PM, Jay Pipes wrote: >>> >>> On 01/16/2018 08:19 PM, Zhenyu Zheng wrote: >>>> >>>> Thanks for the info, so it seems we are not going to implement aggregate >>>> overcommit ratio in placement at least in the near future? >>> >>> >>> As @edleafe alluded to, we will not be adding functionality to the >>> placement service to associate an overcommit ratio with an aggregate. This >>> was/is buggy functionality that we do not wish to bring forward into the >>> placement modeling system. >>> >>> Reasons the current functionality is poorly architected and buggy >>> (mentioned in @melwitt's footnote): >>> >>> 1) If a nova-compute service's CONF.cpu_allocation_ratio is different from >>> the host aggregate's cpu_allocation_ratio metadata value, which value should >>> be considered by the AggregateCoreFilter filter? >>> >>> 2) If a nova-compute service is associated with multiple host aggregates, >>> and those aggregates contain different values for their cpu_allocation_ratio >>> metadata value, which one should be used by the AggregateCoreFilter? >>> >>> The bottom line for me is that the AggregateCoreFilter has been used as a >>> crutch to solve a **configuration management problem**. >>> >>> Instead of the configuration management system (Puppet, etc) setting >>> nova-compute service CONF.cpu_allocation_ratio options *correctly*, having >>> the admin set the HostAggregate metadata cpu_allocation_ratio value is >>> error-prone for the reasons listed above. >>> >> >> Well, the main cause why people started to use AggregateCoreFilter and >> others is because pre-Newton, it was litterally impossible to assign >> different allocation ratios in between computes except if you were grouping >> them in aggregates and using those filters. >> Now that ratios are per-compute, there is no need to keep those filters >> except if you don't touch computes nova.conf's so that it defaults to the >> scheduler ones. The crazy usecase would be like "I have 1000+ computes and I >> just want to apply specific ratios to only one or two" but then, I'd second >> Jay and say "Config management is the solution to your problem". >> >> >>> >>> Incidentally, this same design flaw is the reason that availability zones >>> are so poorly defined in Nova. There is actually no such thing as an >>> availability zone in Nova. Instead, an AZ is merely a metadata tag (or a >>> CONF option! :( ) that may or may not exist against a host aggregate. >>> There's lots of spaghetti in Nova due to the decision to use host aggregate >>> metadata for availability zone information, which should have always been >>> the domain of a **configuration management system** to set. [*] >>> >> >> IMHO, not exactly the root cause why we have spaghetti code for AZs. I >> rather like the idea to see an availability zone as just a user-visible >> aggregate, because it makes things simple to understand. >> What the spaghetti code is due to is because the transitive relationship >> between an aggregate, a compute and an instance is misunderstood and we >> introduced the notion of "instance AZ" which is a fool. Instances shouldn't >> have a field saying "here is my AZ", it should rather be a flag saying "what >> the user wanted as AZ ? 
(None being a choice) " >> >> >>> In the Placement service, we have the concept of aggregates, too. However, >>> in Placement, an aggregate (note: not "host aggregate") is merely a grouping >>> mechanism for resource providers. Placement aggregates do not have any >>> attributes themselves -- they merely represent the relationship between >>> resource providers. Placement aggregates suffer from neither of the above >>> listed design flaws because they are not buckets for metadata. >>> >>> ok . >>> >>> Best, >>> -jay >>> >>> [*] Note the assumption on line 97 here: >>> >>> >>> https://github.com/openstack/nova/blob/master/nova/availability_zones.py#L96-L100 >>> >>>> On Wed, Jan 17, 2018 at 5:24 AM, melanie witt >>> > wrote: >>>> >>>> Hello Stackers, >>>> >>>> This is a heads up to any of you using the AggregateCoreFilter, >>>> AggregateRamFilter, and/or AggregateDiskFilter in the filter >>>> scheduler. These filters have effectively allowed operators to set >>>> overcommit ratios per aggregate rather than per compute node in <= >>>> Newton. >>>> >>>> Beginning in Ocata, there is a behavior change where aggregate-based >>>> overcommit ratios will no longer be honored during scheduling. >>>> Instead, overcommit values must be set on a per compute node basis >>>> in nova.conf. >>>> >>>> Details: as of Ocata, instead of considering all compute nodes at >>>> the start of scheduler filtering, an optimization has been added to >>>> query resource capacity from placement and prune the compute node >>>> list with the result *before* any filters are applied. Placement >>>> tracks resource capacity and usage and does *not* track aggregate >>>> metadata [1]. Because of this, placement cannot consider >>>> aggregate-based overcommit and will exclude compute nodes that do >>>> not have capacity based on per compute node overcommit. >>>> >>>> How to prepare: if you have been relying on per aggregate >>>> overcommit, during your upgrade to Ocata, you must change to using >>>> per compute node overcommit ratios in order for your scheduling >>>> behavior to stay consistent. Otherwise, you may notice increased >>>> NoValidHost scheduling failures as the aggregate-based overcommit is >>>> no longer being considered. You can safely remove the >>>> AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter >>>> from your enabled_filters and you do not need to replace them with >>>> any other core/ram/disk filters. The placement query takes care of >>>> the core/ram/disk filtering instead, so CoreFilter, RamFilter, and >>>> DiskFilter are redundant. >>>> >>>> Thanks, >>>> -melanie >>>> >>>> [1] Placement has been a new slate for resource management and prior >>>> to placement, there were conflicts between the different methods for >>>> setting overcommit ratios that were never addressed, such as, "which >>>> value to take if a compute node has overcommit set AND the aggregate >>>> has it set? Which takes precedence?" And, "if a compute node is in >>>> more than one aggregate, which overcommit value should be taken?" >>>> So, the ambiguities were not something that was desirable to bring >>>> forward into placement. 
>>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >>>> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mgagne at calavera.ca Thu Jan 18 20:54:02 2018 From: mgagne at calavera.ca (Mathieu Gagné) Date: Thu, 18 Jan 2018 15:54:02 -0500 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> Message-ID: Hi, On Tue, Jan 16, 2018 at 4:24 PM, melanie witt wrote: > Hello Stackers, > > This is a heads up to any of you using the AggregateCoreFilter, > AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler. > These filters have effectively allowed operators to set overcommit ratios > per aggregate rather than per compute node in <= Newton. > > Beginning in Ocata, there is a behavior change where aggregate-based > overcommit ratios will no longer be honored during scheduling. Instead, > overcommit values must be set on a per compute node basis in nova.conf. > > Details: as of Ocata, instead of considering all compute nodes at the start > of scheduler filtering, an optimization has been added to query resource > capacity from placement and prune the compute node list with the result > *before* any filters are applied. Placement tracks resource capacity and > usage and does *not* track aggregate metadata [1]. Because of this, > placement cannot consider aggregate-based overcommit and will exclude > compute nodes that do not have capacity based on per compute node > overcommit. > > How to prepare: if you have been relying on per aggregate overcommit, during > your upgrade to Ocata, you must change to using per compute node overcommit > ratios in order for your scheduling behavior to stay consistent. Otherwise, > you may notice increased NoValidHost scheduling failures as the > aggregate-based overcommit is no longer being considered.
You can safely > remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter > from your enabled_filters and you do not need to replace them with any other > core/ram/disk filters. The placement query takes care of the core/ram/disk > filtering instead, so CoreFilter, RamFilter, and DiskFilter are redundant. > > Thanks, > -melanie > > [1] Placement has been a new slate for resource management and prior to > placement, there were conflicts between the different methods for setting > overcommit ratios that were never addressed, such as, "which value to take > if a compute node has overcommit set AND the aggregate has it set? Which > takes precedence?" And, "if a compute node is in more than one aggregate, > which overcommit value should be taken?" So, the ambiguities were not > something that was desirable to bring forward into placement. So we are a user of this feature and I do have some questions/concerns. We use this feature to segregate capacity/hosts based on CPU allocation ratio using aggregates. This is because we have different offers/flavors based on those allocation ratios. This is part of our business model. A flavor extra_specs value is used to schedule instances on appropriate hosts using AggregateInstanceExtraSpecsFilter. Our setup has a configuration management system and we use aggregates exclusively when it comes to allocation ratios. We do not rely on the cpu_allocation_ratio config in nova-scheduler or nova-compute. One of the reasons is we do not wish to have to update/package/redeploy our configuration management system just to add one or multiple compute nodes to an aggregate/capacity pool. This means anyone (likely an operator or other provisioning technician) can perform this action without having to touch or even know about our configuration management system. We can also transfer capacity from one aggregate to another if there is a need, again, using aggregate memberships. (we do "evacuate" the node if there are instances on it) Our capacity monitoring is based on aggregate memberships and this offers an easy overview of the current capacity. Note that a host can be in one and only one aggregate in our setup. What's the migration path for us? My understanding is that we will now be forced to have people rely on our configuration management system (which they don't have access to) to perform simple tasks we used to be able to do through the API. I find this unfortunate and I would like to be offered an alternative solution, as the currently proposed solution is not acceptable for us. We are losing "agility" in our operational tasks. -- Mathieu From hjensas at redhat.com Thu Jan 18 21:39:24 2018 From: hjensas at redhat.com (Harald Jensås) Date: Thu, 18 Jan 2018 22:39:24 +0100 Subject: [openstack-dev] [ironic] FFE - Requesting FFE for Routed Networks support. In-Reply-To: <9b4d2edd-e718-09f3-13f0-638d5f4351a6@redhat.com> References: <1516182841.12010.13.camel@redhat.com> <9b4d2edd-e718-09f3-13f0-638d5f4351a6@redhat.com> Message-ID: <1516311564.8927.1.camel@redhat.com> On Wed, 2018-01-17 at 16:05 +0100, Dmitry Tantsur wrote: > Hi! > > I'm essentially +1 on granting this FFE, as it's low-risk work for > a great > feature. See one comment inline. > > On 01/17/2018 10:54 AM, Harald Jensås wrote: > > Requesting FFE for Routed Network support in networking-baremetal.
> > ------------------------------------------------------------------- > > > > > > # Pros > > ------ > > With the patches up for review[7] we have a working ml2 agent; > > __depends on neutron fix__; and mechanism driver combination that > > enables support to bind ports on neutron routed networks. > > > > Specifically we report the bridge_mappings data to neutron, which > > enable the _find_candidate_subnets() method in neutron ipam[1] to > > succeed in finding a candidate subnet available to the ironic node > > when > > ports on routed segments are bound. > > > > This functionality will allow users to take advantage of the > > functionality added in DHCP Agent[2] which enables the DHCP agent > > to > > service other subnets on the network via DHCP relay. For Ironic > > this > > means we can support deploying nodes on a remote L3 network, e.g > > different datacenter or different rack/rack-row. > > > > > > > > # Cons > > ------ > > Integration with placement does not currently work. > > > > Neutron uses Nova host-aggregates in combination with Placement. > > Specifically hosts are added to a host-aggregate for segments based > > on > > SEGMENT_HOST_MAPPING. Ironic nodes cannot currently be added to > > host- > > aggregates in Nova. Because of this the following will appear in > > the > > neutron logs when ironic-neutron agent is started: > >     RESP BODY: {"itemNotFound": {"message": "Compute host > node- > > id> could not be found.", "code": 404}} > > > > Also the placement api cannot be used to find good candidate ironic > > nodes with a baremetal port on the correct segment. This will have > > to be worked around by the operator via capabilities and flavor > > properties or manual additions to resource providers in placement. > > > > Depending on the direction of other projects, neutron and nova, the > > way > > placement will finally work is not certain. > > > > Either the nova work [3] and [4], or a neutron change to use > > placement > > only or a fallback to placement in neutron would be possible. In > > either > > case there should be no need to change the networking-baremetal > > agent > > or mechanism driver. > > > > > > # Risks > > ------- > > Unless this bug[5] is fixed we might break the current baremetal > > mechanism driver functionality. I have proposed a patch[6] to > > neutron > > that fix the issue. In case no fix lands for this neutron bug soon > > we > > should probably push these changes to Rocky. > > Let's add Depends-On to the first patch in the chain to make sure > your patches  > don't merge until the fix is merged. > The fix for the neutron issue was approved and is now merged. 
https://review.openstack.org/#/c/534449/ > > > > > > # Core reviewers > > ---------------- > > Julia Kreger, Sam Betts > > > > > > > > > > [1] https://git.openstack.org/cgit/openstack/neutron/tree/neutron/d > > b/ip > > am_backend_mixin.py#n697 > > [2] https://review.openstack.org/#/c/468744/ > > [3] https://review.openstack.org/#/c/421009/ > > [4] https://review.openstack.org/#/c/421011/ > > [5] https://bugs.launchpad.net/neutron/+bug/1743579 > > [6] https://review.openstack.org/#/c/534449/ > > [7] https://review.openstack.org/#/q/project:openstack/networking-b > > arem > > etal > > > > > > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- |Harald Jensås        |hjensas at redhat.com   |  www.redhat.com |+46 (0)701 91 23 17  |  hjensas:irc From Louie.Kwan at windriver.com Thu Jan 18 21:54:14 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Thu, 18 Jan 2018 21:54:14 +0000 Subject: [openstack-dev] [requirements] requirements-tox-validate-projects FAILURE Message-ID: <47EFB32CD8770A4D9590812EE28C977E961DD31C@ALA-MBC.corp.ad.wrs.com> Would like to add the following module to openstack.masakari project https://github.com/pytransitions/transitions https://review.openstack.org/#/c/534990/ requirements-tox-validate-projects failed: http://logs.openstack.org/90/534990/6/check/requirements-tox-validate-projects/ed69273/ara/result/4ee4f7a1-456c-4b89-933a-fe282cf534a3/ What else need to be done? Thanks. Louie.Kwan at windriver.com From jaypipes at gmail.com Thu Jan 18 22:19:55 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 18 Jan 2018 17:19:55 -0500 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> Message-ID: <57306d1a-529b-3907-7c5a-a9b46057b236@gmail.com> On 01/18/2018 03:54 PM, Mathieu Gagné wrote: > Hi, > > On Tue, Jan 16, 2018 at 4:24 PM, melanie witt wrote: >> Hello Stackers, >> >> This is a heads up to any of you using the AggregateCoreFilter, >> AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler. >> These filters have effectively allowed operators to set overcommit ratios >> per aggregate rather than per compute node in <= Newton. >> >> Beginning in Ocata, there is a behavior change where aggregate-based >> overcommit ratios will no longer be honored during scheduling. Instead, >> overcommit values must be set on a per compute node basis in nova.conf. >> >> Details: as of Ocata, instead of considering all compute nodes at the start >> of scheduler filtering, an optimization has been added to query resource >> capacity from placement and prune the compute node list with the result >> *before* any filters are applied. Placement tracks resource capacity and >> usage and does *not* track aggregate metadata [1]. Because of this, >> placement cannot consider aggregate-based overcommit and will exclude >> compute nodes that do not have capacity based on per compute node >> overcommit. >> >> How to prepare: if you have been relying on per aggregate overcommit, during >> your upgrade to Ocata, you must change to using per compute node overcommit >> ratios in order for your scheduling behavior to stay consistent. 
Otherwise, >> you may notice increased NoValidHost scheduling failures as the >> aggregate-based overcommit is no longer being considered. You can safely >> remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter >> from your enabled_filters and you do not need to replace them with any other >> core/ram/disk filters. The placement query takes care of the core/ram/disk >> filtering instead, so CoreFilter, RamFilter, and DiskFilter are redundant. >> >> Thanks, >> -melanie >> >> [1] Placement has been a new slate for resource management and prior to >> placement, there were conflicts between the different methods for setting >> overcommit ratios that were never addressed, such as, "which value to take >> if a compute node has overcommit set AND the aggregate has it set? Which >> takes precedence?" And, "if a compute node is in more than one aggregate, >> which overcommit value should be taken?" So, the ambiguities were not >> something that was desirable to bring forward into placement. > > So we are a user of this feature and I do have some questions/concerns. > > We use this feature to segregate capacity/hosts based on CPU > allocation ratio using aggregates. > This is because we have different offers/flavors based on those > allocation ratios. This is part of our business model. > A flavor extra_specs is use to schedule instances on appropriate hosts > using AggregateInstanceExtraSpecsFilter. The AggregateInstanceExtraSpecsFilter will continue to work, but this filter is run *after* the placement service would have already eliminated compute node records due to placement considering the allocation ratio set for the compute node provider's inventory records. > Our setup has a configuration management system and we use aggregates > exclusively when it comes to allocation ratio. Yes, that's going to be a problem. You will need to use your configuration management system to write the nova.CONF.XXX_allocation_ratio configuration option values appropriately for each compute node. > We do not rely on cpu_allocation_ratio config in nova-scheduler or nova-compute. > One of the reasons is we do not wish to have to > update/package/redeploy our configuration management system just to > add one or multiple compute nodes to an aggregate/capacity pool. Yes, I understand. > This means anyone (likely an operator or other provisioning > technician) can perform this action without having to touch or even > know about our configuration management system. > We can also transfer capacity from one aggregate to another if there > is a need, again, using aggregate memberships. Aggregates don't have "capacity". Aggregates are not capacity pools. Only compute nodes provide resources for guests to consume. > (we do "evacuate" the > node if there are instances on it) > Our capacity monitoring is based on aggregate memberships and this > offer an easy overview of the current capacity. By "based on aggregate membership", I believe you are referring to a system where you have all compute nodes in a particular aggregate only schedule instances with a particular flavor "A" and so you manage "capacity" by saying things like "aggregate X can fit 10 more instances of flavor A in it"? Do I understand you correctly? > Note that a host can > be in one and only one aggregate in our setup. In *your* setup. And that's the only reason this works for you. You'd get totally unpredictable behaviour if your compute nodes were in multiple aggregates. > What's the migration path for us? 
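To be clear, the aggregate-metadata/extra-specs matching part of your setup keeps working exactly as before. A quick sketch of that pattern (the aggregate, host and metadata key names here are made up for illustration):

nova aggregate-create agg-gold
nova aggregate-set-metadata agg-gold cpu_tier=gold
nova aggregate-add-host agg-gold compute-01
nova flavor-key m1.gold set aggregate_instance_extra_specs:cpu_tier=gold

A flavor tagged like that will still only land on hosts in agg-gold; it is only the core/ram/disk ratio arithmetic that placement now performs before any of the filters run.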
> > My understanding is that we will now be forced to have people rely on > our configuration management system (which they don't have access to) > to perform simple task we used to be able to do through the API. > I find this unfortunate and I would like to be offered an alternative > solution as the current proposed solution is not acceptable for us. > We are loosing "agility" in our operational tasks. I see a possible path forward: We add a new CONF option called "disable_allocation_ratio_autoset". This new CONF option would disable the behaviour of the nova-compute service in automatically setting the allocation ratio of its inventory records for VCPU, MEMORY_MB and DISK_GB resources. This would allow you to set compute node allocation ratios in batches. At first, it might be manual... executing something like this against the API database: UPDATE inventories INNER JOIN resource_provider ON inventories.resource_provider_id = resource_provider.id AND inventories.resource_class_id = $RESOURCE_CLASS_ID INNER JOIN resource_provider_aggregates ON resource_providers.id = resource_provider_aggregates.resource_provider_id INNER JOIN provider_aggregates ON resource_provider_aggregates.aggregate_id = provider_aggregates.id AND provider_aggregates.uuid = $AGGREGATE_UUID SET inventories.allocation_ratio = $NEW_VALUE; We could follow up with a little CLI tool that would do the above for you on the command line... something like this: nova-manage db set_aggregate_placement_allocation_ratio --aggregate_uuid=$AGG_UUID --resource_class=VCPU --ratio 16.0 Of course, you could always call the Placement REST API to override the allocation ratio for particular providers: DATA='{"resource_provider_generation": X, "allocation_ratio": $RATIO}' curl -XPUT -H "Content-Type: application/json" -H$AUTH_TOKEN -d$DATA \ https://$PLACEMENT/resource_providers/$RP_UUID/inventories/VCPU and you could loop through all the resource providers listed under a particular aggregate, which you can find using something like this: curl https://$PLACEMENT/resource_providers?member_of:$AGG_UUID Anyway, there's multiple ways to set the allocation ratios in batches, as you can tell. I think the key is somehow disabling the behaviour of the nova-compute service of overriding the allocation ratio of compute nodes with the value of the nova.cnf options. Thoughts? -jay From cboylan at sapwetik.org Thu Jan 18 22:27:59 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 18 Jan 2018 14:27:59 -0800 Subject: [openstack-dev] [requirements] requirements-tox-validate-projects FAILURE In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E961DD31C@ALA-MBC.corp.ad.wrs.com> References: <47EFB32CD8770A4D9590812EE28C977E961DD31C@ALA-MBC.corp.ad.wrs.com> Message-ID: <1516314479.2080791.1240370112.48E05F7B@webmail.messagingengine.com> On Thu, Jan 18, 2018, at 1:54 PM, Kwan, Louie wrote: > Would like to add the following module to openstack.masakari project > > https://github.com/pytransitions/transitions > > https://review.openstack.org/#/c/534990/ > > requirements-tox-validate-projects failed: > > http://logs.openstack.org/90/534990/6/check/requirements-tox-validate-projects/ed69273/ara/result/4ee4f7a1-456c-4b89-933a-fe282cf534a3/ > > What else need to be done? Reading the log [0] the job failed because python-cratonclient removed its check-requirements job. This was done in https://review.openstack.org/#/c/535344/ as part of the craton retirement and should be fixed on the requirements side by https://review.openstack.org/#/c/535351/. 
I think a recheck at this point will come back green (so I have done that for you). [0] http://logs.openstack.org/90/534990/6/check/requirements-tox-validate-projects/ed69273/job-output.txt.gz#_2018-01-18_20_07_54_531014 Hope this helps, Clark From ihrachys at redhat.com Thu Jan 18 22:47:35 2018 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 18 Jan 2018 14:47:35 -0800 Subject: [openstack-dev] [gate][devstack][neutron][qa][release] Switch to lib/neutron in gate In-Reply-To: References: Message-ID: On Thu, Jan 18, 2018 at 8:33 AM, Michael Johnson wrote: > This sounds great Ihar! > > Let us know when we should make the changes to the neutron-lbaas projects. > > Michael Hi Michael! You can already start by introducing new service names without the q-* prefix for your services, for example neutron-lbaasv2 instead of q-lbaasv2. You can have both in parallel, behaving the same way, like we do in the neutron devstack plugin: https://github.com/openstack/neutron/blob/master/devstack/plugin.sh#L34 Once you have it in your devstack plugin, you should be able to safely replace all occurrences of q-lbaasv2 in infra projects with the new name. Handy link to detect them: http://codesearch.openstack.org/?q=q-lbaas&i=nope&files=&repos= Thanks! Ihar From mgagne at calavera.ca Fri Jan 19 00:24:53 2018 From: mgagne at calavera.ca (Mathieu Gagné) Date: Thu, 18 Jan 2018 19:24:53 -0500 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: <57306d1a-529b-3907-7c5a-a9b46057b236@gmail.com> References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> <57306d1a-529b-3907-7c5a-a9b46057b236@gmail.com> Message-ID: On Thu, Jan 18, 2018 at 5:19 PM, Jay Pipes wrote: > On 01/18/2018 03:54 PM, Mathieu Gagné wrote: >> >> Hi, >> >> On Tue, Jan 16, 2018 at 4:24 PM, melanie witt wrote: >>> >>> Hello Stackers, >>> >>> This is a heads up to any of you using the AggregateCoreFilter, >>> AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler. >>> These filters have effectively allowed operators to set overcommit ratios >>> per aggregate rather than per compute node in <= Newton. >>> >>> Beginning in Ocata, there is a behavior change where aggregate-based >>> overcommit ratios will no longer be honored during scheduling. Instead, >>> overcommit values must be set on a per compute node basis in nova.conf. >>> >>> Details: as of Ocata, instead of considering all compute nodes at the >>> start >>> of scheduler filtering, an optimization has been added to query resource >>> capacity from placement and prune the compute node list with the result >>> *before* any filters are applied. Placement tracks resource capacity and >>> usage and does *not* track aggregate metadata [1]. Because of this, >>> placement cannot consider >>> aggregate-based overcommit and will exclude >>> compute nodes that do not have capacity based on per compute node >>> overcommit.
The placement query takes care of the >>> core/ram/disk >>> filtering instead, so CoreFilter, RamFilter, and DiskFilter are >>> redundant. >>> >>> Thanks, >>> -melanie >>> >>> [1] Placement has been a new slate for resource management and prior to >>> placement, there were conflicts between the different methods for setting >>> overcommit ratios that were never addressed, such as, "which value to >>> take >>> if a compute node has overcommit set AND the aggregate has it set? Which >>> takes precedence?" And, "if a compute node is in more than one aggregate, >>> which overcommit value should be taken?" So, the ambiguities were not >>> something that was desirable to bring forward into placement. >> >> >> So we are a user of this feature and I do have some questions/concerns. >> >> We use this feature to segregate capacity/hosts based on CPU >> allocation ratio using aggregates. >> This is because we have different offers/flavors based on those >> allocation ratios. This is part of our business model. >> A flavor extra_specs is use to schedule instances on appropriate hosts >> using AggregateInstanceExtraSpecsFilter. > > > The AggregateInstanceExtraSpecsFilter will continue to work, but this filter > is run *after* the placement service would have already eliminated compute > node records due to placement considering the allocation ratio set for the > compute node provider's inventory records. Ok. Does it mean I will have to use something else to properly filter compute nodes based on flavor? Is there a way for a compute node to expose some arbitrary feature/spec instead and still use flavor extra_specs to filter? (I still have to read on placement API) I don't mind migrating out of aggregates but I need to find a way to make it "self service" through the API with granular control like aggregates used to offer. We won't be giving access to our configuration manager to our technicians and even less direct access to the database. I see that you are suggesting using the placement API below, see my comments below. >> Our setup has a configuration management system and we use aggregates >> exclusively when it comes to allocation ratio. > > > Yes, that's going to be a problem. You will need to use your configuration > management system to write the nova.CONF.XXX_allocation_ratio configuration > option values appropriately for each compute node. Yes, that's my understanding and which is a concern for us. >> We do not rely on cpu_allocation_ratio config in nova-scheduler or >> nova-compute. >> One of the reasons is we do not wish to have to >> update/package/redeploy our configuration management system just to >> add one or multiple compute nodes to an aggregate/capacity pool. > > > Yes, I understand. > >> This means anyone (likely an operator or other provisioning >> technician) can perform this action without having to touch or even >> know about our configuration management system. >> We can also transfer capacity from one aggregate to another if there >> is a need, again, using aggregate memberships. > > > Aggregates don't have "capacity". Aggregates are not capacity pools. Only > compute nodes provide resources for guests to consume. Aggregates have been so far a very useful construct for us. You might not agree with our concept of "capacity pools" but so far, that's what we got and has been working very well for years. Our monitoring/operations are entirely based on this concept. 
You list the aggregate members, do some computation and cross-calculation with hypervisor stats, and you have a capacity monitoring system going. >> (we do "evacuate" the >> node if there are instances on it) >> Our capacity monitoring is based on aggregate memberships and this >> offers an easy overview of the current capacity. > > By "based on aggregate membership", I believe you are referring to a system > where you have all compute nodes in a particular aggregate only schedule > instances with a particular flavor "A" and so you manage "capacity" by > saying things like "aggregate X can fit 10 more instances of flavor A in > it"? > > Do I understand you correctly? Yes, more or less. We do group compute nodes based on flavor "series". (we have A1 and B1 series) > >> Note that a host can >> be in one and only one aggregate in our setup. > In *your* setup. And that's the only reason this works for you. You'd get > totally unpredictable behaviour if your compute nodes were in multiple > aggregates. Yes. It has worked very well for us so far. I do agree that it's not perfect and that you technically can end up with unpredictable behaviour if a host is part of multiple aggregates. That's why we avoid doing it. >> What's the migration path for us? >> >> My understanding is that we will now be forced to have people rely on >> our configuration management system (which they don't have access to) >> to perform simple tasks we used to be able to do through the API. >> I find this unfortunate and I would like to be offered an alternative >> solution, as the currently proposed solution is not acceptable for us. >> We are losing "agility" in our operational tasks. > > I see a possible path forward: > > We add a new CONF option called "disable_allocation_ratio_autoset". This new > CONF option would disable the behaviour of the nova-compute service in > automatically setting the allocation ratio of its inventory records for > VCPU, MEMORY_MB and DISK_GB resources. > > This would allow you to set compute node allocation ratios in batches. > > At first, it might be manual... executing something like this against the > API database: > > UPDATE inventories > INNER JOIN resource_providers > ON inventories.resource_provider_id = resource_providers.id > AND inventories.resource_class_id = $RESOURCE_CLASS_ID > INNER JOIN resource_provider_aggregates > ON resource_providers.id = > resource_provider_aggregates.resource_provider_id > INNER JOIN provider_aggregates > ON resource_provider_aggregates.aggregate_id = provider_aggregates.id > AND provider_aggregates.uuid = $AGGREGATE_UUID > SET inventories.allocation_ratio = $NEW_VALUE; > > We could follow up with a little CLI tool that would do the above for you on > the command line... something like this: > > nova-manage db set_aggregate_placement_allocation_ratio > --aggregate_uuid=$AGG_UUID --resource_class=VCPU --ratio 16.0 > > Of course, you could always call the Placement REST API to override the > allocation ratio for particular providers: > > DATA='{"resource_provider_generation": X, "allocation_ratio": $RATIO}' > curl -XPUT -H "Content-Type: application/json" -H "X-Auth-Token: $AUTH_TOKEN" -d "$DATA" \ > https://$PLACEMENT/resource_providers/$RP_UUID/inventories/VCPU > > and you could loop through all the resource providers listed under a > particular aggregate, which you can find using something like this: > > curl https://$PLACEMENT/resource_providers?member_of=$AGG_UUID > > Anyway, there are multiple ways to set the allocation ratios in batches, as > you can tell.
> > I think the key is somehow disabling the behaviour of the nova-compute > service of overriding the allocation ratio of compute nodes with the value > of the nova.conf options. > > Thoughts? So far, a couple of challenges/issues: We used to have fine-grained control over the calls a user could make to the Nova API: * os_compute_api:os-aggregates:add_host * os_compute_api:os-aggregates:remove_host This means we could make it so our technicians could *ONLY* manage this aspect of our cloud. With the placement API, it's all or nothing. (and I found out some weeks ago that it's hardcoded to the "admin" role) And you now have to craft your own curl calls; there is no UI in Horizon anymore. (let me know if I missed something regarding the ACL) I will read up on the placement API and see with my coworkers how we could adapt our systems/tools to use it instead. (assuming disable_allocation_ratio_autoset will be implemented) But ACL is a big concern for us if we go down that path. While I agree there are very technical/raw solutions to the issue (like the ones you suggested), please understand that from our side, this is still a major regression in the usability of OpenStack from an operator point of view. And it's unfortunate that I feel I now have to play catch-up and explain my concerns about a "fait accompli" that wasn't well communicated to the operators and wasn't clearly mentioned in the release notes. I would have appreciated an email to the ops list explaining the proposed change and asking if anyone had concerns/comments about it. I don't often reply, but I felt I had to this time, as this is a major change for us. Thanks for your time and suggestions, -- Mathieu From ken1ohmichi at gmail.com Fri Jan 19 00:52:59 2018 From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi) Date: Thu, 18 Jan 2018 16:52:59 -0800 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: <1516307710-sup-8918@lrrr.local> References: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> <1516292546-sup-5080@lrrr.local> <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> <1516300734-sup-8558@lrrr.local> <52aafa3f-ace9-26d9-2e17-8344d65f5081@ham.ie> <1516306691-sup-5728@lrrr.local> <1516307710-sup-8918@lrrr.local> Message-ID: 2018-01-18 12:36 GMT-08:00 Doug Hellmann : > Excerpts from Doug Hellmann's message of 2018-01-18 15:21:12 -0500: >> Excerpts from Graham Hayes's message of 2018-01-18 19:25:02 +0000: >> > >> > On 18/01/18 18:52, Doug Hellmann wrote: >> > > Excerpts from Graham Hayes's message of 2018-01-18 17:52:39 +0000: >> > >> On 18/01/18 16:25, Doug Hellmann wrote: >> > >>> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +0000: >> > >> >> > >> >> > >> >> > >>> >> > >>> In the past the QA team agreed to accept trademark-related tests from >> > >>> all projects in the tempest repo. Has that changed? >> > >>> >> > >> >> > >> There has not been an explict rejection but in all conversations the >> > >> response has been "non core projects are outside the scope of tempest". >> > >> >> > >> Honestly, everytime we have tried to do something to core tempest >> > >> we have had major pushback, and I want to clarify this before I or >> > >> someone else put in the work of porting the base clients, getting CI >> > >> configured*, and proposing the tests to tempest. >> > > >> > > OK. >> > > >> > > The current policy doesn't say anything about "core" or different >> > > trademark programs or any other criteria. >> > > >> > > The TC therefore encourages the DefCore committee to consider it an >> > > indication of future technical direction that we do not want tests >> > > outside of the Tempest repository used for trademark enforcement, and >> > > that any new or existing tests that cover capabilities they want to >> > > consider for trademark enforcement should be placed in Tempest. >> > > >> > > That all seems very clear to me (setting aside some specific word >> > > choices like "future technical direction" that tie the resolution >> > > to language in the bylaws). Regardless of technical reasons why >> > > it may not be necessary, we still have many social justifications >> > > for doing it the way we originally set out to do it. Tests related >> > > to trademark enforcement need to go into the tempest repository. >> > > >> > > The way I think this should work (and the way I remember us describing >> > > it at the time the policy was established) is the Interop WG >> > > (previously DefCore) should identify capabilities and tests, then >> > > ask project teams to reproduce those tests in the tempest repo. >> > > When the tests land, they can be used by the trademark program. >> > > Teams can also, at their leisure, decide whether to remove the >> > > original versions of the tests from whatever repo they existed in >> > > to begin with. >> > > >> > > Graham, you've proposed a new resolution with several options for >> > > where to put tests for "add-on programs." I don't think we need >> > > that resolution if we want the tests to continue to live in tempest.
>> > > >> > > The TC therefore encourages the DefCore committee to consider it an >> > > indication of future technical direction that we do not want tests >> > > outside of the Tempest repository used for trademark enforcement, and >> > > that any new or existing tests that cover capabilities they want to >> > > consider for trademark enforcement should be placed in Tempest. >> > > >> > > That all seems very clear to me (setting aside some specific word >> > > choices like "future technical direction" that tie the resolution >> > > to language in the bylaws). Regardless of technical reasons why >> > > it may not be necessary, we still have many social justifications >> > > for doing it the way we originally set out to do it. Tests related >> > > to trademark enforcement need to go into the tempest repository. >> > > >> > > The way I think this should work (and the way I remember us describing >> > > it at the time the policy was established) is the Interop WG >> > > (previously DefCore) should identify capabilities and tests, then >> > > ask project teams to reproduce those tests in the tempest repo. >> > > When the tests land, they can be used by the trademark program. >> > > Teams can also, at their leisure, decide whether to remove the >> > > original versions of the tests from whatever repo they existed in >> > > to begin with. >> > > >> > > Graham, you've proposed a new resolution with several options for >> > > where to put tests for "add-on programs." I don't think we need >> > > that resolution if we want the tests to continue to live in tempest. >> > > The existing resolution doesn't qualify which tests, beyond "for >> > > trademark enforcement" and more words won't make that more clear, >> > > IMO. >> > > >> > > Now if you *do* want to change the policy, we should talk about >> > > that. But I can't tell whether you want to change it, you're worried >> > > the policy is unclear, or it is not being followed. Can you clarify >> > > which it is? >> > >> > It is not being followed. >> > >> > I have brought this up at every forum session on these programs, and the >> > people in the room from QA have *always* pushed back on it. >> >> OK, so that's a problem. I need to hear from the QA team why they've >> reversed that decision. >> >> > >> > And, for clarity (I saw this in a few logs) QA have *never* said that >> > they will take the interop designated tests for the DNS project into >> > openstack/tempest. >> >> When we approved the resolution that describes the current policy, the >> QA team agreed that they would take tests for trademark. There was no >> stipulation about which projects those apply to. > > I feel pretty sure that was discussed in a TC meeting, but I can't > find that. I do find Matt and Ken'ichi voting +1 on the resolution > itself. https://review.openstack.org/#/c/312718/. If I remember > correctly, Ken'ichi was the PTL at the time. Yeah, I still agree with the resolution. When I voted +1 on it, core projects were defined as six projects: Nova, Cinder, Glance, Keystone, Neutron and Swift. And the project navigator also showed these six projects as core projects. Now I cannot find such a definition on the project navigator[1]; has the definition been changed? I just want to clarify: is it true that designate and heat are becoming core projects? If there is a concrete decision, I don't have any objections to having these projects' tests in Tempest, per the resolution.
Thanks Ken Ohmichi --- [1]: https://www.openstack.org/software/project-navigator From soulxu at gmail.com Fri Jan 19 02:14:08 2018 From: soulxu at gmail.com (Alex Xu) Date: Fri, 19 Jan 2018 10:14:08 +0800 Subject: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG In-Reply-To: References: <11ce8607-0a59-401d-0605-c36c2a901cf9@gmail.com> Message-ID: ++, I also want to join this party :) 2018-01-09 8:40 GMT+08:00 Zhipeng Huang : > Agree 100% to avoid regular meeting and it is better to have bi-weekly > email report. Meeting should be arranged event based, and I think given the > status of OpenStack community's work on resource provider, mostly what we > need to do is attend k8s meetings (sig-scheduler, wg-resource-management, > etc.) > > BTW for the RM SIG proposed here, let's not limit the scope to k8s only > since we might have broader collaborative efforts happening in the future. > k8s is our first primary target community to sync up with. > > On Tue, Jan 9, 2018 at 4:12 AM, Jay Pipes wrote: > >> On 01/08/2018 12:26 PM, Zhipeng Huang wrote: >> >>> Hi all, >>> >>> With the maturing of resource provider/placement feature landing in >>> OpenStack in recent release, and also in light of Kubernetes community >>> increasing attention to the similar effort, I want to propose to form a >>> Resource Management SIG as a contact point for OpenStack community to >>> communicate with Kubernetes Resource Management WG[0] and other related >>> SIGs. >>> >>> The formation of the SIG is to provide a gathering of similar interested >>> parties and establish an official channel. Currently we have already >>> OpenStack developers actively participating in kubernetes discussion (e.g. >>> [1]), we would hope the ResMgmt SIG could further help such activities and >>> better align the resource mgmt mechanism, especially the data modeling >>> between the two communities (or even more communities with similar desire). >>> >>> I have floated the idea with Jay Pipes and Chris Dent and received >>> positive feedback. The SIG will have a co-lead structure so that people >>> could spearheading in the area they are most interested in. For example for >>> me as Cyborg dev, I will mostly lead in the area of acceleration[2]. >>> >>> If you are also interested please reply to this thread, and let's find a >>> efficient way to form this SIG. Efficient means no extra unnecessary >>> meetings and other undue burdens. >>> >> >> +1 >> >> From the Nova perspective, the scheduler meeting (which is Mondays at >> 1400 UTC) is the primary meeting where resource tracking and accounting >> issues are typically discussed. >> >> Chris Dent has done a fabulous job recording progress on the resource >> providers and placement work over the last couple releases by issuing >> status emails to the openstack-dev@ mailing list each Friday. >> >> I think having a bi-weekly cross-project (or even cross-ecosystem if >> we're talking about OpenStack+k8s) status email reporting any big events in >> the resource tracking world would be useful. As far as regular meetings for >> a resource management SIG, I'm +0 on that. I prefer to have targeted >> topical meetings over regular meetings. 
>> >> Best, >> -jay >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Zhipeng (Howard) Huang > > Standard Engineer > IT Standard & Patent/IT Product Line > Huawei Technologies Co,. Ltd > Email: huangzhipeng at huawei.com > Office: Huawei Industrial Base, Longgang, Shenzhen > > (Previous) > Research Assistant > Mobile Ad-Hoc Network Lab, Calit2 > University of California, Irvine > Email: zhipengh at uci.edu > Office: Calit2 Building Room 2402 > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Fri Jan 19 02:26:26 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 19 Jan 2018 10:26:26 +0800 Subject: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG In-Reply-To: References: <11ce8607-0a59-401d-0605-c36c2a901cf9@gmail.com> Message-ID: Feel free to add your name to the wiki :) On Fri, Jan 19, 2018 at 10:14 AM, Alex Xu wrote: > ++, I also want to join this party :) > > 2018-01-09 8:40 GMT+08:00 Zhipeng Huang : > >> Agree 100% to avoid regular meeting and it is better to have bi-weekly >> email report. Meeting should be arranged event based, and I think given the >> status of OpenStack community's work on resource provider, mostly what we >> need to do is attend k8s meetings (sig-scheduler, wg-resource-management, >> etc.) >> >> BTW for the RM SIG proposed here, let's not limit the scope to k8s only >> since we might have broader collaborative efforts happening in the future. >> k8s is our first primary target community to sync up with. >> >> On Tue, Jan 9, 2018 at 4:12 AM, Jay Pipes wrote: >> >>> On 01/08/2018 12:26 PM, Zhipeng Huang wrote: >>> >>>> Hi all, >>>> >>>> With the maturing of resource provider/placement feature landing in >>>> OpenStack in recent release, and also in light of Kubernetes community >>>> increasing attention to the similar effort, I want to propose to form a >>>> Resource Management SIG as a contact point for OpenStack community to >>>> communicate with Kubernetes Resource Management WG[0] and other related >>>> SIGs. >>>> >>>> The formation of the SIG is to provide a gathering of similar >>>> interested parties and establish an official channel. Currently we have >>>> already OpenStack developers actively participating in kubernetes >>>> discussion (e.g. [1]), we would hope the ResMgmt SIG could further help >>>> such activities and better align the resource mgmt mechanism, especially >>>> the data modeling between the two communities (or even more communities >>>> with similar desire). >>>> >>>> I have floated the idea with Jay Pipes and Chris Dent and received >>>> positive feedback. The SIG will have a co-lead structure so that people >>>> could spearheading in the area they are most interested in. For example for >>>> me as Cyborg dev, I will mostly lead in the area of acceleration[2]. 
>>>> >>>> If you are also interested please reply to this thread, and let's find >>>> a efficient way to form this SIG. Efficient means no extra unnecessary >>>> meetings and other undue burdens. >>>> >>> >>> +1 >>> >>> From the Nova perspective, the scheduler meeting (which is Mondays at >>> 1400 UTC) is the primary meeting where resource tracking and accounting >>> issues are typically discussed. >>> >>> Chris Dent has done a fabulous job recording progress on the resource >>> providers and placement work over the last couple releases by issuing >>> status emails to the openstack-dev@ mailing list each Friday. >>> >>> I think having a bi-weekly cross-project (or even cross-ecosystem if >>> we're talking about OpenStack+k8s) status email reporting any big events in >>> the resource tracking world would be useful. As far as regular meetings for >>> a resource management SIG, I'm +0 on that. I prefer to have targeted >>> topical meetings over regular meetings. >>> >>> Best, >>> -jay >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> -- >> Zhipeng (Howard) Huang >> >> Standard Engineer >> IT Standard & Patent/IT Product Line >> Huawei Technologies Co,. Ltd >> Email: huangzhipeng at huawei.com >> Office: Huawei Industrial Base, Longgang, Shenzhen >> >> (Previous) >> Research Assistant >> Mobile Ad-Hoc Network Lab, Calit2 >> University of California, Irvine >> Email: zhipengh at uci.edu >> Office: Calit2 Building Room 2402 >> >> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghanshyammann at gmail.com Fri Jan 19 03:28:02 2018 From: ghanshyammann at gmail.com (Ghanshyam Mann) Date: Fri, 19 Jan 2018 08:58:02 +0530 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: Message-ID: On Thu, Jan 11, 2018 at 10:06 PM, Colleen Murphy wrote: > Hi everyone, > > We have governance review under debate[1] that we need the community's help on. > The debate is over what recommendation the TC should make to the Interop team > on where the tests it uses for the OpenStack trademark program should be > located, specifically those for the new add-on program being introduced. 
Let me > badly summarize: > > A couple of years ago we issued a resolution[2] officially recommending that > the Interop team use solely tempest as its source of tests for capability > verification. The Interop team has always had the view that the developers, > being the people closest to the project they're creating, are the best people > to write tests verifying correct functionality, and so the Interop team doesn't > maintain its own test suite, instead selecting tests from those written in > coordination between the QA team and the other project teams. These tests are > used to validate clouds applying for the OpenStack Powered tag, and since all > of the projects included in the OpenStack Powered program already had tests in > tempest, this was a natural fit. When we consider adding new trademark programs > comprising of other projects, the test source is less obvious. Two examples are > designate, which has never had tests in the tempest repo, and heat, which > recently had its tests removed from the tempest repo. > > So far the patch proposes three options: > > 1) All trademark-related tests should go in the tempest repo, in accordance > with the original resolution. This would mean that even projects that have > never had tests in tempest would now have to add at least some of their > black-box tests to tempest. > > The value of this option is that centralizes tests used for the Interop program > in a location where interop-minded folks from the QA team can control them. The > downside is that projects that so far have avoided having a dependency on > tempest will now lose some control over the black-box tests that they use for > functional and integration that would now also be used for trademark > certification. > There's also concern for the review bandwidth of the QA team - we can't expect > the QA team to be continually responsible for an ever-growing list of projects > and their trademark tests. > > 2) All trademark-related tests for *add-on projects* should be sourced from > plugins external to tempest. > > The value of this option is it allows project teams to retain control over > these tests. The potential problem with it is that individual project teams are > not necessarily reviewing test changes with an eye for interop concerns and so > could inadvertently change the behavior of the trademark-verification tools. > > 3) All trademark-related tests should go in a single separate tempest plugin. > > This has the value of giving the QA and Interop teams control over > interop-related tests while also making clear the distinction between tests > used for trademark verification and tests used for CI. Matt's argument against > this is that there actually is very little distinction between those two cases, > and that a given test could have many different applications. Option #3 can solve the centralized test location issue, but it leads to another problem. If we start moving all interop tests to a separate interop repo, then many of the existing tempest tests (used by interop) also fall under this category. That means those existing tests would need to live in two locations: the new interop plugin, and tempest itself, since tempest is used for many other purposes as well (the gate, production cloud testing, stability, etc.). Duplicating tests in two locations is not a good option. > > Other ideas that have been thrown around are: > > * Maintaining a branch in the tempest repo that Interop tests are pulled from.
> > * Tagging Interop-related tests with decorators to make it clear that they need > to be handled carefully. Nice and important point. This has been taken care of very carefully in Tempest until now. When changing or removing tests, we have a very clear and strict process [4] to avoid affecting any interop tests, and I think it has been 100% successful so far; I have not heard any complaint that we changed a test in a way that broke interop. Adding a new decorator etc. had other issues, so we did not accept it, but the main problem is solved by defining the process. > > At the heart of the issue is the perception that projects that keep their > integration tests within the tempest tree are somehow blessed, maybe by the QA > team or by the TC. It would be nice to try to clarify what technical > and political > reasons we have for why different projects have tests in different places - > review bandwidth of the QA team, ownership/control by the project teams, > technical interdependency between certain projects, or otherwise. > > Ultimately, as Jeremy said in the comments on the resolution patch, the > recommendation should be one that works best for the QA and Interop teams. So > far we've heard from Matt and Mark expressing moderate support for option 2. > We'd like to hear more from those teams about how they see this working, > especially with regard to concerns about the quality and stability standards > that out-of-tree tests may be held to. We additionally need input from the > whole community on how maintaining trademark-related tests in tempest will > affect you if you don't already have your tests there. We'd especially like to > address any perceptions of favoritism or exclusionism that stem from these > issues. > > And to quickly clear up one detail before it makes it onto this thread, the > Queens Community Goal about splitting tempest plugins out of the main project's > tree[3] is entirely about addressing technical problems related to packaging for > existing tempest plugins, it's not a decree about what should live > within the tempest > repository nor does it have anything to do with the Interop program. > > As I'm not deeply steeped in the history of either the Interop or QA teams I am > sure I've misrepresented some details here, I'm sorry about that. But we'd like > to get this resolution moving forward and we're currently stuck, so this thread > is intended to gather enough community input to get unstuck and avoid letting > this proposal become stale. Please respond to this thread or comment on the > resolution proposal[1] if you have any thoughts. > > Colleen > > [1] https://review.openstack.org/#/c/521602 > [2] https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html > [3] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html > ..
[4] https://docs.openstack.org/tempest/latest/test_removal.html > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ghanshyammann at gmail.com Fri Jan 19 04:10:00 2018 From: ghanshyammann at gmail.com (Ghanshyam Mann) Date: Fri, 19 Jan 2018 09:40:00 +0530 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> References: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> <1516292546-sup-5080@lrrr.local> <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> Message-ID: On Thu, Jan 18, 2018 at 11:22 PM, Graham Hayes wrote: > On 18/01/18 16:25, Doug Hellmann wrote: >> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +0000: > > > >> >> In the past the QA team agreed to accept trademark-related tests from >> all projects in the tempest repo. Has that changed? >> > > There has not been an explicit rejection but in all conversations the > response has been "non core projects are outside the scope of tempest". > > Honestly, every time we have tried to do something to core tempest > we have had major pushback, and I want to clarify this before I or > someone else put in the work of porting the base clients, getting CI > configured*, and proposing the tests to tempest. Yes, I do not remember that we ever rejected an actual proposal or patches, or ever said that the QA team is not going to accept interop-needed tests inside Tempest even when they are for projects outside tempest's usual scope. Rather, it was discussed that we would need to check the situation when a new project was going to be made part of the interop program. At least I remember the discussion during the Barcelona summit where heat tests were talked about; we said we could check that based on when heat was going to make it into the interop program. Anyway, let's analyse the current situation and work on the best possible solution rather than past things. I agree with Doug's point: the previous resolution was passed, so why this new resolution, and what is not clear in the previous one? I think the main issue is in understanding the difference between the 'trademark' program and the 'add-on trademark' program. Let me add the things point by point. 1. What is the difference between the "Trademark" program and the "Add-on Trademark" program from an interop certification perspective? Can new projects go under the "Trademark" program? This will be helpful for understanding whether to keep all "Trademark" program tests and "Add-on" program tests together or separate (for example, any difference in certification, logo etc.). 2. As per the previous resolution, and with all the points about centralized test location, expert review, project-independent ownership etc., I agreed with option #1 and am not saying "NO" to it now either. Now the question comes down to the practical implementation of that resolution, which depends on 2 factors: 1. The scale and number of programs going to be in interop: as per the current proposal (I think it is heat and designate, around 20-30 tests in total), there is no issue for the tempest team to add/review/maintain them. But if that grows in the number of programs (rather than the number of tests; having 50 designate tests instead of 10 is not much different), say 10 more programs, then it becomes difficult for the QA team to maintain those. 2. QA team review bandwidth.
This is one of the big obstacles to extending the tempest scope. Like other projects, the QA team faces a shortage of contributors. For the last 1-2 years, I have been trying to attract new contributors to QA during upstream training, the mentorship program etc., but people tend to disappear after a month or so. All the QA members are trying their best in this area, but unfortunately with no success. With both these factors, I feel we can go with the current resolution (option #1, per the solution below) and also help the QA team if the situation gets worse (QA team members are also human beings and need time to sleep :)). 1. The QA team accepts all interop-defined program tests (only the tests needed by interop). 2. Define a very clear process for collaboration between Interop, QA and the project teams to help with adding/maintaining tests. Something like clear guidelines for test requirements from interop, and a MUST +1 from interop and the project PTL. 3. If the interop program grows to where it becomes difficult for the QA team to maintain, accept the necessary change to the resolution. -gmann > > - Graham > > > * With zuulv3 this is *much* easier, so not as big a deal as it once was > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From aj at suse.com Fri Jan 19 08:25:07 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 19 Jan 2018 09:25:07 +0100 Subject: [openstack-dev] [cerberus][sticks] Retire openstack/*cerberus* and sticks In-Reply-To: <7de8fb68-078d-ed09-b4ee-ed0037b41444@suse.com> References: <7de8fb68-078d-ed09-b4ee-ed0037b41444@suse.com> Message-ID: <8f6f2f6f-f02c-0fd0-310f-270fd6e85ee9@suse.com> On 2017-12-09 15:57, Andreas Jaeger wrote: > Anthony, Christophe, Gauvain, > > The last merge to one of the cerberus repositories was December 2015, so I > propose to retire the repositories to avoid people submitting changes to > them. > > See > https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project > on how to retire a project. > > If you need help with retiring, please tell me, FYI, Gauvain started the retirement process and included sticks as well: https://review.openstack.org/#/c/535660/ thanks, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From amoralej at redhat.com Fri Jan 19 09:07:05 2018 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Fri, 19 Jan 2018 10:07:05 +0100 Subject: [openstack-dev] [release][puppet] Tarballs missing for some puppet modules in pike Message-ID: Hi, Review https://review.openstack.org/#/c/535206/ was merged yesterday to create some new releases for puppet modules in pike. I've observed that releases were not created for the following modules: puppet-tacker puppet-zaqar puppet-tempest puppet-vswitch puppet-vitrage puppet-trove I couldn't find the logs for the release creation jobs, so I'm not sure what the problem was. What's the best way to proceed here? Would reverting and resending the review work? Best regards, Alfredo -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cdent+os at anticdent.org Fri Jan 19 09:46:30 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 19 Jan 2018 09:46:30 +0000 (GMT) Subject: [openstack-dev] [nova] [placement] resource providers update 18-03 Message-ID: Here's resource provider and placement update 18-03. I'm travelling so this version may be a bit abridged. # Most Important This remains mostly the same: getting alternate hosts all the way in and finishing up nested resource provider support (as ProviderTree on the nova side and support for nested in /allocation_candidates on the placement side). Both of these will likely need some time to be rigorously run through their paces before the end of the cycle, so the sooner stuff merges the sooner we can start getting the whole suite exercised by humans. Earlier in the week I did some exercising by humans and was confused by the state of traits handling on /allocation_candidates (it could be that the current state is the expected state, but the code didn't make that clear) so I made a bug on it to make sure that confusion didn't get forgotten: https://bugs.launchpad.net/nova/+bug/1743860 I highlight this not because I think that problem is especially "most important", but because it is a type of problem that I think we'll see a fair bit of over the next small number of weeks as we close out Queens and head for Rocky. (Looks like Alex is working on the correct fix at https://review.openstack.org/#/c/535642/ Based on that it seems most of the confusion here is mine, but the fact that it was hard to tell what was up or what the plan was is something we probably need to get better at.) The Rocky PTG prep etherpad is in flight; please add things you think need to be talked about at the PTG: https://etherpad.openstack.org/p/nova-ptg-rocky There's an email thread in progress that is probably pretty important to understand if you're working on placement-related things: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126283.html The behavior of the Aggregate*Filters has gone awry in the face of placement satisfying allocation_ratio concerns before those filters ever see proposed hosts. There are some ideas on how to improve the situation in the thread, but it appears there are still some open questions. # What's Changed An issue with foreign key constraints and deleting a resource provider whose root is itself has been resolved and the change merged: https://review.openstack.org/#/c/529519/ Anybody (or thing) that was experimenting with deleting resource providers with a database with some integrity would have encountered this problem. A proposal to create a Resource Management SIG has merged. There was some email discussion about it: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126039.html # Help Wanted There are a fair few unstarted bugs related to placement that could do with some attention. Here's a handy URL: https://goo.gl/TgiPXb # Main Themes ## Nested Providers The nested provider work is proceeding along two main courses: getting the ProviderTree on the nova side gathering and syncing all the necessary information, and enabling nested provider searching when requesting /allocation_candidates. Both of these are within the same topic: https://review.openstack.org/#/q/topic:bp/nested-resource-providers One of the challenges this week was working out a reasonable way to have a read-only and thread-safe duplicate of a ProviderTree so that tree A and tree B can have what amounts to a diff done on them.
This is being figured out on https://review.openstack.org/#/c/533244/ ## Alternate Hosts The last piece of the puzzle, changing the RPC interface, is pending: https://review.openstack.org/#/q/topic:bp/return-alternate-hosts Related to this, exploration has started on limiting the number of responses that the scheduler will get when requesting hosts (some of which will become alternates): https://review.openstack.org/#/c/531517/ # Other * Support traits in allocation candidates https://review.openstack.org/#/c/535642/ * Extract instance allocation removal code https://review.openstack.org/#/c/513041/ * Sending global request ids from nova to placement https://review.openstack.org/#/q/topic:bug/1734625 * VGPU support https://review.openstack.org/#/q/topic:bp/add-support-for-vgpu * Use traits with ironic https://review.openstack.org/#/q/topic:bp/ironic-driver-traits * Move api schemas to own dir https://review.openstack.org/#/c/528629/ Just one of these left * request limit /allocation_candidate WIP https://review.openstack.org/#/c/531517/ * Update resources once in update available resources https://review.openstack.org/#/c/520024/ (This ought, when it works, to help address some performance concerns with nova making too many requests to placement) * spec: treat devices as generic resources https://review.openstack.org/#/c/497978/ This is a WIP and will need to move to Rocky * log options at DEBUG when starting wsgi app https://review.openstack.org/#/c/519462/ * Support aggregate affinity filters/weighers https://review.openstack.org/#/q/topic:bp/aggregate-affinity A Rocky-targeted improvement to affinity handling * Move placement body samples in docs to own dir https://review.openstack.org/#/c/529998/ * Improved functional test coverage for placement https://review.openstack.org/#/q/topic:bp/placement-test-enhancement * Functional tests for traits api https://review.openstack.org/#/c/524094/ * Functional test improvements for resource class https://review.openstack.org/#/c/524506/ * annotate loadapp() (for placement wsgi app) as public https://review.openstack.org/#/c/526691/ * Remove microversion fallback code from report client https://review.openstack.org/#/c/528794/ * Fix documentation nits in set_and_clear_allocations https://review.openstack.org/#/c/531001/ * WIP: SchedulerReportClient.set_aggregates_for_provider https://review.openstack.org/#/c/532995/ This is likely for rocky as it depends on changing the api for aggregates handling on the placement side to accept and provide a generation * Naming update cn to rp (for clarity) https://review.openstack.org/#/c/529786/ * Add functional test for two-cell scheduler behaviors https://review.openstack.org/#/c/452006/ (This is old and maybe out of date, but something we might like to resurrect) * Make API history doc consistent https://review.openstack.org/#/c/477478/ * WIP: General policy sample file for placement https://review.openstack.org/#/c/524425/ * Fix missing marker functions https://review.openstack.org/#/c/514579/ (some placement exceptions are not translatable) * Support relay RP for allocation candidates https://review.openstack.org/#/c/533437/ Bug fix for sharing with multiple providers # End As usual, I'm sure I missed something. Please reply with any corrections. Your prize is an invitation to do some exercising by humans.
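If you want to start that exercising by hand, one illustrative way to poke at the traits handling (not canonical, and assuming the in-progress microversion that adds the required parameter has landed in the placement service you are talking to) is:

  curl -s -H "X-Auth-Token: $TOKEN" \
    -H "OpenStack-API-Version: placement latest" \
    "https://$PLACEMENT/allocation_candidates?resources=VCPU:1,MEMORY_MB:512&required=HW_CPU_X86_AVX2" \
    | jq '.provider_summaries'

Comparing the provider_summaries with and without the required parameter is a quick way to see whether traits are actually being honoured, which is exactly the confusion behind the bug mentioned above.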
-- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From thierry at openstack.org Fri Jan 19 10:05:22 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 19 Jan 2018 11:05:22 +0100 Subject: [openstack-dev] [tc] Technical Committee Status update, January 19th Message-ID: Hi! This is the weekly summary of Technical Committee initiatives. You can find the full list of all open topics (updated twice a week) at: https://wiki.openstack.org/wiki/Technical_Committee_Tracker If you are working on something (or plan to work on something) governance-related that is not reflected on the tracker yet, please feel free to add to it ! == Recently-approved changes == * Extra ATCs updates: trove, I18n * Goal updates: blazar, trove Not much was approved last week as we rebuild a pipeline of changes after the holiday break. == Rocky goals == We now have a set of proposed goals and associated champions: * Storyboard Migration [1] (diablo_rojo) * Remove mox [2] (chandankumar) * Ensure pagination links [3] (mordred) * Add Cold upgrades capabilities [4] (masayuki) * Enable mutable configuration [5] (gcb) [1] https://review.openstack.org/513875 [2] https://review.openstack.org/532361 [3] https://review.openstack.org/532627 [4] https://review.openstack.org/#/c/533544/ [5] https://review.openstack.org/534605 Many thanks to the volunteers! Now we need to decide how many goals we want to pursue during the Rocky cycle, and which ones. The selection has a good mix of dev-facing improvements (storyboard, mox), operator-facing improvements (cold upgrade capabilities, mutable config), and enduser-facing improvements (pagination links). Emilien started a thread to collect the community input on those, before the TC makes the final cut. Please chime in on the thread at: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126266.html == Voting in progress == Matt Treinish's update to the Python PTI for tests to be specific and explicit is still missing a couple of votes to pass: https://review.openstack.org/#/c/519751/ Following Monty's suggestion, I proposed a change in the release naming process polling to use a CIVS public poll to rank the naming candidates. This is still missing a couple of votes to pass: https://review.openstack.org/#/c/534226/ Doug proposed to use StoryBoard for tracking Rocky goal completion (rather than a truckload of governance changes). Voting is in progress on that change, please comment at: https://review.openstack.org/#/c/534443/ == Under discussion == The discussion started by Graham Hayes to clarify how the testing of interoperability programs should be organized in the age of add-on trademark programs is still going on, now on an active mailing-list thread. Please chime in to inform the TC choice: https://review.openstack.org/521602 http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html A new OpenStack project team was proposed to add a function-as-a-service component to OpenStack (called Qinling). At this point this team would obviously be added to the Rocky development cycle. Please give your opinion at: https://review.openstack.org/#/c/533827/ == TC member actions for the coming week(s) == TC member activity will be focused on Rocky goal selection for the coming week. 
We also need to look into the discussions we need to have at the PTG -- we have plenty of room for additional discussion topics during the Monday-Tuesday part of the event, so it is easy to dedicate a room for half a day to a full day to make good progress on critical issues. Finally, we should also be thinking about topics that would make good post-lunch presentations at the PTG in Dublin: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126102.html == Office hours == To be more inclusive of all timezones and more mindful of people for which English is not the primary language, the Technical Committee dropped its dependency on weekly meetings. So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays For the coming week, I expect discussions to be focused around Rocky goal selection. Cheers, -- Thierry Carrez (ttx) From colleen at gazlene.net Fri Jan 19 10:44:55 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 19 Jan 2018 11:44:55 +0100 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> <1516292546-sup-5080@lrrr.local> <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> <1516300734-sup-8558@lrrr.local> <52aafa3f-ace9-26d9-2e17-8344d65f5081@ham.ie> <1516306691-sup-5728@lrrr.local> <1516307710-sup-8918@lrrr.local> Message-ID: On Fri, Jan 19, 2018 at 1:52 AM, Ken'ichi Ohmichi wrote: > 2018-01-18 12:36 GMT-08:00 Doug Hellmann : >> >> I feel pretty sure that was discussed in a TC meeting, but I can't >> find that. I do find Matt and Ken'ichi voting +1 on the resolution >> itself. https://review.openstack.org/#/c/312718/. If I remember >> correctly, Ken'ichi was the PTL at the time. > > Yeah, I have still agreed with the resolution. > When I voted +1 on that, core projects were defined as 6 projects like > Nova, Cinder, Glance, Keystone, Neutron and Swift. > And the project navigator also showed these 6 projects as core projects. > Now I cannot find such definition on the project navigator[1], the > definition has been changed? > I just want to clarify "is it true that designate and heat become core > projects?" > If there is a concrete decision, I don't have any objections against > that we have these projects tests in Tempest as the resolution. I think the fuzziness between what we're colloquially calling "core" (or sometimes "integrated"), what has tests in tempest, and what is part of the original trademark program, is part of the problem. As I understand it, designate and heat are not trying to become "core". What they are applying for is to be part of a new subgroup within the trademark program. The question at hand is, given that they are not "core" (whatever that really means), but they are going to be part of the trademark program, is there a technical reason they shouldn't have some of their tests in tempest? And if not, is there a social reason for it? 
Colleen > > Thanks > Ken Ohmichi > > --- > [1]: https://www.openstack.org/software/project-navigator > From gr at ham.ie Fri Jan 19 10:50:50 2018 From: gr at ham.ie (Graham Hayes) Date: Fri, 19 Jan 2018 10:50:50 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> <1516292546-sup-5080@lrrr.local> <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> Message-ID: <8a13827f-51d0-c699-c45b-1f1b5796acbf@ham.ie> On 19/01/18 04:10, Ghanshyam Mann wrote: > On Thu, Jan 18, 2018 at 11:22 PM, Graham Hayes wrote: >> On 18/01/18 16:25, Doug Hellmann wrote: >>> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +0000: >> >> >> >>> >>> In the past the QA team agreed to accept trademark-related tests from >>> all projects in the tempest repo. Has that changed? >>> >> >> There has not been an explicit rejection but in all conversations the >> response has been "non core projects are outside the scope of tempest". >> >> Honestly, every time we have tried to do something to core tempest >> we have had major pushback, and I want to clarify this before I or >> someone else put in the work of porting the base clients, getting CI >> configured*, and proposing the tests to tempest. > > Yes, i do not remember that we have rejected the actual proposal or > patches or ever said that "QA team not going to accept interop needed > test inside Tempest even those are for other project than tempest > scope". Rather its been discussed that we need to check the situation > when new project going to be make in interop program. At least i > remember the discussion during Barcelona summit where there talked > about heat tests. We discussed we can check that based on when heat is > going to make in interop program. The decision seems to have been made already - people from the QA team have been pushing for Option 2 since Boston. > Anyways let's analyse the current situation and work on best possible > solution than past things. I agree with Doug point about previous > resolution was passed and then why this new resolution ? and what is > not clear in previous resolution?. I think main issue is in > understanding the difference between 'trademark' program and 'adds-on > trademark' program. Let me add the things point by point. They are both trademark programs under the By-Laws. > 1. What is difference between "Trademark" program and "Adds-on > Trademark" program from interop certification? Can new projects go > under "Trademark" program. > This will be helpful to understand the situation of keeping all > "Trademark" program tests and "Adds-on" program tests together or > separate. For example: any difference of doing their certification, > logo etc. The only difference is that an Add On needs a cloud to pass an OpenStack Powered program first. (e.g. you cannot have OpenStack DNS / Orchestration on its own) > 2. As per previous resolution, and with all point of centralized test > location, expertise review, project independent ownership etc etc i > agree with option#1 and no "NO" to that now also. Now question comes > to practice implementation of that resolution which depends on 2 > factor: > > 1. scale and number of program going to be in interop: > As per current proposal, (i think its heat and designate and > around 20-30 tests as total) there is no issue for tempest team to > add/review/maintain them. But if that grows in number of program (than > number tests for e.x.
having 50 tests of designate than 10 is not much > different things) and say 10 more program then it is difficult for QA > team to maintain those. > > 2. QA team review bandwidth. > This is one of the big obstacle to extend the tempest scope. Like > other project, QA team face less contributors issues. Since 1-2 years, > I have been trying to attract new contributor in QA during upstream > training, mentorship program etc but people gets disappear after month > or so. Even all QA members are trying their best in this area but > unfortunately no success. > > With both these factor i feel we can go with current resolution > (option#1- below solution) and help QA team also if situation gets > worst (QA team also human beings and need time to sleep :)). > > 1. QA team accept all interop defined program tests (tests only needed > by interop ). > 2. Define a very clear process for collaboration between Interop, QA, > project team to help on adding/maintaining tests. Something like clear > guidelines of test req from interop, MUST +1 from interop and project > PTL. > 3. If interop program grows more which become difficult to maintain by > QA team then accept the necessary change to resolution. This sounds like a great way forward. > -gmann > >> >> - Graham >> >> >> * With zuulv3 this is *much* easier, so not as big a deal as it once was >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From gr at ham.ie Fri Jan 19 10:53:27 2018 From: gr at ham.ie (Graham Hayes) Date: Fri, 19 Jan 2018 10:53:27 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: Message-ID: On 19/01/18 03:28, Ghanshyam Mann wrote: > On Thu, Jan 11, 2018 at 10:06 PM, Colleen Murphy wrote: >> Hi everyone, >> >> We have governance review under debate[1] that we need the community's help on. >> The debate is over what recommendation the TC should make to the Interop team >> on where the tests it uses for the OpenStack trademark program should be >> located, specifically those for the new add-on program being introduced. Let me >> badly summarize: >> >> A couple of years ago we issued a resolution[2] officially recommending that >> the Interop team use solely tempest as its source of tests for capability >> verification. The Interop team has always had the view that the developers, >> being the people closest to the project they're creating, are the best people >> to write tests verifying correct functionality, and so the Interop team doesn't >> maintain its own test suite, instead selecting tests from those written in >> coordination between the QA team and the other project teams. 
These tests are >> used to validate clouds applying for the OpenStack Powered tag, and since all >> of the projects included in the OpenStack Powered program already had tests in >> tempest, this was a natural fit. When we consider adding new trademark programs >> comprising of other projects, the test source is less obvious. Two examples are >> designate, which has never had tests in the tempest repo, and heat, which >> recently had its tests removed from the tempest repo. >> >> So far the patch proposes three options: >> >> 1) All trademark-related tests should go in the tempest repo, in accordance >> with the original resolution. This would mean that even projects that have >> never had tests in tempest would now have to add at least some of their >> black-box tests to tempest. >> >> The value of this option is that centralizes tests used for the Interop program >> in a location where interop-minded folks from the QA team can control them. The >> downside is that projects that so far have avoided having a dependency on >> tempest will now lose some control over the black-box tests that they use for >> functional and integration that would now also be used for trademark >> certification. >> There's also concern for the review bandwidth of the QA team - we can't expect >> the QA team to be continually responsible for an ever-growing list of projects >> and their trademark tests. >> >> 2) All trademark-related tests for *add-on projects* should be sourced from >> plugins external to tempest. >> >> The value of this option is it allows project teams to retain control over >> these tests. The potential problem with it is that individual project teams are >> not necessarily reviewing test changes with an eye for interop concerns and so >> could inadvertently change the behavior of the trademark-verification tools. >> >> 3) All trademark-related tests should go in a single separate tempest plugin. >> >> This has the value of giving the QA and Interop teams control over >> interop-related tests while also making clear the distinction between tests >> used for trademark verification and tests used for CI. Matt's argument against >> this is that there actually is very little distinction between those two cases, >> and that a given test could have many different applications. > > options#3 can solve centralize test location issue but there is > another issue it leads. If we start moving all interop test to > separate interop repo then, many of exiting tempest test (used by > interop) also falls under this category. Which means those existing > tempest tests need to stay in 2 location one in new interop plugin and > second in tempest also as tempest is being used for lot other purpose > also, gate, production Cloud testing & stability etc. Duplication > tests in 2 location is not good option. We could just install the interop plugin into all the gates, and ensure it is ran, which would mean the tests are only ever in one place. >> >> Other ideas that have been thrown around are: >> >> * Maintaining a branch in the tempest repo that Interop tests are pulled from. >> >> * Tagging Interop-related tests with decorators to make it clear that they need >> to be handled carefully. > > Nice and imp point. This is been take care very carefully in Tempest > till now . 
While changing tests or removing test, we have a very clear > and strict process [4] to not affect any interop tests and i think it > is 100% success till now, i have not heard any complained that we have > changed any test which has broken interop. Adding new decorator etc > has different issues to we did not accepted but main problem is solved > by defining process.. Out of interest, what is the issue with a new test tag? It seems like it would be a good way to highlight to people what tests need extra care. > >> At the heart of the issue is the perception that projects that keep their >> integration tests within the tempest tree are somehow blessed, maybe by the QA >> team or by the TC. It would be nice to try to clarify what technical >> and political >> reasons we have for why different projects have tests in different places - >> review bandwidth of the QA team, ownership/control by the project teams, >> technical interdependency between certain projects, or otherwise. >> >> Ultimately, as Jeremy said in the comments on the resolution patch, the >> recommendation should be one that works best for the QA and Interop teams. So >> far we've heard from Matt and Mark expressing moderate support for option 2. >> We'd like to hear more from those teams about how they see this working, >> especially with regard to concerns about the quality and stability standards >> that out-of-tree tests may be held to. We additionally need input from the >> whole community on how maintaining trademark-related tests in tempest will >> affect you if you don't already have your tests there. We'd especially like to >> address any perceptions of favoritism or exclusionism that stem from these >> issues. >> >> And to quickly clear up one detail before it makes it onto this thread, the >> Queens Community Goal about splitting tempest plugins out of the main project's >> tree[3] is entirely about addressing technical problems related to packaging for >> existing tempest plugins, it's not a decree about what should live >> within the tempest >> repository nor does it have anything to do with the Interop program. >> >> As I'm not deeply steeped in the history of either the Interop or QA teams I am >> sure I've misrepresented some details here, I'm sorry about that. But we'd like >> to get this resolution moving forward and we're currently stuck, so this thread >> is intended to gather enough community input to get unstuck and avoid letting >> this proposal become stale. Please respond to this thread or comment on the >> resolution proposal[1] if you have any thoughts. >> >> Colleen >> >> [1] https://review.openstack.org/#/c/521602 >> [2] https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html >> [3] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html >> > > ..
[4] https://docs.openstack.org/tempest/latest/test_removal.html > >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From amoralej at redhat.com Fri Jan 19 10:54:09 2018 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Fri, 19 Jan 2018 11:54:09 +0100 Subject: [openstack-dev] [release][puppet] Tarballs missing for some puppet modules in pike In-Reply-To: References: Message-ID: Looking at http://logs.openstack.org/0f/0f0c9871144bf3b658f73f60f35f3068612a6bb2/release-post/tag-releases/629d779/job-output.txt.gz (thanks ykarel for pointing it out) it seems the "run releases script" task timed out and some releases were not properly created. I'd say that the script got stuck doing a "git pull --ff-only" for a repo. Is it safe to revert/resend the review to get the missing releases? Best regards, Alfredo On Fri, Jan 19, 2018 at 10:07 AM, Alfredo Moralejo Alonso < amoralej at redhat.com> wrote: > Hi, > > Review https://review.openstack.org/#/c/535206/ was merged yesterday to > create some new releases for puppet modules in pike. I've observed that > releases were not created for following modules: > > puppet-tacker > puppet-zaqar > puppet-tempest > puppet-vswitch > puppet-vitrage > puppet-trove > > I couldn't find the logs to the release create jobs so i'm not sure what > the problem was. What's the best way to proceed here?, revert and resend > the review would work? > > Best regards, > > Alfredo > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gr at ham.ie Fri Jan 19 10:57:01 2018 From: gr at ham.ie (Graham Hayes) Date: Fri, 19 Jan 2018 10:57:01 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> <1516292546-sup-5080@lrrr.local> <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> <1516300734-sup-8558@lrrr.local> <52aafa3f-ace9-26d9-2e17-8344d65f5081@ham.ie> <1516306691-sup-5728@lrrr.local> <1516307710-sup-8918@lrrr.local> Message-ID: On 19/01/18 00:52, Ken'ichi Ohmichi wrote: > 2018-01-18 12:36 GMT-08:00 Doug Hellmann : >> Excerpts from Doug Hellmann's message of 2018-01-18 15:21:12 -0500: >>> Excerpts from Graham Hayes's message of 2018-01-18 19:25:02 +0000: >>>> >>>> On 18/01/18 18:52, Doug Hellmann wrote: >>>>> Excerpts from Graham Hayes's message of 2018-01-18 17:52:39 +0000: >>>>>> On 18/01/18 16:25, Doug Hellmann wrote: >>>>>>> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +0000: >>>>>> >>>>>> >>>>>> >>>>>>> >>>>>>> In the past the QA team agreed to accept trademark-related tests from >>>>>>> all projects in the tempest repo. Has that changed? >>>>>>> >>>>>> >>>>>> There has not been an explicit rejection but in all conversations the >>>>>> response has been "non core projects are outside the scope of tempest".
>>>>>> >>>>>> Honestly, every time we have tried to do something to core tempest >>>>>> we have had major pushback, and I want to clarify this before I or >>>>>> someone else put in the work of porting the base clients, getting CI >>>>>> configured*, and proposing the tests to tempest. >>>>> >>>>> OK. >>>>> >>>>> The current policy doesn't say anything about "core" or different >>>>> trademark programs or any other criteria. >>>>> >>>>> The TC therefore encourages the DefCore committee to consider it an >>>>> indication of future technical direction that we do not want tests >>>>> outside of the Tempest repository used for trademark enforcement, and >>>>> that any new or existing tests that cover capabilities they want to >>>>> consider for trademark enforcement should be placed in Tempest. >>>>> >>>>> That all seems very clear to me (setting aside some specific word >>>>> choices like "future technical direction" that tie the resolution >>>>> to language in the bylaws). Regardless of technical reasons why >>>>> it may not be necessary, we still have many social justifications >>>>> for doing it the way we originally set out to do it. Tests related >>>>> to trademark enforcement need to go into the tempest repository. >>>>> >>>>> The way I think this should work (and the way I remember us describing >>>>> it at the time the policy was established) is the Interop WG >>>>> (previously DefCore) should identify capabilities and tests, then >>>>> ask project teams to reproduce those tests in the tempest repo. >>>>> When the tests land, they can be used by the trademark program. >>>>> Teams can also, at their leisure, decide whether to remove the >>>>> original versions of the tests from whatever repo they existed in >>>>> to begin with. >>>>> >>>>> Graham, you've proposed a new resolution with several options for >>>>> where to put tests for "add-on programs." I don't think we need >>>>> that resolution if we want the tests to continue to live in tempest. >>>>> The existing resolution doesn't qualify which tests, beyond "for >>>>> trademark enforcement" and more words won't make that more clear, >>>>> IMO. >>>>> >>>>> Now if you *do* want to change the policy, we should talk about >>>>> that. But I can't tell whether you want to change it, you're worried >>>>> the policy is unclear, or it is not being followed. Can you clarify >>>>> which it is? >>>> >>>> It is not being followed. >>>> >>>> I have brought this up at every forum session on these programs, and the >>>> people in the room from QA have *always* pushed back on it. >>> >>> OK, so that's a problem. I need to hear from the QA team why they've >>> reversed that decision. >>> >>>> >>>> And, for clarity (I saw this in a few logs) QA have *never* said that >>>> they will take the interop designated tests for the DNS project into >>>> openstack/tempest. >>> >>> When we approved the resolution that describes the current policy, the >>> QA team agreed that they would take tests for trademark. There was no >>> stipulation about which projects those apply to. >> >> I feel pretty sure that was discussed in a TC meeting, but I can't >> find that. I do find Matt and Ken'ichi voting +1 on the resolution >> itself. https://review.openstack.org/#/c/312718/. If I remember >> correctly, Ken'ichi was the PTL at the time. > > Yeah, I have still agreed with the resolution. > When I voted +1 on that, core projects were defined as 6 projects like > Nova, Cinder, Glance, Keystone, Neutron and Swift.
> And the project navigator also showed these 6 projects as core projects. > Now I cannot find such a definition on the project navigator[1]; has the > definition been changed? > I just want to clarify: "is it true that designate and heat have become core > projects?" > If there is a concrete decision, I don't have any objections against > having these projects' tests in Tempest, as the resolution says.
This seems to be the problem - there is not now, nor has there ever been, a "core" project definition that was decided by the TC / community. We have a set of projects that most people will refer to as "core", but there is no way to add projects to that set. What was highlighted on the project navigator was a set of projects the marketing dept in the foundation considered core, which is definitely *not* something we as a community should use as a technical basis for anything.
> Thanks > Ken Ohmichi > > --- > [1]: https://www.openstack.org/software/project-navigator > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL:
From thierry at openstack.org Fri Jan 19 11:04:19 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 19 Jan 2018 12:04:19 +0100 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: <20180118160648.edpsvmhwzzni3i7s@gentoo.org> References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> <20180118160648.edpsvmhwzzni3i7s@gentoo.org> Message-ID: <79de231b-8868-cac7-97a7-739820004b4e@openstack.org>
Matthew Thode wrote: > On 18-01-18 11:20:21, Thierry Carrez wrote: >> [...] >> You'll also notice that some teams (in orange below the table in above >> link) do not have pre-allocated slots. One key difference this time >> around is that we set aside a larger number of rooms and meeting spots >> for dynamically scheduling tracks. The idea is to avoid pre-allocating >> smaller tracks to a specific time slot that might or might not create >> conflicts, and let that team book a space at a time that makes the most >> sense for them, while the event happens. This dynamic booking will be >> done through the PTGbot. >> >> So the unscheduled teams (in orange) are expected to take advantage of >> this flexibility and schedule themselves during the event. This >> flexibility is not limited to those orange teams: other teams may want >> to meet for more than their pre-allocated time slots, and can book extra >> space as well. For example if you are on the First Contact SIG and >> realize on Tuesday afternoon that you would like to continue the >> discussions on Friday morning, it's easy to extend your track to a time >> slot there. >> [...] > > As one of the teams in orange, what specific steps, if any, do we need to > take to schedule a specific time/place for a meeting?
Two options. If you already know for sure that you want to meet at a specific time on Monday, Tuesday or Friday, we can pre-allocate you a slot. Or you can come to the event, see who from the team is around, discuss what their schedule looks like and decide when/where would make the most sense to meet. Then you look at the list of bookable rooms and time slots and pick the one you prefer.
Tracks scheduled that way will appear on the PTGbot together with pre-allocated tracks. To book a slot, you will use the new PTGbot 'book' command. See: https://git.openstack.org/cgit/openstack/ptgbot/tree/README.rst#n64 Given how difficult it is to predict conflicts, my advice would be to use the latter option and decide on the spot. -- Thierry Carrez (ttx) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From thierry at openstack.org Fri Jan 19 12:44:34 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 19 Jan 2018 13:44:34 +0100 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> <1516292546-sup-5080@lrrr.local> <0b626ab3-09c9-0899-7f9a-2309830f1d79@ham.ie> <1516300734-sup-8558@lrrr.local> <52aafa3f-ace9-26d9-2e17-8344d65f5081@ham.ie> <1516306691-sup-5728@lrrr.local> <1516307710-sup-8918@lrrr.local> Message-ID: <875c31a6-435b-42db-99a9-ac4475e5d92b@openstack.org> Colleen Murphy wrote: > On Fri, Jan 19, 2018 at 1:52 AM, Ken'ichi Ohmichi wrote: >> 2018-01-18 12:36 GMT-08:00 Doug Hellmann : >>> >>> I feel pretty sure that was discussed in a TC meeting, but I can't >>> find that. I do find Matt and Ken'ichi voting +1 on the resolution >>> itself. https://review.openstack.org/#/c/312718/. If I remember >>> correctly, Ken'ichi was the PTL at the time. >> >> Yeah, I have still agreed with the resolution. >> When I voted +1 on that, core projects were defined as 6 projects like >> Nova, Cinder, Glance, Keystone, Neutron and Swift. >> And the project navigator also showed these 6 projects as core projects. >> Now I cannot find such definition on the project navigator[1], the >> definition has been changed? >> I just want to clarify "is it true that designate and heat become core >> projects?" >> If there is a concrete decision, I don't have any objections against >> that we have these projects tests in Tempest as the resolution. > > I think the fuzziness between what we're colloquially calling "core" > (or sometimes "integrated"), what has tests in tempest, and what is > part of the original trademark program, is part of the problem. Right. People are using "core" to mean different things. And the reason is that there is no definition for it. So it's a convenient catch-all term for "the main projects" that works in all situations, whatever you mean by that. The bylaws only define "Trademark Designated OpenStack Software", and the badly-named "Defcore" subcommittee was tasked with defining it, out of a subset of the "OpenStack Technical Committee Approved Release", which itself is a subset of the OpenStack official deliverables. The resolution above did not mention "core" at all. It only mentions the interoperability tests used by the Defcore workgroup to define "Trademark Designated OpenStack Software". The tension here is that the QA team agreed to the resolution with the feeling that "Trademark Designated OpenStack Software" would be a pretty narrow set. With the creation of add-on trademark programs, that set probably increased in size more than they expected. I agree the language in our resolution is clear (and mandates that the interop tests related to Designate or Heat are accepted in tempest). We just need to reaffirm or reconsider that resolution based on the recent evolution of "Trademark Designated OpenStack Software". 
-- Thierry Carrez (ttx)
From sean.mcginnis at gmx.com Fri Jan 19 13:43:31 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 19 Jan 2018 07:43:31 -0600 Subject: [openstack-dev] [release][puppet] Tarballs missing for some puppet modules in pike In-Reply-To: References: Message-ID: <20180119134330.GA30356@sm-xps>
On Fri, Jan 19, 2018 at 11:54:09AM +0100, Alfredo Moralejo Alonso wrote: > Looking at > http://logs.openstack.org/0f/0f0c9871144bf3b658f73f60f35f3068612a6bb2/release-post/tag-releases/629d779/job-output.txt.gz > (thanks ykarel for pointing it out) it seems the "run releases script" > task timed out and some releases were not properly created. I'd say that > the script got stuck doing a "git pull --ff-only" for a repo. > > Is it safe to revert/resend the review to get the missing releases? > > Best regards, > > Alfredo >
There appears to have been a transient failure with a timeout on a git pull request in the job. I have asked in -infra if there is anyone that can just re-enqueue the job for us. It should pass this time. This one is for puppet-swift.
There was another failure yesterday with puppet-horizon that ianw was able to rerun for us, and that did succeed the second time around.
I didn't see any release job failure emails for the other ones here, though it's possible the failure could have blocked the ability to send out the email notification for them. I will check into the rest of these.
Worst case in these scenarios - yes, we can revert and redo the release patch, but I'm hoping we can just manually rerun the few left that had failed.
Thanks, Sean
> > On Fri, Jan 19, 2018 at 10:07 AM, Alfredo Moralejo Alonso < > amoralej at redhat.com> wrote: > > > Hi, > > > > Review https://review.openstack.org/#/c/535206/ was merged yesterday to > > create some new releases for puppet modules in pike. I've observed that > > releases were not created for the following modules: > > > > puppet-tacker > > puppet-zaqar > > puppet-tempest > > puppet-vswitch > > puppet-vitrage > > puppet-trove > > > > I couldn't find the logs of the release creation jobs, so I'm not sure what > > the problem was. What's the best way to proceed here? Would reverting and > > resending the review work? > > > > Best regards, > > > > Alfredo > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
From openstack at fried.cc Fri Jan 19 13:55:33 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 19 Jan 2018 07:55:33 -0600 Subject: [openstack-dev] [nova] [placement] resource providers update 18-03 In-Reply-To: References: Message-ID: <273416fc-f228-3db6-909e-08b6d3a1bca0@fried.cc>
> Earlier in the week I did some exercising by humans and was confused > by the state of traits handling on /allocation_candidates (it could be > the current state is the expected state but the code didn't make that > clear) so I made a bug on it to make sure that confusion didn't get forgotten: > >     https://bugs.launchpad.net/nova/+bug/1743860
I can help with the confusion. The current state is indeed expected (at least by me). There were some WIPs early in the cycle to get just the ?required= part of traits in place, BUT the granular resource requests effort was a superset of that.
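For anyone not following along, the difference shows up right in the query string. The un-numbered form accepts a single resource list plus one required-traits list, something like:

    GET /allocation_candidates?resources=VCPU:2,MEMORY_MB:4096&required=HW_CPU_X86_AVX2

while the granular proposal splits the request into numbered groups, along the lines of:

    GET /allocation_candidates?resources1=VCPU:2,MEMORY_MB:4096&required1=HW_CPU_X86_AVX2&resources2=SRIOV_NET_VF:1

Treat the exact parameter spellings as provisional - they follow the specs as they stand, and the granular ones in particular could still shift before the code merges.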
Granular was mostly finished even at that time, but the final piece of the puzzle relies on code that's in progress right now (NRP in allocation candidates) so it has been on hold. Whereas I hope it's still possible to tie all that off in Q, we're now getting to a point where it's prudent to hedge our bets and make sure we at least support traits on the single (un-numbered) request group.
TL;DR: Yes, let's move forward with Alex's patch:
> (Looks like Alex is working on the correct fix at > >     https://review.openstack.org/#/c/535642/
...but also make sure we get lots of review focus on Jay's NRP-in-alloc-cands series to give Granular a fighting chance.
From sean.mcginnis at gmx.com Fri Jan 19 14:34:58 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 19 Jan 2018 08:34:58 -0600 Subject: [openstack-dev] [release][puppet] Tarballs missing for some puppet modules in pike In-Reply-To: <20180119134330.GA30356@sm-xps> References: <20180119134330.GA30356@sm-xps> Message-ID: <20180119143457.GB30356@sm-xps>
> > There appears to have been a transient failure with a timeout on a git pull > request in the job. I have asked in -infra if there is anyone that can just > re-enqueue the job for us. It should pass this time. This one is for > puppet-swift. > > There was another failure yesterday with puppet-horizon that ianw was able to > rerun for us that did succeed the second time around. >
Just an update - it does look like it was just the one job failure. There are multiple puppet-* releases done as part of the one job, and it appears they are processed in alphabetical order. So this last time it got as far as puppet-swift (at least further along than puppet-horizon) before it hit this timeout.
I'm fairly confident once we get the job to run again it should make it through these last few releases.
From andrea.frittoli at gmail.com Fri Jan 19 15:20:31 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Fri, 19 Jan 2018 15:20:31 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: <13089710.Tc0ZGb15F4@whitebase.usersys.redhat.com> References: <13089710.Tc0ZGb15F4@whitebase.usersys.redhat.com> Message-ID:
On Fri, Jan 12, 2018 at 9:50 AM Luigi Toscano wrote: > On Thursday, 11 January 2018 23:52:00 CET Matt Riedemann wrote: > > On 1/11/2018 10:36 AM, Colleen Murphy wrote: > > > 1) All trademark-related tests should go in the tempest repo, in > > > accordance > > > > > > with the original resolution. This would mean that even projects > that > > > have > > > never had tests in tempest would now have to add at least some of > > > their > > > black-box tests to tempest. > > > > > > The value of this option is that it centralizes tests used for the Interop > > > program in a location where interop-minded folks from the QA team can > > > control them. The downside is that projects that so far have avoided > > > having a dependency on tempest will now lose some control over the > > > black-box tests that they use for functional and integration testing that > > > would now also be used for trademark certification. > > > There's also concern for the review bandwidth of the QA team - we can't > > > expect the QA team to be continually responsible for an ever-growing list > > > of projects and their trademark tests. > > > > How many tests are we talking about for designate and heat? Half a > > dozen? A dozen? More?
> > If it's just a couple of tests per project it doesn't seem terrible to > > have them live in Tempest so you get the "interop eye" on reviews, as > > noted in your email. If it's a considerable amount, then option 2 seems > > the best for the majority of parties.
> I would argue that it does not scale; what if some test is taken out of > the interoperability program, and others are added? It would mean moving tests from one > repository to another, with a change of paths. I think that solution 2, > where the repository a test belongs to and the function of a test are > not linked, is better.
This probably does not happen too often, but it does happen, and I agree that it would make things easier to have interop and non-interop tests in the same repo.
> > Ciao > -- > Luigi > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From whayutin at redhat.com Fri Jan 19 15:21:02 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 19 Jan 2018 10:21:02 -0500 Subject: [openstack-dev] [tripleo] tripleo-upgrade pike branch In-Reply-To: References: Message-ID:
Thanks Marius for sending this out and kicking off a conversation.
On Tue, Jan 2, 2018 at 12:56 PM, Marius Cornea wrote: > Hi everyone and Happy New Year! > > As the migration of the tripleo-upgrade repo to the openstack namespace is > now complete, I think it's time to create a Pike branch to capture > the current state so we can use it for Pike testing and keep the > master branch for Queens changes. The update/upgrade steps are > changing between versions and the aim of branching the repo is to keep > the update/upgrade steps clean per branch and to avoid using conditionals > based on release. Also, tripleo-upgrade should be compatible with > different tools used for deployment (tripleo-quickstart, infrared, > manual deployments) which use different vars for the version release, > so in case of using conditionals we would need extra steps to > normalize these variables. >
I understand the desire to create a branch to protect the work that has been done previously. The interesting thing is that you guys are proposing to use a branched ansible role with a branchless upstream project. I want to make sure we have enough review so that we don't hit issues in the future. Maybe that is OK, but I have at least one concern.
My concern is about gating the tripleo-upgrade role and its branches. When tripleo-quickstart, which is branchless, is changed, will we have to kick off a job for each tripleo-upgrade branch? That immediately doubles the load on the gates.
It's extremely important to properly gate this role against the versions of TripleO and OSP. I see very limited check jobs and gate jobs on tripleo-upgrades atm. I have only found [1]. I think we need to see some external and internal jobs checking and gating this role with comments posted to changes.
[1] https://review.rdoproject.org/jenkins/job/gate-tripleo-ci-centos-7-containers-multinode-upgrades-pike/
> > I wanted to bring this topic up for discussion to see if branching is
> > Thanks, > Marius > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Fri Jan 19 15:21:32 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Fri, 19 Jan 2018 09:21:32 -0600 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> Message-ID: <5A620CFC.5050802@windriver.com> On 01/18/2018 02:54 PM, Mathieu Gagné wrote: > We use this feature to segregate capacity/hosts based on CPU > allocation ratio using aggregates. > This is because we have different offers/flavors based on those > allocation ratios. This is part of our business model. > A flavor extra_specs is use to schedule instances on appropriate hosts > using AggregateInstanceExtraSpecsFilter. > > Our setup has a configuration management system and we use aggregates > exclusively when it comes to allocation ratio. > We do not rely on cpu_allocation_ratio config in nova-scheduler or nova-compute. > One of the reasons is we do not wish to have to > update/package/redeploy our configuration management system just to > add one or multiple compute nodes to an aggregate/capacity pool. > This means anyone (likely an operator or other provisioning > technician) can perform this action without having to touch or even > know about our configuration management system. > We can also transfer capacity from one aggregate to another if there > is a need, again, using aggregate memberships. (we do "evacuate" the > node if there are instances on it) > Our capacity monitoring is based on aggregate memberships and this > offer an easy overview of the current capacity. Note that a host can > be in one and only one aggregate in our setup. The existing mechanisms to control aggregate membership will still work, so the remaining issue is how to control the allocation ratios. What about implementing a new HTTP API call (as a local private patch) to set the allocation ratios for a given host? This would only be valid for your scenario where a given host is only present in a single aggregate, but it would allow your techs to modify the ratios. Chris From andrea.frittoli at gmail.com Fri Jan 19 15:29:34 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Fri, 19 Jan 2018 15:29:34 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: Message-ID: On Mon, Jan 15, 2018 at 12:59 PM Erno Kuvaja wrote: > On Thu, Jan 11, 2018 at 4:36 PM, Colleen Murphy > wrote: > > Hi everyone, > > > > We have governance review under debate[1] that we need the community's > help on. > > The debate is over what recommendation the TC should make to the Interop > team > > on where the tests it uses for the OpenStack trademark program should be > > located, specifically those for the new add-on program being introduced. > Let me > > badly summarize: > > > > A couple of years ago we issued a resolution[2] officially recommending > that > > the Interop team use solely tempest as its source of tests for capability > > verification. 
The Interop team has always had the view that the developers,
> > being the people closest to the project they're creating, are the best
> > people to write tests verifying correct functionality, and so the Interop
> > team doesn't maintain its own test suite, instead selecting tests from
> > those written in coordination between the QA team and the other project
> > teams. These tests are used to validate clouds applying for the OpenStack
> > Powered tag, and since all of the projects included in the OpenStack
> > Powered program already had tests in tempest, this was a natural fit.
> > When we consider adding new trademark programs comprising other
> > projects, the test source is less obvious. Two examples are designate,
> > which has never had tests in the tempest repo, and heat, which recently
> > had its tests removed from the tempest repo.
> >
> > So far the patch proposes three options:
> >
> > 1) All trademark-related tests should go in the tempest repo, in accordance
> > with the original resolution. This would mean that even projects that have
> > never had tests in tempest would now have to add at least some of their
> > black-box tests to tempest.
> >
> > The value of this option is that it centralizes tests used for the Interop
> > program in a location where interop-minded folks from the QA team can
> > control them. The downside is that projects that so far have avoided
> > having a dependency on tempest will now lose some control over the
> > black-box tests that they use for functional and integration testing that
> > would now also be used for trademark certification.
> > There's also concern for the review bandwidth of the QA team - we can't
> > expect the QA team to be continually responsible for an ever-growing list
> > of projects and their trademark tests.
> >
> > 2) All trademark-related tests for *add-on projects* should be sourced
> > from plugins external to tempest.
> >
> > The value of this option is it allows project teams to retain control over
> > these tests. The potential problem with it is that individual project
> > teams are not necessarily reviewing test changes with an eye for interop
> > concerns and so could inadvertently change the behavior of the
> > trademark-verification tools.
> >
> > 3) All trademark-related tests should go in a single separate tempest
> > plugin.
> >
> > This has the value of giving the QA and Interop teams control over
> > interop-related tests while also making clear the distinction between
> > tests used for trademark verification and tests used for CI. Matt's
> > argument against this is that there actually is very little distinction
> > between those two cases, and that a given test could have many different
> > applications.
> >
> > Other ideas that have been thrown around are:
> >
> > * Maintaining a branch in the tempest repo that Interop tests are pulled
> > from.
> >
> > * Tagging Interop-related tests with decorators to make it clear that
> > they need to be handled carefully.
> >
> > At the heart of the issue is the perception that projects that keep their
> > integration tests within the tempest tree are somehow blessed, maybe by
> > the QA team or by the TC. It would be nice to try to clarify what
> > technical and political reasons we have for why different projects have
> > tests in different places - review bandwidth of the QA team,
> > ownership/control by the project teams, technical interdependency between
> > certain projects, or otherwise.
>
> As someone who has been in the middle of all that already once, I'd like to
> bring up a bit more fundamental problem with this topic. I'm not able to
> provide a one-size-fits-all solution, but hopefully some insight that
> would help the community to make the right decision.
>
> I think the biggest problem is whose fox is let to guard the chicken coop.
>
> By that I mean the basic problem that our testing still relies on what
> is tested, based on which assumptions, and by whom. If the tests are
> provided by the project teams, a test is more likely to cover the
> intended use case of the feature as it's implemented, and if a bug is
> found in that, the likelihood that the test is altered is quite high;
> also, the individual projects might not have the best idea of which
> things are important for interoperability and trademark purposes.
> Obviously when the test is written against the intended behavior this is
> less likely, but such changes might still sneak in and affect
> interoperability. On the other hand, if the test is written by
> QA/interoperability people, is it actually testing the right thing, and
> is there a more fundamental need to break it later, due to the fact that
> instead of catching and reporting the bug when the test is written, we
> start enforcing it? Are the tests written based on the intended
> behavior, the documented behavior or the current actual behavior? And the
> biggest question of them all is: who is going to have the bandwidth to
> understand the depth of the projects and the ties between them, to ensure
> we minimize the above?
>
> In a perfect world all features are bug free, rational to use and well
> documented, so that anyone can easily write a test that can be run
> against any version to verify that we do not have regressions. We are
> just not living in that perfect world, and each of the options risks
> causing conflicts.
>
> I think the optimal solution, if we were introducing this as a fresh new
> concept, would be using tempest as the engine to run trademark test
> plugins from their own repo, with those plugins provided in
> collaboration: the trademark group defining which functionalities are
> tested, QA ensuring that the tests actually verify what they should be
> testing, and the project teams ensuring that the tested feature is a)
> behaving and b) tested as it's intended to work, with documentation
> aligned with that - where faults in any of the 3 could be rectified
> before enforcing. Unfortunately I do not see us as a community having
> the resources to do this "the right way", and I have a really hard time
> trying to decide which of the proposed options would be least bad.
>
Why not? So far interoperability testing has been a shared effort between the projects, QA and the interop team. For a project to be part of an interoperability program there must be resources allocated to writing / maintaining interop tests. If they are not available, the interop program won't be very successful.
andreaf
> I think the worst case scenario is that we scrape together whatever
> we can just to have something to say that we test it, and not have
> consistency nor clear responsibility of who, what and how.
> (Unfortunately I think this is the current situation and I'm super
> happy to hear that this is being discussed and the decision is not
> made lightly.)
>
> Best,
> Erno -jokke- Kuvaja
>
> > Ultimately, as Jeremy said in the comments on the resolution patch, the
> > recommendation should be one that works best for the QA and Interop teams.
> > So far we've heard from Matt and Mark expressing moderate support for
> > option 2.
> > We'd like to hear more from those teams about how they see this working,
> > especially with regard to concerns about the quality and stability
> > standards that out-of-tree tests may be held to. We additionally need
> > input from the whole community on how maintaining trademark-related tests
> > in tempest will affect you if you don't already have your tests there.
> > We'd especially like to address any perceptions of favoritism or
> > exclusionism that stem from these issues.
> >
> > And to quickly clear up one detail before it makes it onto this thread,
> > the Queens Community Goal about splitting tempest plugins out of the main
> > project's tree[3] is entirely about addressing technical problems related
> > to packaging for existing tempest plugins; it's not a decree about what
> > should live within the tempest repository nor does it have anything to do
> > with the Interop program.
> >
> > As I'm not deeply steeped in the history of either the Interop or QA
> > teams I am sure I've misrepresented some details here, I'm sorry about
> > that. But we'd like to get this resolution moving forward and we're
> > currently stuck, so this thread is intended to gather enough community
> > input to get unstuck and avoid letting this proposal become stale. Please
> > respond to this thread or comment on the resolution proposal[1] if you
> > have any thoughts.
> >
> > Colleen
> >
> > [1] https://review.openstack.org/#/c/521602
> > [2] https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html
> > [3] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
> >
> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From mgagne at calavera.ca Fri Jan 19 15:35:38 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Fri, 19 Jan 2018 10:35:38 -0500 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: <5A620CFC.5050802@windriver.com> References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> <5A620CFC.5050802@windriver.com> Message-ID:
Hi Chris,
On Fri, Jan 19, 2018 at 10:21 AM, Chris Friesen wrote: > > The existing mechanisms to control aggregate membership will still work, so > the remaining issue is how to control the allocation ratios. > > What about implementing a new HTTP API call (as a local private patch) to > set the allocation ratios for a given host? This would only be valid for > your scenario where a given host is only present in a single aggregate, but > it would allow your techs to modify the ratios.
While I agree that we can implement something in a private patch, this is something we are trying very hard to move away from.
If Nova proposes using the placement API for such use cases, I think it should also provide the same features as the replaced solutions, because people could have relied on those. And suggesting that a configuration management system is enough was unfortunately a false assumption.
I still have to figure out how a workflow based on placement will be feasible for us. But as I said, the lack of granular ACLs is a huge concern for us and I don't know how it can be addressed in the near future.
-- Mathieu
From jaypipes at gmail.com Fri Jan 19 15:46:03 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 19 Jan 2018 10:46:03 -0500 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: <5A620CFC.5050802@windriver.com> References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> <5A620CFC.5050802@windriver.com> Message-ID: <8d335e8b-ca2e-2e75-56c7-0fd004e03d61@gmail.com>
On 01/19/2018 10:21 AM, Chris Friesen wrote: > On 01/18/2018 02:54 PM, Mathieu Gagné wrote: > >> We use this feature to segregate capacity/hosts based on CPU >> allocation ratio using aggregates. >> This is because we have different offers/flavors based on those >> allocation ratios. This is part of our business model. >> A flavor extra_specs is used to schedule instances on appropriate hosts >> using AggregateInstanceExtraSpecsFilter. >> >> Our setup has a configuration management system and we use aggregates >> exclusively when it comes to allocation ratios. >> We do not rely on cpu_allocation_ratio config in nova-scheduler or >> nova-compute. >> One of the reasons is we do not wish to have to >> update/package/redeploy our configuration management system just to >> add one or multiple compute nodes to an aggregate/capacity pool. >> This means anyone (likely an operator or other provisioning >> technician) can perform this action without having to touch or even >> know about our configuration management system. >> We can also transfer capacity from one aggregate to another if there >> is a need, again, using aggregate memberships. (we do "evacuate" the >> node if there are instances on it) >> Our capacity monitoring is based on aggregate memberships and this >> offers an easy overview of the current capacity. Note that a host can >> be in one and only one aggregate in our setup. > > The existing mechanisms to control aggregate membership will still work, > so the remaining issue is how to control the allocation ratios. > > What about implementing a new HTTP API call (as a local private patch) > to set the allocation ratios for a given host? This would only be valid > for your scenario where a given host is only present in a single > aggregate, but it would allow your techs to modify the ratios.
The problem is that nova-compute will override the placement inventory records' allocation_ratio field with the value of the nova.conf CONF.$resource_allocation_ratio options each time the update_available_resource() periodic task runs. :(
That's why the solution needs some way of disabling that behaviour.
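To make that concrete, the knobs in question are the per-compute overcommit options; a minimal nova.conf fragment (values are illustrative only):

    [DEFAULT]
    cpu_allocation_ratio = 16.0
    ram_allocation_ratio = 1.5
    disk_allocation_ratio = 1.0

If I read the resource tracker right, whatever these resolve to on a given compute node is written back over that node's placement inventory on every update_available_resource() pass, which is exactly why ratios applied out-of-band (e.g. per aggregate) get clobbered.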
Best,
-jay
From trown at redhat.com Fri Jan 19 15:47:40 2018 From: trown at redhat.com (John Trowbridge) Date: Fri, 19 Jan 2018 10:47:40 -0500 Subject: [openstack-dev] [tripleo] tripleo-upgrade pike branch In-Reply-To: References: Message-ID:
On Fri, Jan 19, 2018 at 10:21 AM, Wesley Hayutin wrote: > Thanks Marius for sending this out and kicking off a conversation. > > On Tue, Jan 2, 2018 at 12:56 PM, Marius Cornea wrote: > >> Hi everyone and Happy New Year! >> >> As the migration of the tripleo-upgrade repo to the openstack namespace is >> now complete, I think it's time to create a Pike branch to capture >> the current state so we can use it for Pike testing and keep the >> master branch for Queens changes. The update/upgrade steps are >> changing between versions and the aim of branching the repo is to keep >> the update/upgrade steps clean per branch and to avoid using conditionals >> based on release. Also, tripleo-upgrade should be compatible with >> different tools used for deployment (tripleo-quickstart, infrared, >> manual deployments) which use different vars for the version release, >> so in case of using conditionals we would need extra steps to >> normalize these variables. >> > > I understand the desire to create a branch to protect the work that has > been done previously. > The interesting thing is that you guys are proposing to use a branched > ansible role with a branchless upstream project. I want to make sure we > have enough review so that we don't hit issues in the future. Maybe that > is OK, but I have at least one concern. > > My concern is about gating the tripleo-upgrade role and its branches. > When tripleo-quickstart, which is branchless, is changed, will we have to > kick off a job for each tripleo-upgrade branch? That immediately doubles > the load on the gates.
I do not think CI repos should be branched. Even more than the concern Wes brought up about a larger gate matrix, think about how much would need to get backported. To start you would just have the 2 branches, but eventually you will have 3. Likely all 3 will have slight differences in how different pieces of the upgrade are called (otherwise why branch), so when you need to fix something on all branches the backports have a high potential to be non-trivial too.
Release conditionals are not perfect, but I don't think compatibility is really a major issue. Just document how to set the release, and the different CI tools that use your role will just have to adapt to that.
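To sketch what that looks like in a branchless role (the "release" variable name is an assumption here, not necessarily what tripleo-upgrade or the CI tools actually use):

    # hypothetical task list in a single, unbranched tripleo-upgrade;
    # the calling tool (quickstart, infrared, a human) sets "release"
    - name: run the Pike-specific upgrade steps
      include_tasks: upgrade/pike.yml
      when: release == 'pike'

    - name: run the Queens-and-later upgrade steps
      include_tasks: upgrade/queens.yml
      when: release in ['queens', 'master']

The file names are made up for illustration; the point is that per-release divergence lives in data and conditionals rather than in git branches.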
URL: From andrea.frittoli at gmail.com Fri Jan 19 15:53:58 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Fri, 19 Jan 2018 15:53:58 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> References: <21a4f3d8-6386-7759-9d68-95a2b635b70d@ham.ie> Message-ID: On Thu, Jan 18, 2018 at 3:33 PM Graham Hayes wrote: > > > On 11/01/18 16:36, Colleen Murphy wrote: > > Hi everyone, > > > > We have governance review under debate[1] that we need the community's > help on. > > The debate is over what recommendation the TC should make to the Interop > team > > on where the tests it uses for the OpenStack trademark program should be > > located, specifically those for the new add-on program being introduced. > Let me > > badly summarize: > > > > A couple of years ago we issued a resolution[2] officially recommending > that > > the Interop team use solely tempest as its source of tests for capability > > verification. The Interop team has always had the view that the > developers, > > being the people closest to the project they're creating, are the best > people > > to write tests verifying correct functionality, and so the Interop team > doesn't > > maintain its own test suite, instead selecting tests from those written > in > > coordination between the QA team and the other project teams. These > tests are > > used to validate clouds applying for the OpenStack Powered tag, and > since all > > of the projects included in the OpenStack Powered program already had > tests in > > tempest, this was a natural fit. When we consider adding new trademark > programs > > comprising of other projects, the test source is less obvious. Two > examples are > > designate, which has never had tests in the tempest repo, and heat, which > > recently had its tests removed from the tempest repo. > > > > > > > > > As I'm not deeply steeped in the history of either the Interop or QA > teams I am > > sure I've misrepresented some details here, I'm sorry about that. But > we'd like > > to get this resolution moving forward and we're currently stuck, so this > thread > > is intended to gather enough community input to get unstuck and avoid > letting > > this proposal become stale. Please respond to this thread or comment on > the > > resolution proposal[1] if you have any thoughts. > > > > Colleen > > > > [1] https://review.openstack.org/#/c/521602 > > [2] > https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html > > [3] > https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html > > > > I had hoped for more of a discussion on this before I jumped back into > this debate - but it seams to be stalled still, so here it goes. > > I proposed this initially as we were unclear on where the tests should > go - we had a resolution that said all tests go into openstack/tempest > (with a list of reasons why), and the guidance and discussion that been > had in various summits was that "add-ons" should stay in plugins. > > So right now, we (by the governance rules) should be pushing tests to > tempest for the new programs. > > In the resolution that placed the tests in tempest there was a few > reasons proposed: > > For example, API and behavioral changes must be carefully managed, as > must mundane aspects such as test and module naming and location > within the test suite. 
Even changes that leave tests functionally > equivalent may cause unexpected consequences for their use in DefCore > processes and validation. Placing the tests in a central repository > will make it easier to maintain consistency and avoid breaking the > trademark enforcement tool. > > This still applies, and even more so for teams that traditionally do not > have a strong QE contributor / reviewer base (aka projects not in > "core"). > > Centralizing the tests also makes it easier for anyone running the > validation tool against their cloud or cloud distribution to use the > tests. It is easier to install the test suite and its dependencies, > and it is easier to read and understand a set of tests following a > consistent implementation pattern. > > Apparently users do not need central tests anymore; feedback from > RefStack is that people who run these tests are comfortable dealing > with extra python packages. > > The point about a single set of tests, in a single location and style, > still stands. > > Finally, having the tests in a central location makes it easier to > ensure that all members of the community have equal input into what > the tests do and how they are implemented and maintained. > > Seems like a good value to strive for. > > One of the items that has been used to push back against adding > "add-ons" to tempest has been that tempest has a defined scope, and > neither of the current add-ons fits in that scope. > > Can someone clarify what the set of criteria is? I think it will help > this discussion. > > Another pushback is the "scaling" issue - adding more tests will > overload the QA team. > > Right now, DNS adds 10 tests and Orchestration adds 22, to a current suite > of 353. > > I do not think there are many other add-ons proposed yet, and the new > Vertical programs will probably mainly be re-using tests in the > openstack/tempest repo as is. > > This is not a big tent-esque influx of programs - the only projects > that can be added to the trademarks are programs in the tc-approved-release > tag [4], so I do not see scaling as a big issue, especially as these tests > cover such base concepts that changing them would mean a completely new > API, so the only overhead will be ensuring that nothing > in tempest breaks the new tests (which is a good thing for trademark > tests). > > Personally, for me, I like option 3. I did not initially add it, as I > knew it would cause endless bikeshedding, but I do think it fits both > a technical and a social model. > > I see 2 immediate routes forward: > > - Option 1, and we start adding these tests asap > - Pseudo Option 2, where we delete the resolution at [2] as it clearly > does not apply anymore, and abandon the review at [1].
> > This is something I have done as an individual in the community, not > something the designate team have pushed for. > > > [4] - > https://governance.openstack.org/tc/reference/tags/tc_approved-release.html > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.frittoli at gmail.com Fri Jan 19 16:27:56 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Fri, 19 Jan 2018 16:27:56 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: Message-ID: On Thu, Jan 11, 2018 at 4:36 PM Colleen Murphy wrote: > Hi everyone, > > We have governance review under debate[1] that we need the community's > help on. > The debate is over what recommendation the TC should make to the Interop > team > on where the tests it uses for the OpenStack trademark program should be > located, specifically those for the new add-on program being introduced. > Let me > badly summarize: > > A couple of years ago we issued a resolution[2] officially recommending > that > the Interop team use solely tempest as its source of tests for capability > verification. The Interop team has always had the view that the developers, > being the people closest to the project they're creating, are the best > people > to write tests verifying correct functionality, and so the Interop team > doesn't > maintain its own test suite, instead selecting tests from those written in > coordination between the QA team and the other project teams. These tests > are > used to validate clouds applying for the OpenStack Powered tag, and since > all > of the projects included in the OpenStack Powered program already had > tests in > tempest, this was a natural fit. When we consider adding new trademark > programs > comprising of other projects, the test source is less obvious. Two > examples are > designate, which has never had tests in the tempest repo, and heat, which > recently had its tests removed from the tempest repo. > Thanks for the summary! To be honest I don't see why this decision has to be difficult to take. Nothing we decide today is written in stone and the main risk ahead of us is to take a decision that requires a lot of upfront work and that it ends up providing no significant benefit, or even making things worst in some aspect. So we may try one way today and if we hit some significant issue we can still change. TL;DR my preferred option would be number (2) - it's the least initial effort, so the least risk, and deciding for (2) now won't make it any difficult in the future to switch to option (1) or option (3). I'm not pushing back on (2), I just think (1) is more convenient. Details below each option. > > So far the patch proposes three options: > > 1) All trademark-related tests should go in the tempest repo, in accordance > with the original resolution. 
This would mean that even projects that > have > never had tests in tempest would now have to add at least some of their > black-box tests to tempest. >
This option is a valid one, but I think it introduces too much extra work and testing complication for too little benefit.
> The value of this option is that it centralizes tests used for the Interop > program > in a location where interop-minded folks from the QA team can control > them.
There are other ways this can be achieved - it is possible to mark tests so that the team may require a +1 from interop/qa when specific tests are modified.
> The > downside is that projects that so far have avoided having a dependency on > tempest will now lose some control over the black-box tests that they > use for > functional and integration testing that would now also be used for trademark > certification. > There's also concern for the review bandwidth of the QA team - we can't > expect > the QA team to be continually responsible for an ever-growing list of > projects > and their trademark tests. >
If we restrict this to interop tests, the review bandwidth issue is probably not so bad. The QA team would have to request the domain knowledge required for proper review from the respective teams anyway. There are other complications introduced though:
- service clients and other common bits (config and so on) would have to move to Tempest, since we cannot have tempest depend on plugins. But then modifying those common bits on the Tempest side would risk breaking non-interop tests. The solution for that is to make all those bits stable interfaces for plugins.
- tempest would have to add new CI jobs to run the interop tests from add-on programs on every tempest change, so that the new code is tested on a regular basis.
- heat tests are wrapped in a Tempest plugin but actually written in Gabbi, so we would need to add Gabbi as a dependency to Tempest.
Nothing too terrible really, but I think it might not be worth the extra effort, especially now that teams' available resources are getting thinner and thinner.
> 2) All trademark-related tests for *add-on projects* should be sourced from > plugins external to tempest. >
I wouldn't go as far as saying they "should" be sourced from a plugin. I think saying that they *may* be sourced from a plugin is enough. Apart from that, this is my favourite option. The only thing really required is updating the resolution and we are ready to go. With all the plugins now in their own branchless repositories, the usability concern is not so strong anymore really.
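As a sketch of what option 2 looks like in practice, an interop-relevant test in a project-owned plugin builds only on tempest.lib stable interfaces, and an interop guideline pins it by its idempotent id rather than by file path (class and client names below are illustrative, not copied from the actual designate plugin):

    from tempest.lib import decorators

    from designate_tempest_plugin.tests import base  # hypothetical import path


    class ZonesInteropTest(base.BaseDnsV2Test):

        @decorators.idempotent_id('11111111-2222-3333-4444-555555555555')
        def test_create_zone(self):
            # An interop guideline would reference the UUID above, which
            # is why it must never change once published - even if the
            # test later moves between repositories.
            zone = self.zones_client.create_zone()
            self.addCleanup(self.zones_client.delete_zone, zone['id'])

Because the guideline keys on the idempotent id, the "which repo" question is largely a review-control question rather than a tooling one.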
> > This has the value of giving the QA and Interop teams control over > interop-related tests while also making clear the distinction between tests > used for trademark verification and tests used for CI. Matt's argument > against > this is that there actually is very little distinction between those two > cases, > and that a given test could have many different applications. > +1 on Matt's comment! Also the CI and ACL for an interop plugin might be rather complicated. The only way this might work would be if the interop team wrote their own independent set of tests used only for interop purposes. But a great advantage of using CI tests for interop purposes is that the interop tests are executed all the time and they just work. Andrea Frittoli (andreaf) > > Other ideas that have been thrown around are: > > * Maintaining a branch in the tempest repo that Interop tests are pulled > from. > > * Tagging Interop-related tests with decorators to make it clear that they > need > to be handled carefully. > > At the heart of the issue is the perception that projects that keep their > integration tests within the tempest tree are somehow blessed, maybe by > the QA > team or by the TC. It would be nice to try to clarify what technical > and political > reasons we have for why different projects have tests in different places - > review bandwidth of the QA team, ownership/control by the project teams, > technical interdependency between certain projects, or otherwise. > > Ultimately, as Jeremy said in the comments on the resolution patch, the > recommendation should be one that works best for the QA and Interop teams. > So > far we've heard from Matt and Mark expressing moderate support for option > 2. > We'd like to hear more from those teams about how they see this working, > especially with regard to concerns about the quality and stability > standards > that out-of-tree tests may be held to. We additionally need input from the > whole community on how maintaining trademark-related tests in tempest will > affect you if you don't already have your tests there. We'd especially > like to > address any perceptions of favoritism or exclusionism that stem from these > issues. > > And to quickly clear up one detail before it makes it onto this thread, the > Queens Community Goal about splitting tempest plugins out of the main > project's > tree[3] is entirely about addressing technical problems related to > packaging for > existing tempest plugins, it's not a decree about what should live > within the tempest > repository nor does it have anything to do with the Interop program. > > As I'm not deeply steeped in the history of either the Interop or QA teams > I am > sure I've misrepresented some details here, I'm sorry about that. But we'd > like > to get this resolution moving forward and we're currently stuck, so this > thread > is intended to gather enough community input to get unstuck and avoid > letting > this proposal become stale. Please respond to this thread or comment on the > resolution proposal[1] if you have any thoughts. 
> > Colleen > > [1] https://review.openstack.org/#/c/521602 > [2] > https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html > [3] > https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Jan 19 16:50:45 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 19 Jan 2018 10:50:45 -0600 Subject: [openstack-dev] [release][puppet] Tarballs missing for some puppet modules in pike In-Reply-To: <20180119143457.GB30356@sm-xps> References: <20180119134330.GA30356@sm-xps> <20180119143457.GB30356@sm-xps> Message-ID: <20180119165043.GA11289@sm-xps> > > Just an update - it does look like it was just the one job failure. There are > multiple puppet-* releases done as part of the one job, and it appears they are > processed in alphabetically order. So this last time it got as far as > puppet-swift (at least further along than puppet-horizon) before it hit this > timeout. > > I'm fairly confident once we get the job to run again it should make it through > these last few releases. > Jeremy was able to re-queue the job, and it appears everything completed as expected. I've taken a quick look through the tarballs, and I think everything is there. Please take a look and let me know if you see anything unusual. From andrea.frittoli at gmail.com Fri Jan 19 17:08:50 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Fri, 19 Jan 2018 17:08:50 +0000 Subject: [openstack-dev] [gate][devstack][neutron][qa][release] Switch to lib/neutron in gate In-Reply-To: References: Message-ID: On Wed, Jan 17, 2018 at 7:27 PM Ihar Hrachyshka wrote: > Hi all, > Hi! > > tl;dr I propose to switch to lib/neutron devstack library in Queens. I > ask for buy-in to the plan from release and QA teams, something that > infra asked me to do. > > === > > Last several cycles we were working on getting lib/neutron - the new > in-tree devstack library to deploy neutron services - ready to deploy > configurations we may need in our gates. May I ask the reason for hosting this in the neutron tree? > Some pieces of the work > involved can be found in: > > https://review.openstack.org/#/q/topic:new-neutron-devstack-in-gate > > I am happy to announce that the work finally got to the point where we > can consistently pass both devstack-gate and neutron gates: > > (devstack-gate) https://review.openstack.org/436798 Both legacy and new style (zuulv3) jobs rely on the same test matrix code, so your change would impact both worlds consistently, which is good. > > (neutron) https://review.openstack.org/441579 > > One major difference between the old lib/neutron-legacy library and > the new lib/neutron one is that service names for neutron are > different. For example, q-svc is now neutron-api, q-dhcp is now > neutron-dhcp, etc. (In case you wonder, this q- prefix links us back > to times when Neutron was called Quantum.) The way lib/neutron is > designed is that whenever a single q-* service name is present in > ENABLED_SERVICES, the old lib/neutron-legacy code is triggered to > deploy services. > > Service name changes are a large part of the work. 
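To make the naming switch concrete, a local.conf fragment would change along these lines (neutron-api and neutron-dhcp are from the list above; I am assuming the remaining agents follow the same pattern, so double-check the exact spellings in lib/neutron):

    # legacy names - any q-* entry triggers lib/neutron-legacy
    ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta

    # new names - handled by the new lib/neutron
    ENABLED_SERVICES+=,neutron-api,neutron-agent,neutron-dhcp,neutron-l3,neutron-metadata-agent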
> The way the devstack-gate change linked above is designed is that it
> changes names for deployed neutron services starting from Queens (current
> master), so old branches and grenade jobs are not affected by the change.

Any other change worth mentioning?

> While we validated the change switching to new names against both
> devstack-gate and neutron gates that should cover 90% of our neutron
> configurations, and followed up with several projects that - we
> inferred - may be affected by the change - there is always a chance
> that some job in some project gate would fail because of it, and we
> would need to push a (probably rather simple) follow-up to unbreak the
> affected job. Due to the nature of the work, the span of impact, and
> the fact that infra repos are not easily gated against with Depends-On
> links, we may need to live with the risk.
>
> Of course, there are several aspects of the project life involved,
> including QA and release delivery efforts. I was advised to reach out
> to both of those teams to get a buy-in to proceed with the move. If we
> have support for the switch now, as per Clark, infra is ready to
> support the switch.
>
> Note that the effort spanned several cycles, partially due to low review
> velocity in several affected repos (devstack, devstack-gate),
> partially because new changes in all affected repos were pulling us
> back from the end goal. This is one of the reasons why I would like us
> to do the switch sooner rather than later, since chasing this moving
> goalpost became rather burdensome.
>
> What are QA and release team thoughts on the switch? Are we ready to
> do it in next weeks?

If I understood properly, it would still be possible to use the old names,
right? Some jobs may not rely on test matrix and just hard code the list of
services. Such jobs would be broken otherwise.

What's the planned way forward towards removing the legacy lib?

Andrea Frittoli (andreaf)

>
> Thanks for your attention,
> Ihar
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andrea.frittoli at gmail.com  Fri Jan 19 17:17:22 2018
From: andrea.frittoli at gmail.com (Andrea Frittoli)
Date: Fri, 19 Jan 2018 17:17:22 +0000
Subject: [openstack-dev] [Openstack-sigs] [Openstack-operators] [QA]
	Proposal for a QA SIG
In-Reply-To: References: <0873dec8-624d-b32a-5608-74cc74c02005@openstack.org>
Message-ID: 

Hello everyone,

After a long holiday break I would like to resume work on bringing the QA
SIG to life.

I proposed a QA SIG session [0] for the next PTG, but I'm not sure the
right audience will be in Dublin. Could you please reply if you are
interested but won't be in Dublin, or add your name to the etherpad if you
plan to be there and attend?

If we have enough attendance in Dublin we can kick off there - otherwise I
will set up a meeting with all interested parties (an IRC meeting probably,
but other options are possible).

Thank you!

Andrea Frittoli (andreaf)

[0] https://etherpad.openstack.org/p/qa-rocky-ptg

On Mon, Nov 20, 2017 at 9:15 AM Thierry Carrez wrote:

> Rochelle Grober wrote:
> > Thierry Carrez wrote:
> >> One question I have is whether we'd need to keep the "QA" project team at
Personally I think it would create confusion to keep it around, > for no gain. > >> SIGs code contributors get voting rights for the TC anyway, and SIGs > are free > >> to ask for space at the PTG... so there is really no reason (imho) to > keep a > >> "QA" project team in parallel to the SIG ? > > > > Well, you can get rid of the "QA Project Team" but you would then need > to replace it with something like the Tempest Project, or perhaps the Test > Project. You still need a PTL and cores to write, review and merge tempest > fixes and upgrades, along with some of the tests. The Interop Guideline > tests are part of Tempest because being there provides oversight on the > style and quality of the code of those tests. We still need that. > > SIGs can totally produce some code (and have review teams), but I agree > that in this case this code is basically a part of "the product" (rather > than a tool produced by guild of practitioners) and therefore makes > sense to be kept in an upstream project team. Let's keep things the way > they are, while we work out other changes that may trigger other > organizational shuffles (like reusing our project infrastructure beyond > just OpenStack). > > I wonder if we should not call the SIG under a different name to reduce > the confusion between QA-the-project-team and QA-the-SIG. Collaborative > Testing SIG? > > -- > Thierry Carrez (ttx) > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Fri Jan 19 17:23:45 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 19 Jan 2018 11:23:45 -0600 Subject: [openstack-dev] Many timeouts in zuul gates for TripleO In-Reply-To: References: Message-ID: <1fed6c03-a7a7-cbcb-3e3e-16638f482af0@nemebean.com> On 01/18/2018 09:45 AM, Emilien Macchi wrote: > On Thu, Jan 18, 2018 at 6:34 AM, Or Idgar wrote: >> Hi, >> we're encountering many timeouts for zuul gates in TripleO. >> For example, see >> http://logs.openstack.org/95/508195/28/check-tripleo/tripleo-ci-centos-7-ovb-ha-oooq/c85fcb7/. >> >> rechecks won't help and sometimes specific gate is end successfully and >> sometimes not. >> The problem is that after recheck it's not always the same gate which is >> failed. >> >> Is there someone who have access to the servers load to see what cause this? >> alternatively, is there something we can do in order to reduce the running >> time for each gate? > > We're migrating to RDO Cloud for OVB jobs: > https://review.openstack.org/#/c/526481/ > It's a work in progress but will help a lot for OVB timeouts on RH1. > > I'll let the CI folks comment on that topic. > I noticed that the timeouts on rh1 have been especially bad as of late so I did a little testing and found that it did seem to be running more slowly than it should. After some investigation I found that 6 of our compute nodes have warning messages that the cpu was throttled due to high temperature. I've disabled 4 of them that had a lot of warnings. The other 2 only had a handful of warnings so I'm hopeful we can leave them active without affecting job performance too much. It won't accomplish much if we disable the overheating nodes only to overload the remaining ones. I'll follow up with our hardware people and see if we can determine why these specific nodes are overheating. 
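(In case anyone wants to check their own compute nodes: the warnings are
plain kernel log messages, so something along these lines is enough to spot
them - commands illustrative:

    dmesg -T | grep -i throttled
    # or on systemd hosts:
    journalctl -k | grep -i 'temperature above threshold'

The kernel logs lines like "CPU0: Core temperature above threshold, cpu
clock throttled" when thermal throttling kicks in.)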
They seem to be running 20 degrees C hotter than the rest of the nodes.

From amoralej at redhat.com  Fri Jan 19 17:29:58 2018
From: amoralej at redhat.com (Alfredo Moralejo Alonso)
Date: Fri, 19 Jan 2018 18:29:58 +0100
Subject: [openstack-dev] [release][puppet] Tarballs missing for some puppet
	modules in pike
In-Reply-To: <20180119165043.GA11289@sm-xps>
References: <20180119134330.GA30356@sm-xps> <20180119143457.GB30356@sm-xps>
	<20180119165043.GA11289@sm-xps>
Message-ID: 

On Fri, Jan 19, 2018 at 5:50 PM, Sean McGinnis wrote:

> >
> > Just an update - it does look like it was just the one job failure. There are
> > multiple puppet-* releases done as part of the one job, and it appears they are
> > processed in alphabetically order. So this last time it got as far as
> > puppet-swift (at least further along than puppet-horizon) before it hit this
> > timeout.
> >
> > I'm fairly confident once we get the job to run again it should make it through
> > these last few releases.
> >
>
> Jeremy was able to re-queue the job, and it appears everything completed as
> expected. I've taken a quick look through the tarballs, and I think everything
> is there. Please take a look and let me know if you see anything unusual.
>

Yeah, everything looks ok to me now.

Thanks for your help,

Alfredo

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From colleen at gazlene.net  Fri Jan 19 17:54:57 2018
From: colleen at gazlene.net (Colleen Murphy)
Date: Fri, 19 Jan 2018 18:54:57 +0100
Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 15 January 2018
Message-ID: 

# Keystone Team Update - Week of 15 January 2018

## News

### Core team changes

We added a new core reviewer! Thanks Gage Hugo for all your hard work and
for stepping up to take on more responsibility!

We also lost some core members: Boris Bobrov, Steve Martinelli, Brant
Knudson and Brad Topol have stepped down from core membership after having
made enormous contributions over the years. We're grateful to them for
everything they've done to make keystone better and welcome them back any
time.

### Proposed community goals for Rocky

There are five community goals[1][2][3][4][5] proposed for Rocky that are
under discussion. In the meeting this week we had some confusion and
concerns over whether the proposed goal about pagination links[3] would
apply to us. Since we don't paginate anything in keystone, that goal
wouldn't apply. The one that would potentially apply to keystone is about
mutable configuration[5]. If you have thoughts on any of these potential
community goals, including whether the team has the capacity to take on
this work, make your voice heard on the reviews.

### PTG Planning

We still need to put some thought into our agenda for the PTG. Add your
ideas to the etherpad[6] and also add your name if you're going to be
attending so that we can organize a team dinner.

I noticed that no one requested a BM/VM room for the cross-project days of
the PTG[7]. If we want to organize discussions with those teams we might
want to start thinking about that now, but we will be able to book rooms
spontaneously if we want to.
[1] https://review.openstack.org/513875 [2] https://review.openstack.org/532361 [3] https://review.openstack.org/532627 [4] https://review.openstack.org/533544 [5] https://review.openstack.org/534605 [6] https://etherpad.openstack.org/p/keystone-rocky-ptg [7] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126335.html ## Recently Merged Changes Search query: https://goo.gl/hdD9Kw We merged 38 changes this week. Lots of these were major stepping stones for our new features. ## Changes that need Attention Search query: https://goo.gl/h9knRA There are 55 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. Please prioritize reviews for python-keystoneclient and our major feature initiatives (see below). ## Milestone Outlook https://releases.openstack.org/queens/schedule.html The non-client freeze was yesterday. Keystonemiddleware[8] and oslo.policy[9] were released in time. Unfortunately we dropped the ball on keystoneauth and there are some important changes we want to get in for this release. The release team has graciously granted us an exception but we'll have to make sure these changes are merged by Monday. Client and feature freeze is next week on THURSDAY[10]. Please prioritize reviews for python-keystoneclient[11] and our major feature initiatives[12][13][14]. [8] https://review.openstack.org/#/c/531423/ [9] https://review.openstack.org/#/c/531734/ [10] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126351.html [11] https://review.openstack.org/#/q/project:openstack/python-keystoneclient+is:open [12] https://review.openstack.org/#/q/is:open+topic:bp/system-scope [13] https://review.openstack.org/#/q/is:open+topic:bp/unified-limits [14] https://review.openstack.org/#/q/is:open+topic:bp/application-credentials ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From mbooth at redhat.com Fri Jan 19 18:11:05 2018 From: mbooth at redhat.com (Matthew Booth) Date: Fri, 19 Jan 2018 18:11:05 +0000 Subject: [openstack-dev] [nova] Local disk serial numbers series for reviewers: update 9th Jan In-Reply-To: References: Message-ID: On 9 January 2018 at 15:28, Matthew Booth wrote: > In summary, the patch series is here: > > https://review.openstack.org/#/q/status:open+project:opensta > ck/nova+branch:master+topic:bp/local-disk-serial-numbers > > The bottom 3 patches, which add BDM.uuid have landed. The next 3 currently > have a single +2. Since I last posted I have found and fixed a problem in > swap_volume, which added 2 more patches to the series. There are currently > 13 outstanding patches in the series. > > The following 6 patches are the 'crux' patches. The others in the series > are related fixes/cleanups (mostly renaming things and fixing tests) which > I've moved into separate patches to reduce noise. > > Add DriverLocalImageBlockDevice: > https://review.openstack.org/#/c/526347/6 > This now has a +2! Add local_root to block_device_info: > https://review.openstack.org/#/c/529029/6 > > Pass DriverBlockDevice to driver.attach_volume > https://review.openstack.org/#/c/528363/ > > Expose volume host type and path independent of libvirt config > https://review.openstack.org/#/c/530786/ > > Don't generate fake disk_info in swap_volume > https://review.openstack.org/#/c/530787/ > > Local disk serial numbers for the libvirt driver > https://review.openstack.org/#/c/529380/ > These remain the crux patches. 
Some of the simpler cleanup patches have also attracted +2s. As I've had to rebase the series due to merge conflicts a couple of times, I've moved these to the bottom so they can potentially go straight in. Cleanup patches with a single +2 already: https://review.openstack.org/#/c/531179/ https://review.openstack.org/#/c/526346/ https://review.openstack.org/#/c/528362/ Thanks, Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at ubuntu.com Fri Jan 19 18:19:04 2018 From: james.page at ubuntu.com (James Page) Date: Fri, 19 Jan 2018 18:19:04 +0000 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> Message-ID: Hi Thierry On Thu, 18 Jan 2018 at 10:20 Thierry Carrez wrote: > Hi everyone, > > Here is the proposed pre-allocated track schedule for the Dublin PTG: > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true > > The proposed allocation takes into account the estimated group size and > number of days that was communicated to Kendall and I by the team PTL. > We'd like to publish this schedule on the event website ASAP, so please > check that it still matches your needs (number of days, room size vs. > expected attendance) and does not create too many conflicts. There are > lots of constraints, so we can't promise we'll accommodate your remarks, > but we'll do our best. > OpenStack Charms team preference would be to have our dedicated room Wed/Thurs; in Denver we participated in some of the cross projects discussions such as fast forward upgrades, and I'd not want to have to fragment our time as a team to accommodate that if possible please! Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From honza at redhat.com Fri Jan 19 18:43:42 2018 From: honza at redhat.com (Honza Pokorny) Date: Fri, 19 Jan 2018 14:43:42 -0400 Subject: [openstack-dev] [tripleo][ui] Dependency management Message-ID: <20180119184342.bz5yzdn7t35xkzqu@localhost.localdomain> We've recently discovered an issue with the way we handle dependencies for tripleo-ui. This is an explanation of the problem, and a proposed solution. I'm looking for feedback. Before the upgrade to zuul v3 in TripleO CI, we had two types of jobs for tripleo-ui: * Native npm jobs * Undercloud integration jobs After the upgrade, the integration jobs went away. Our goal is to add them back. There is a difference in how these two types of jobs handle dependencies. Native npm jobs use the "npm install" command to acquire packages, and undercloud jobs use RPMs. The tripleo-ui project uses a separate RPM for dependencies called openstack-tripleo-ui-deps. Because of the requirement to use a separate RPM for dependencies, there is some extra work needed when a new dependency is introduced, or an existing one is upgraded. Once the patch that introduces the dependency is merged, we have to increment the version of the -deps package, and rebuild it. It then shows up in the yum repos used by the undercloud. To make matters worse, we recently upgraded our infrastructure to nodejs 8.9.4 and npm 5.6.0 (latest stable). This makes it so we can't write "purist" patches that simply introduce a new dependency to package.json, and nothing more. 
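To illustrate, a "purist" patch would be one whose entire diff is a single
new entry in package.json, along these lines (excerpt; package name and
version hypothetical):

    "dependencies": {
      ...
      "some-new-lib": "^1.0.0"
    }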
The code that uses the new dependency must be included. I tend to think that each commit should work on its own so this can be seen as a plus. This presents a problem: you can't get a patch that introduces a new dependency merged because it's not included in the RPM needed by the undercloud ci job. So, here is a proposal on how that might work: 1. Submit a patch for review that introduces the dependency, along with code changes to support it and validate its inclusion 2. Native npm jobs will pass 3. Undercloud gate job will fail because the dependency isn't in -deps RPM 4. We ask RDO to review for licensing 5. Once reviewed, new -deps package is built 6. Recheck 7. All jobs pass There is the obvious issue with building an RPM based on an unmerged patch. What do you think? Is that possible? Any other solutions? Honza Pokorny From jrist at redhat.com Fri Jan 19 18:49:29 2018 From: jrist at redhat.com (Jason E. Rist) Date: Fri, 19 Jan 2018 11:49:29 -0700 Subject: [openstack-dev] [tripleo][ui] Dependency management In-Reply-To: <20180119184342.bz5yzdn7t35xkzqu@localhost.localdomain> References: <20180119184342.bz5yzdn7t35xkzqu@localhost.localdomain> Message-ID: On 01/19/2018 11:43 AM, Honza Pokorny wrote: > We've recently discovered an issue with the way we handle dependencies for > tripleo-ui.  This is an explanation of the problem, and a proposed solution. > I'm looking for feedback. > > Before the upgrade to zuul v3 in TripleO CI, we had two types of jobs for > tripleo-ui: > > * Native npm jobs > * Undercloud integration jobs > > After the upgrade, the integration jobs went away.  Our goal is to add them > back. > > There is a difference in how these two types of jobs handle dependencies. > Native npm jobs use the "npm install" command to acquire packages, and > undercloud jobs use RPMs.  The tripleo-ui project uses a separate RPM for > dependencies called openstack-tripleo-ui-deps. > > Because of the requirement to use a separate RPM for dependencies, there is some > extra work needed when a new dependency is introduced, or an existing one is > upgraded.  Once the patch that introduces the dependency is merged, we have to > increment the version of the -deps package, and rebuild it.  It then shows up in > the yum repos used by the undercloud. > > To make matters worse, we recently upgraded our infrastructure to nodejs 8.9.4 > and npm 5.6.0 (latest stable).  This makes it so we can't write "purist" patches > that simply introduce a new dependency to package.json, and nothing more.  The > code that uses the new dependency must be included.  I tend to think that each > commit should work on its own so this can be seen as a plus. > > This presents a problem: you can't get a patch that introduces a new dependency > merged because it's not included in the RPM needed by the undercloud ci job. > > So, here is a proposal on how that might work: > > 1. Submit a patch for review that introduces the dependency, along with code >    changes to support it and validate its inclusion > 2. Native npm jobs will pass > 3. Undercloud gate job will fail because the dependency isn't in -deps RPM > 4. We ask RDO to review for licensing > 5. Once reviewed, new -deps package is built > 6. Recheck > 7. All jobs pass > > There is the obvious issue with building an RPM based on an unmerged patch. > > What do you think?  Is that possible?  Any other solutions? 
>
> Honza Pokorny
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I can't think of another way that would work better at the moment. I'd love
it if we had even less complexity here, but for the time being any working
solution is a positive. Thanks for proposing, and +1 to your solution.

-J

From hongbin.lu at huawei.com  Fri Jan 19 19:20:11 2018
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Fri, 19 Jan 2018 19:20:11 +0000
Subject: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify
	action name in request url
In-Reply-To: References: 
Message-ID: <0957CD8F4B55C0418161614FEC580D6B281983DD@YYZEML701-CHM.china.huawei.com>

I remember there were several discussions about action APIs in the past.
This is one discussion I can find:
http://lists.openstack.org/pipermail/openstack-dev/2016-December/109136.html .
An obvious alternative is to expose each action as an independent API
endpoint. For example:

* POST /servers/{server_id}/start: Start a server
* POST /servers/{server_id}/stop: Stop a server
* POST /servers/{server_id}/reboot: Reboot a server
* POST /servers/{server_id}/pause: Pause a server

Several people pointed out the pros and cons of either approach, as well as
other alternatives [1] [2] [3]. Eventually, we (the OpenStack Zun team)
adopted the alternative approach [4] above, and it works very well from my
perspective. However, I understand that there is no consensus on this
approach within the OpenStack community.

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109178.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109208.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109248.html
[4] https://developer.openstack.org/api-ref/application-container/#manage-containers

Best regards,
Hongbin

From: TommyLike Hu [mailto:tommylikehu at gmail.com]
Sent: January-18-18 5:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url

Hey all,

Recently we found an issue related to our OpenStack action APIs. We usually
expose our OpenStack APIs by registering them with our API gateway (for
instance Kong [1]), but this becomes very difficult for action APIs. We
cannot register and control them separately, because they all share the
same request URL, which is used as the identity in the gateway service -
let alone rate limiting and other advanced gateway features. Take a look at
the basic resources in OpenStack:

1. Server: "/servers/{server_id}/action" - 35+ APIs are included.
2. Volume: "/volumes/{volume_id}/action" - 14 APIs are included.
3. Other resources.

We have tried to register different interfaces with the same upstream URL, such as:

api gateway: /version/resource_one/action/action1 => upstream: /version/resource_one/action
api gateway: /version/resource_one/action/action2 => upstream: /version/resource_one/action

But this is not secure enough, because we can still pass action2 in the
request body while invoking /action/action1; also, reading the full body
for routing is not supported by most API gateways (or their plugins) and
would have a performance impact when proxying. So my question is: do we
have any solution or suggestion for this case?
Could we support specifying the action name both in the request body and in
the URL, such as:

URL: /volumes/{volume_id}/action
BODY: {'extend': {}}

and:

URL: /volumes/{volume_id}/action/extend
BODY: {'extend': {}}

Thanks
Tommy

[1]: https://github.com/Kong/kong
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hongbin034 at gmail.com  Fri Jan 19 19:42:17 2018
From: hongbin034 at gmail.com (Hongbin Lu)
Date: Fri, 19 Jan 2018 14:42:17 -0500
Subject: [openstack-dev] [kuryr][libnetwork] Release kuryr-libnetwork 1.x
	for Queens
Message-ID: 

Hi Kuryr team,

I think Kuryr-libnetwork is ready to move out of beta status. I propose to
make the first 1.x release of Kuryr-libnetwork for Queens and cut a stable
branch on it. What do you think about this proposal?

Best regards,
Hongbin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Greg.Waines at windriver.com  Fri Jan 19 20:10:53 2018
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Fri, 19 Jan 2018 20:10:53 +0000
Subject: [openstack-dev] [magnum] [ironic] Why does magnum create instances
	with ports using 'fixed-ips' ?
Message-ID: <7D38D6DF-A5D6-4F29-8554-03B9D2FDCF77@windriver.com>

Hey there,

We have just recently integrated MAGNUM into our OpenStack Distribution.

QUESTION: When MAGNUM is creating the 'instances' for the COE master and
minion nodes, WHY does it create the instances with ports using 'fixed-ips',
instead of just letting the instance's port DHCP for its ip-address?

I am asking this question because:

- we have also integrated IRONIC into our OpenStack Distribution, and
  - we currently support the simple (somewhat non-multi-tenant) networking
    approach, i.e.
    - the ironic-provisioning-net TENANT NETWORK, used to network boot the
      IRONIC instances, is owned by ADMIN but shared so TENANTS can create
      IRONIC instances,
    - AND we do NOT support the functionality to have IRONIC update the
      adjacent switch configuration in order to move the IRONIC instance
      onto a different (TENANT-owned) TENANT NETWORK after the instance is
      created.
  - so it is SORT OF multi-tenant in the sense that any TENANT can create
    an IRONIC instance, HOWEVER the IRONIC instances of all tenants are all
    on the same TENANT NETWORK.
- In this environment, when we use MAGNUM to create IRONIC COE nodes,
  - it ONLY works if the ADMIN creates the MAGNUM cluster,
  - it does NOT work if a TENANT creates the MAGNUM cluster,
    - because a TENANT canNOT create an instance port with 'fixed-ips' on a
      TENANT NETWORK that it does not own.

appreciate any info on this,
Greg.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ihrachys at redhat.com  Fri Jan 19 20:34:55 2018
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Fri, 19 Jan 2018 12:34:55 -0800
Subject: [openstack-dev] [gate][devstack][neutron][qa][release] Switch to
	lib/neutron in gate
In-Reply-To: References: 
Message-ID: 

Hi Andrea, thanks for taking the time to reply. I left some answers inline.

On Fri, Jan 19, 2018 at 9:08 AM, Andrea Frittoli wrote:
>
>
> On Wed, Jan 17, 2018 at 7:27 PM Ihar Hrachyshka wrote:
>>
>> Hi all,
>
>
> Hi!
>>
>>
>> tl;dr I propose to switch to lib/neutron devstack library in Queens. I
>> ask for buy-in to the plan from release and QA teams, something that
>> infra asked me to do.
>>
>> ===
>>
>> Last several cycles we were working on getting lib/neutron - the new
>> in-tree devstack library to deploy neutron services - ready to deploy
>> configurations we may need in our gates.
>
> May I ask the reason for hosting this in the neutron tree?

Sorry for wording it in a misleading way. The lib/neutron library is in the
*devstack* tree:

https://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/neutron

So in terms of deployment dependencies, there are no new repositories to
fetch from or gate against.

>
>>
>> Some pieces of the work
>> involved can be found in:
>>
>> https://review.openstack.org/#/q/topic:new-neutron-devstack-in-gate
>>
>> I am happy to announce that the work finally got to the point where we
>> can consistently pass both devstack-gate and neutron gates:
>>
>> (devstack-gate) https://review.openstack.org/436798
>
>
> Both legacy and new style (zuulv3) jobs rely on the same test matrix code,
> so your change would impact both worlds consistently, which is good.
>
>>
>>
>> (neutron) https://review.openstack.org/441579
>>
>> One major difference between the old lib/neutron-legacy library and
>> the new lib/neutron one is that service names for neutron are
>> different. For example, q-svc is now neutron-api, q-dhcp is now
>> neutron-dhcp, etc. (In case you wonder, this q- prefix links us back
>> to times when Neutron was called Quantum.) The way lib/neutron is
>> designed is that whenever a single q-* service name is present in
>> ENABLED_SERVICES, the old lib/neutron-legacy code is triggered to
>> deploy services.
>>
>> Service name changes are a large part of the work. The way the
>> devstack-gate change linked above is designed is that it changes names
>> for deployed neutron services starting from Queens (current master),
>> so old branches and grenade jobs are not affected by the change.
>
> Any other change worth mentioning?

The new library is much simpler and more opinionated, with fewer knobs and
less branching that was not very useful for the majority of users.
lib/neutron-legacy was always known for its complicated configuration. We
hope that adopting the new library will unify and simplify neutron
configuration across different jobs and setups.

From the consumer perspective, nothing should change except service names.
Some localrc files may need to be adapted if they rely on old arcane knobs.
That can be done during the transition phase, since old service names are
expected to keep working.

>>
>> While we validated the change switching to new names against both
>> devstack-gate and neutron gates that should cover 90% of our neutron
>> configurations, and followed up with several projects that - we
>> inferred - may be affected by the change - there is always a chance
>> that some job in some project gate would fail because of it, and we
>> would need to push a (probably rather simple) follow-up to unbreak the
>> affected job. Due to the nature of the work, the span of impact, and
>> the fact that infra repos are not easily gated against with Depends-On
>> links, we may need to live with the risk.
>>
>> Of course, there are several aspects of the project life involved,
>> including QA and release delivery efforts. I was advised to reach out
>> to both of those teams to get a buy-in to proceed with the move. If we
>> have support for the switch now, as per Clark, infra is ready to
>> support the switch.
>>
>> Note that the effort spanned several cycles, partially due to low review
>> velocity in several affected repos (devstack, devstack-gate),
>> partially because new changes in all affected repos were pulling us
>> back from the end goal.
This is one of the reasons why I would like us >> to do the switch sooner rather than later, since chasing this moving >> goalpost became rather burdensome. >> >> What are QA and release team thoughts on the switch? Are we ready to >> do it in next weeks? > > > If understood properly it would still be possible to use the old names > right? > Some jobs may not rely on test matrix and just hard code the list of > services. > Such jobs would be broken otherwise. > > What's the planned way forward towards removing the legacy lib? Yes, they should still work. My plan is to complete switch of devstack-gate to new names; once we are sure all works as expected, we can proceed with replacing all q-* service names still captured by codesearch.openstack.org with new names; finally, remove lib/neutron-legacy in Rocky. (Note that the old library already issues a deprecation warning since Newton: https://review.openstack.org/#/c/315806/) Ihar From sean.mcginnis at gmx.com Fri Jan 19 21:15:42 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 19 Jan 2018 15:15:42 -0600 Subject: [openstack-dev] [release] Lib branching for stable/queens Message-ID: <20180119211542.GA12408@sm-xps> Happy Friday all. Now that we are past the non-client lib freeze, we will need to have stable/queens branches for those libs. For all libraries that did not miss the freeze, I will be proposing the patches to get those stable branches created. This should have been enforced as part of the last deliverable request, but I don't think we had quite everything in place for branch creation. Going forward as we do the client library releases and then the service releases, please make sure your patch requesting the release includes creating the stable/queens branching. If there is any reason for me to hold off on this for a library your team manages, please let me know ASAP. Upcoming service project branching ================================== I mentioned this in the countdown email, but to increase the odds that someone actually sees it - if your project follows the cycle-with-milestones release model, please check membership of your $project-release group. The members of the group can be found by filtering for the group here: https://review.openstack.org/#/admin/groups/ This group should be limited to those aware of the restrictions as we wrap up the end of the cycle to make sure only release critical things are allowed to be merged into stable/queens as we finalize things for the final release. As always, just let me know if there are any questions. -- Sean McGinnis (smcginnis) From whayutin at redhat.com Fri Jan 19 23:45:42 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 19 Jan 2018 18:45:42 -0500 Subject: [openstack-dev] Many timeouts in zuul gates for TripleO In-Reply-To: <1fed6c03-a7a7-cbcb-3e3e-16638f482af0@nemebean.com> References: <1fed6c03-a7a7-cbcb-3e3e-16638f482af0@nemebean.com> Message-ID: On Fri, Jan 19, 2018 at 12:23 PM, Ben Nemec wrote: > > > On 01/18/2018 09:45 AM, Emilien Macchi wrote: > >> On Thu, Jan 18, 2018 at 6:34 AM, Or Idgar wrote: >> >>> Hi, >>> we're encountering many timeouts for zuul gates in TripleO. >>> For example, see >>> http://logs.openstack.org/95/508195/28/check-tripleo/tripleo >>> -ci-centos-7-ovb-ha-oooq/c85fcb7/. >>> >>> rechecks won't help and sometimes specific gate is end successfully and >>> sometimes not. >>> The problem is that after recheck it's not always the same gate which is >>> failed. 
>>> >>> Is there someone who have access to the servers load to see what cause >>> this? >>> alternatively, is there something we can do in order to reduce the >>> running >>> time for each gate? >>> >> >> We're migrating to RDO Cloud for OVB jobs: >> https://review.openstack.org/#/c/526481/ >> It's a work in progress but will help a lot for OVB timeouts on RH1. >> >> I'll let the CI folks comment on that topic. >> >> > I noticed that the timeouts on rh1 have been especially bad as of late so > I did a little testing and found that it did seem to be running more slowly > than it should. After some investigation I found that 6 of our compute > nodes have warning messages that the cpu was throttled due to high > temperature. I've disabled 4 of them that had a lot of warnings. The other > 2 only had a handful of warnings so I'm hopeful we can leave them active > without affecting job performance too much. It won't accomplish much if we > disable the overheating nodes only to overload the remaining ones. > > I'll follow up with our hardware people and see if we can determine why > these specific nodes are overheating. They seem to be running 20 degrees C > hotter than the rest of the nodes. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > For the latest discussion and to-do's before rh1 ovb jobs are migrated to rdo-cloud look here [1]. TLDR is that we're looking for a run of seven days where the jobs are passing at around 80% or better in check. We've reported a number of issues w/ the environment, and AFAIK everything is now resolved just recently. [1] https://trello.com/c/wGUUEqty/384-steps-needed-to-migrate-ovb-to-rdo-cloud -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Sat Jan 20 02:38:47 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Fri, 19 Jan 2018 21:38:47 -0500 Subject: [openstack-dev] Many timeouts in zuul gates for TripleO In-Reply-To: <1fed6c03-a7a7-cbcb-3e3e-16638f482af0@nemebean.com> References: <1fed6c03-a7a7-cbcb-3e3e-16638f482af0@nemebean.com> Message-ID: <20180120023847.GA12633@localhost.localdomain> On Fri, Jan 19, 2018 at 11:23:45AM -0600, Ben Nemec wrote: > > > On 01/18/2018 09:45 AM, Emilien Macchi wrote: > > On Thu, Jan 18, 2018 at 6:34 AM, Or Idgar wrote: > > > Hi, > > > we're encountering many timeouts for zuul gates in TripleO. > > > For example, see > > > http://logs.openstack.org/95/508195/28/check-tripleo/tripleo-ci-centos-7-ovb-ha-oooq/c85fcb7/. > > > > > > rechecks won't help and sometimes specific gate is end successfully and > > > sometimes not. > > > The problem is that after recheck it's not always the same gate which is > > > failed. > > > > > > Is there someone who have access to the servers load to see what cause this? > > > alternatively, is there something we can do in order to reduce the running > > > time for each gate? > > > > We're migrating to RDO Cloud for OVB jobs: > > https://review.openstack.org/#/c/526481/ > > It's a work in progress but will help a lot for OVB timeouts on RH1. > > > > I'll let the CI folks comment on that topic. > > > > I noticed that the timeouts on rh1 have been especially bad as of late so I > did a little testing and found that it did seem to be running more slowly > than it should. 
After some investigation I found that 6 of our compute > nodes have warning messages that the cpu was throttled due to high > temperature. I've disabled 4 of them that had a lot of warnings. The other > 2 only had a handful of warnings so I'm hopeful we can leave them active > without affecting job performance too much. It won't accomplish much if we > disable the overheating nodes only to overload the remaining ones. > > I'll follow up with our hardware people and see if we can determine why > these specific nodes are overheating. They seem to be running 20 degrees C > hotter than the rest of the nodes. > Did tripleo-test-cloud-rh1 get new kernels applied for meltdown / spectre, possible that is impacting performance too? -Paul From mordred at inaugust.com Sat Jan 20 17:22:57 2018 From: mordred at inaugust.com (Monty Taylor) Date: Sat, 20 Jan 2018 11:22:57 -0600 Subject: [openstack-dev] [sdk][masakari][tricircle] Inclusion of SDK classes in openstacksdk tree Message-ID: <02a1cd17-46da-845d-4ea9-4eddf00dbded@inaugust.com> Hey everybody, Wanted to send a quick note to let people know that all OpenStack services are more than welcome to put any openstacksdk proxy and resource classes they have directly into the openstacksdk tree. Looking through codesearch.openstack.org, masakariclient and tricircle each have SDK-related classes in their trees. You don't HAVE to put the code into openstacksdk. In fact, I wrote a patch for masakariclient to register the classes with openstack.connection.Connection: https://review.openstack.org/#/c/534883/ But I wanted to be clear that the code is **welcome** directly in tree, and that anyone working on an OpenStack service is welcome to put support code directly in the openstacksdk tree. Monty PS. Joe - you've also got some classes in the tricircle test suite extending the network service. I haven't followed all the things ... are the tricircle network extensions done as neutron plugins now? (It looks like they are) If so, why don't we figure out getting your network resources in-tree as well. From irenab.dev at gmail.com Sun Jan 21 07:13:47 2018 From: irenab.dev at gmail.com (Irena Berezovsky) Date: Sun, 21 Jan 2018 09:13:47 +0200 Subject: [openstack-dev] [kuryr][libnetwork] Release kuryr-libnetwork 1.x for Queens In-Reply-To: References: Message-ID: +1 On Fri, Jan 19, 2018 at 9:42 PM, Hongbin Lu wrote: > Hi Kuryr team, > > I think Kuryr-libnetwork is ready to move out of beta status. I propose to > make the first 1.x release of Kuryr-libnetwork for Queens and cut a stable > branch on it. What do you think about this proposal? > > Best regards, > Hongbin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Sun Jan 21 14:19:21 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Sun, 21 Jan 2018 08:19:21 -0600 Subject: [openstack-dev] [keystone] priority review etherpad Message-ID: <014cb24b-9f0d-e800-db87-979bc85c4741@gmail.com> Happy NFL Divisional Playoff Day, We're getting down to the wire and I decided to make an etherpad to track feature reviews that we need to land before feature freeze [0]. I'll attempt to keep it updated the best I can. If you're looking for things to review, it's a great place to start. 
If I missed something that needs to be added to the list, please let me know. Thanks [0] https://etherpad.openstack.org/p/keystone-queens-release-sprint -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From saverio.proto at switch.ch Sun Jan 21 17:09:03 2018 From: saverio.proto at switch.ch (Saverio Proto) Date: Sun, 21 Jan 2018 18:09:03 +0100 Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: <1516295114-sup-7111@lrrr.local> References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> <1516293565-sup-9123@lrrr.local> <1516295114-sup-7111@lrrr.local> Message-ID: Hello, I figured out a bug is already open since a long time :( https://bugs.launchpad.net/oslo.log/+bug/1564931 And there is already a review: https://review.openstack.org/#/c/367514/ it looks like the review was not merged, and it went to abandoned because of no progress on it for long time. I rebased that code on the current master: https://review.openstack.org/536149 Saverio On 18.01.18 18:14, Doug Hellmann wrote: > Excerpts from Doug Hellmann's message of 2018-01-18 11:45:28 -0500: >> Excerpts from Saverio Proto's message of 2018-01-18 14:49:21 +0100: >>> Hello all, >>> >>> well this oslo.log library looks like a core thing that is used by >>> multiple projects. I feel scared hearing that bugs opened on that >>> project are probably just ignored. >>> >>> should I reach out to the current PTL of OSLO ? >>> https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2580 >>> >>> ChangBo Guo are you reading this thread ? Do you think this is a bug or >>> a missing feature ? And moreover is really nobody looking at these >>> oslo.log bugs ? >> >> The Oslo team is small, but we do pay attention to bug reports. I >> don't think this issue is going to rise to the level of "drop what >> you're doing and help because the world is on fire", so I think >> Sean is just encouraging you to have a little patience. >> >> Please do go ahead and open a bug and attach (or paste into the >> description) an example of what the log output for your service looks >> like. >> >> Doug > > Earlier in the thread you mentioned running the newton versions of > neutron and oslo.log. The newton release has been marked end-of-life > and is not supported by the community any longer. You may find > support from your vendor, but if you're deploying on your own we'll > have to work something else out. If we determine that this is a bug > in the newton version of the library I won't have any way to give > you a new release because the branch is closed. > > It should be possible for you to update just oslo.log to a more > recent (and supported), although to do so you would have to get the > package separately or build your own and that may complicate your > deployment. > > More recent versions of the JSON formatter change the structure of > the data to include the entire context (including the request id) > in a separate key. Are you updating to newton as part of upgrading > further than that? If so, we probably want to wait to debug this > until you hit the latest supported version you're planning to deploy, > in case the problem is already fixed there. 
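For illustration, such an in-tree definition would live in a .zuul.yaml in
the nova repository, roughly along these lines (a sketch only - the real job
name, parent and file filters are whatever the nova patch settles on):

    - job:
        name: nova-tox-functional
        parent: openstack-tox-functional
        # Defining the job in nova's own tree means nova owns the file
        # filters, so notification-sample-only changes are no longer
        # skipped by matchers inherited from openstack-zuul-jobs.
        irrelevant-files:
          - ^\.gitignore$
          - ^doc/source/.*$

The key point is that a job defined in the project's own tree carries its
own files/irrelevant-files matchers, instead of inheriting ones that skip
the functional jobs for notification sample changes.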
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- SWITCH Saverio Proto, Peta Solutions Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 15, direct +41 44 268 1573 saverio.proto at switch.ch, http://www.switch.ch http://www.switch.ch/stories From lbragstad at gmail.com Sun Jan 21 18:12:09 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Sun, 21 Jan 2018 12:12:09 -0600 Subject: [openstack-dev] [keystone] priority review etherpad In-Reply-To: <014cb24b-9f0d-e800-db87-979bc85c4741@gmail.com> References: <014cb24b-9f0d-e800-db87-979bc85c4741@gmail.com> Message-ID: <3a19f88f-2bb5-1291-92d7-809a76521f76@gmail.com> Actually - we've tracked this kind of work with etherpad in the past and it becomes cumbersome and duplicates information after a while. Instead, I built a dashboard to do this [0]. Please refer to that for the last and greatest things to review. [0] https://goo.gl/NWdAH7 On 01/21/2018 08:19 AM, Lance Bragstad wrote: > Happy NFL Divisional Playoff Day, > > We're getting down to the wire and I decided to make an etherpad to > track feature reviews that we need to land before feature freeze [0]. > I'll attempt to keep it updated the best I can. If you're looking for > things to review, it's a great place to start. If I missed something > that needs to be added to the list, please let me know. > > Thanks > > [0] https://etherpad.openstack.org/p/keystone-queens-release-sprint > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From joehuang at huawei.com Mon Jan 22 01:08:10 2018 From: joehuang at huawei.com (joehuang) Date: Mon, 22 Jan 2018 01:08:10 +0000 Subject: [openstack-dev] [sdk][masakari][tricircle] Inclusion of SDK classes in openstacksdk tree In-Reply-To: <02a1cd17-46da-845d-4ea9-4eddf00dbded@inaugust.com> References: <02a1cd17-46da-845d-4ea9-4eddf00dbded@inaugust.com> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF56567198@DGGEML501-MBS.china.huawei.com> Hello, Monty, Tricircle did not develop any extra Neutron network resources, Tricircle provide plugins under Neutron, and same support resources as Neutron have. To ease the management of multiple Neutron servers, one Tricircle Admin API is provided to manage the resource routings between local neutron(s) and central neutron, it's one standalone service, and only for cloud administrator, therefore python-tricircleclient adn CLI were developed to support these administration functions. do you mean to put Tricircle Admin API sdk under openstacksdk tree? Best Regards Chaoyi Huang (joehuang) ________________________________________ From: Monty Taylor [mordred at inaugust.com] Sent: 21 January 2018 1:22 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [sdk][masakari][tricircle] Inclusion of SDK classes in openstacksdk tree Hey everybody, Wanted to send a quick note to let people know that all OpenStack services are more than welcome to put any openstacksdk proxy and resource classes they have directly into the openstacksdk tree. 
Looking through codesearch.openstack.org, masakariclient and tricircle each have SDK-related classes in their trees. You don't HAVE to put the code into openstacksdk. In fact, I wrote a patch for masakariclient to register the classes with openstack.connection.Connection: https://review.openstack.org/#/c/534883/ But I wanted to be clear that the code is **welcome** directly in tree, and that anyone working on an OpenStack service is welcome to put support code directly in the openstacksdk tree. Monty PS. Joe - you've also got some classes in the tricircle test suite extending the network service. I haven't followed all the things ... are the tricircle network extensions done as neutron plugins now? (It looks like they are) If so, why don't we figure out getting your network resources in-tree as well. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From yikunkero at gmail.com Mon Jan 22 03:10:36 2018 From: yikunkero at gmail.com (Yikun Jiang) Date: Mon, 22 Jan 2018 11:10:36 +0800 Subject: [openstack-dev] [cinder] [nova] [db] Questions about metadata update concurrently Message-ID: Hey all, Recently we found an issue again, related to our nova metadata/system_metadata update. This bug already track in this link: https://bugs.launchpad. net/nova/+bug/1650188 . 1. For metadata update. As we know, the current mechism of metdata as below sequence diagram. [image: 内嵌图片 1] Step [1]: We get instance object(include metadata) in API, we call this object as *metadata(A)* link: https://github.com/openstack/nova/blob/6d227722d4287726e144e4cf928c8e6ae52a6a4c/nova/api/openstack/compute/server_metadata.py#L105 Step [2]: Update metadata object using metadata(A), and we plus the increacement change. link: https://github.com/openstack/nova/blob/6d227722d4287726e144e4cf928c8e6ae52a6a4c/nova/compute/api.py#L4014-L4022 Step [3]: When instance.save, we get instance *metadata(B)* in DB link: https://github.com/openstack/nova/blob/6d227722d4287726e144e4cf928c8e6ae52a6a4c/nova/db/sqlalchemy/api.py#L2819 Step [4]: flush change in db: *metadata(A) + increacement - metadata(B)* link: https://github.com/openstack/nova/blob/6d227722d4287726e144e4cf928c8e6ae52a6a4c/nova/db/sqlalchemy/api.py#L2750-L2778 As note on diagram: In normal case: if metadata(A) == metadata(B), there is no impact on result. However, in concurrent case: we udpate metadata concurrently, metadata(A) != metadata(B) in some case, because we perhaps already change the metadata between API step [1] and step [3] DB, we will lose these changes. some discussions in bug link, the ETAG, generation ID and CAS are mentioned in our discussion. *1. ETAG*: the way to avoid API race condition, if check etag failed will give back a 409 confict. see: https://review.openstack.org/#/c/328399/ *2. Generation ID:* like resource provider done. see: https://github.com/openstack/nova/blob/6d227722d4287726e144e4cf928c8e6ae52a6a4c/nova/objects/resource_provider.py#L281 but it seems we should add this filed the instance object? * 3. CAS: *do some pre condition check when we write the record in db. see instance done: https://review.openstack.org/#/c/202593/ 2. For system_metadata update. it has same problem with metadata update, but the difference is the "first get" stuff doesn't happen in API, maybe in compute or virt driver. 
That is, the ETAG, the way to avoid API race condition, is not suitable for system_metadata. Any idea on this issue, pls feel free to append your comments, we need your advice and help. Regards, Yikun ---------------------------------------- Jiang Yikun(Kero) Mail: yikunkero at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Nova Metadata update.jpg Type: image/jpeg Size: 157958 bytes Desc: not available URL: From amotoki at gmail.com Mon Jan 22 05:40:49 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 22 Jan 2018 14:40:49 +0900 Subject: [openstack-dev] [horizon][packaging] django-openstack-auth retirement Message-ID: Hi, packaging teams and operators This mail is the announcement of retirement of django-openstack-auth python package in the Queens release. Horizon team merged the code of django-openstack-auth into the horizon repo mainly from the maintenance reason. For more detail, see the blueprint https://blueprints.launchpad.net/horizon/+spec/merge-openstack-auth. [To packaging teams] Ensure not to install django-openstack-auth in Queens horizon package. "openstack_auth" python module is now provided by horizon instead of django_openstack_auth. [To operators] If you install horizon and django-openstack-auth by using pip (instead of distribution packages), please uninstall django-openstack-auth python package before upgrading horizon. Otherwise, "openstack_auth" module is maintained by both horizon and django-openstack-auth after upgrading horizon and it confuses the pip file management, while horizon works. If you have questions, feel to reach the horizon team. Thanks, Akihiro From jpichon at redhat.com Mon Jan 22 09:30:38 2018 From: jpichon at redhat.com (Julie Pichon) Date: Mon, 22 Jan 2018 09:30:38 +0000 Subject: [openstack-dev] [tripleo][ui] Dependency management In-Reply-To: <20180119184342.bz5yzdn7t35xkzqu@localhost.localdomain> References: <20180119184342.bz5yzdn7t35xkzqu@localhost.localdomain> Message-ID: On 19 January 2018 at 18:43, Honza Pokorny wrote: > We've recently discovered an issue with the way we handle dependencies for > tripleo-ui. This is an explanation of the problem, and a proposed solution. > I'm looking for feedback. > > Before the upgrade to zuul v3 in TripleO CI, we had two types of jobs for > tripleo-ui: > > * Native npm jobs > * Undercloud integration jobs > > After the upgrade, the integration jobs went away. Our goal is to add them > back. > > There is a difference in how these two types of jobs handle dependencies. > Native npm jobs use the "npm install" command to acquire packages, and > undercloud jobs use RPMs. The tripleo-ui project uses a separate RPM for > dependencies called openstack-tripleo-ui-deps. > > Because of the requirement to use a separate RPM for dependencies, there is some > extra work needed when a new dependency is introduced, or an existing one is > upgraded. Once the patch that introduces the dependency is merged, we have to > increment the version of the -deps package, and rebuild it. It then shows up in > the yum repos used by the undercloud. > > To make matters worse, we recently upgraded our infrastructure to nodejs 8.9.4 > and npm 5.6.0 (latest stable). This makes it so we can't write "purist" patches > that simply introduce a new dependency to package.json, and nothing more. The > code that uses the new dependency must be included. 
I tend to think that each > commit should work on its own so this can be seen as a plus. > > This presents a problem: you can't get a patch that introduces a new dependency > merged because it's not included in the RPM needed by the undercloud ci job. > > So, here is a proposal on how that might work: > > 1. Submit a patch for review that introduces the dependency, along with code > changes to support it and validate its inclusion > 2. Native npm jobs will pass > 3. Undercloud gate job will fail because the dependency isn't in -deps RPM > 4. We ask RDO to review for licensing > 5. Once reviewed, new -deps package is built > 6. Recheck > 7. All jobs pass Perhaps there should be a step after 3 or 4 to have the patch normally reviewed, and wait for it to have two +2s before building the new package? Otherwise we may end up with wasted work to get a new package ready for dependencies that were eventually dismissed. Julie > There is the obvious issue with building an RPM based on an unmerged patch. > > What do you think? Is that possible? Any other solutions? > > Honza Pokorny From dtantsur at redhat.com Mon Jan 22 10:10:23 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 22 Jan 2018 11:10:23 +0100 Subject: [openstack-dev] [release] Lib branching for stable/queens In-Reply-To: <20180119211542.GA12408@sm-xps> References: <20180119211542.GA12408@sm-xps> Message-ID: <4576cc3a-461d-f73f-dd65-57db6cd4a844@redhat.com> Hi! Unfortunately, it seems that we've discovered a critical issue in one of our libs (sushy) right after branching :( What's the procedure for emergency fixes to stable/queens right now? On 01/19/2018 10:15 PM, Sean McGinnis wrote: > Happy Friday all. > > Now that we are past the non-client lib freeze, we will need to have > stable/queens branches for those libs. For all libraries that did not miss the > freeze, I will be proposing the patches to get those stable branches created. > > This should have been enforced as part of the last deliverable request, but I > don't think we had quite everything in place for branch creation. Going forward > as we do the client library releases and then the service releases, please make > sure your patch requesting the release includes creating the stable/queens > branching. > > If there is any reason for me to hold off on this for a library your team > manages, please let me know ASAP. > > > Upcoming service project branching > ================================== > > I mentioned this in the countdown email, but to increase the odds that someone > actually sees it - if your project follows the cycle-with-milestones release > model, please check membership of your $project-release group. The members of > the group can be found by filtering for the group here: > > https://review.openstack.org/#/admin/groups/ > > This group should be limited to those aware of the restrictions as we wrap up > the end of the cycle to make sure only release critical things are allowed to > be merged into stable/queens as we finalize things for the final release. > > As always, just let me know if there are any questions. 
> > -- > Sean McGinnis (smcginnis) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fungi at yuggoth.org Mon Jan 22 11:30:12 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 22 Jan 2018 11:30:12 +0000 Subject: [openstack-dev] [horizon][packaging] django-openstack-auth retirement In-Reply-To: References: Message-ID: <20180122113012.xe42fi24v3ljm7rz@yuggoth.org> On 2018-01-22 14:40:49 +0900 (+0900), Akihiro Motoki wrote: [...] > If you install horizon and django-openstack-auth by using pip (instead > of distribution packages), please uninstall django-openstack-auth > python package before upgrading horizon. > Otherwise, "openstack_auth" module is maintained by both horizon and > django-openstack-auth after upgrading horizon and it confuses the pip > file management, while horizon works. [...] If we were already publishing Horizon to PyPI, we could have a new (and final) major version of DOA as a transitional package to stop providing any module itself and depend on the new version of Horizon which provides that module instead. I suppose without Horizon on PyPI, documentation of the issue is the most we can do for this situation. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From oidgar at redhat.com Mon Jan 22 11:55:25 2018 From: oidgar at redhat.com (Or Idgar) Date: Mon, 22 Jan 2018 13:55:25 +0200 Subject: [openstack-dev] Many timeouts in zuul gates for TripleO In-Reply-To: <20180120023847.GA12633@localhost.localdomain> References: <1fed6c03-a7a7-cbcb-3e3e-16638f482af0@nemebean.com> <20180120023847.GA12633@localhost.localdomain> Message-ID: Hi, Still having timeouts but now in tripleo-heat-templates experimental gates (tripleo-ci-centos-7-ovb-fakeha-caserver and tripleo-ci-centos-7-ovb-ha-tempest-oooq). see examples: http://logs.openstack.org/31/518331/23/experimental-tripleo/tripleo-ci-centos-7-ovb-fakeha-caserver/7502e82/ http://logs.openstack.org/31/518331/23/experimental-tripleo/tripleo-ci-centos-7-ovb-ha-tempest-oooq/46e8e0d/ Anyone have an idea what we can do to fix it? Thanks, Idgar On Sat, Jan 20, 2018 at 4:38 AM, Paul Belanger wrote: > On Fri, Jan 19, 2018 at 11:23:45AM -0600, Ben Nemec wrote: > > > > > > On 01/18/2018 09:45 AM, Emilien Macchi wrote: > > > On Thu, Jan 18, 2018 at 6:34 AM, Or Idgar wrote: > > > > Hi, > > > > we're encountering many timeouts for zuul gates in TripleO. > > > > For example, see > > > > http://logs.openstack.org/95/508195/28/check-tripleo/ > tripleo-ci-centos-7-ovb-ha-oooq/c85fcb7/. > > > > > > > > rechecks won't help and sometimes specific gate is end successfully > and > > > > sometimes not. > > > > The problem is that after recheck it's not always the same gate > which is > > > > failed. > > > > > > > > Is there someone who have access to the servers load to see what > cause this? > > > > alternatively, is there something we can do in order to reduce the > running > > > > time for each gate? > > > > > > We're migrating to RDO Cloud for OVB jobs: > > > https://review.openstack.org/#/c/526481/ > > > It's a work in progress but will help a lot for OVB timeouts on RH1. > > > > > > I'll let the CI folks comment on that topic. 
> > >
> >
> > I noticed that the timeouts on rh1 have been especially bad as of late
> so I
> > did a little testing and found that it did seem to be running more slowly
> > than it should. After some investigation I found that 6 of our compute
> > nodes have warning messages that the cpu was throttled due to high
> > temperature. I've disabled 4 of them that had a lot of warnings. The other
> > 2 only had a handful of warnings so I'm hopeful we can leave them active
> > without affecting job performance too much. It won't accomplish much if we
> > disable the overheating nodes only to overload the remaining ones.
> >
> > I'll follow up with our hardware people and see if we can determine why
> > these specific nodes are overheating. They seem to be running 20 degrees C
> > hotter than the rest of the nodes.
> >
> Did tripleo-test-cloud-rh1 get new kernels applied for meltdown / spectre,
> possible that is impacting performance too?
>
> -Paul
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Best regards,
Or Idgar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sean.mcginnis at gmx.com  Mon Jan 22 12:10:28 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Mon, 22 Jan 2018 06:10:28 -0600
Subject: [openstack-dev] [release] Lib branching for stable/queens
In-Reply-To: <4576cc3a-461d-f73f-dd65-57db6cd4a844@redhat.com>
References: <20180119211542.GA12408@sm-xps> <4576cc3a-461d-f73f-dd65-57db6cd4a844@redhat.com>
Message-ID: <20180122121027.GA32428@sm-xps>

On Mon, Jan 22, 2018 at 11:10:23AM +0100, Dmitry Tantsur wrote:
> Hi!
>
> Unfortunately, it seems that we've discovered a critical issue in one of our
> libs (sushy) right after branching :( What's the procedure for emergency
> fixes to stable/queens right now?
>

We can do another release. Just propose the new release with the new hash in
the deliverables/queens/sushy.yaml file.

The one trickier part you have to keep in mind is that it will need to be on
the stable/queens branch. So you should be merging your fix into master, then
backporting to stable/queens like you would do for any normal stable branch
fixes.

Sean

From balazs.gibizer at ericsson.com  Mon Jan 22 12:20:31 2018
From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer)
Date: Mon, 22 Jan 2018 13:20:31 +0100
Subject: [openstack-dev] [nova] Notification update week 4
Message-ID: <1516623631.3778.6@smtp.office365.com>

Hi,

Here is the status update / focus settings mail for w4.

Bugs
----

[High] https://bugs.launchpad.net/nova/+bug/1742962 nova functional test
does not triggered on notification sample only changes
During the zuul v3 migration the project-config jobs were generated based
on the zuul v2 jobs. The generated config contained a proper definition of
when nova wants to trigger the functional job. Unfortunately this job
definition does not override the openstack-tox-functional job definition
from the openstack-zuul-jobs repo. As a result, the openstack-tox-functional
(and functional-py35) jobs were not triggered for certain commits. The fix
is to create a nova specific tox-functional job in tree.
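For illustration, an in-tree Zuul v3 job definition would look roughly like
the sketch below (the job name and the file filters here are illustrative
only; see the actual patches listed next for the real definition):

- job:
    name: nova-tox-functional
    parent: openstack-tox-functional
    description: Run nova functional tests, also on sample-only changes.
    files:
      - ^nova/.*$
      - ^doc/notification_samples/.*$

- project:
    check:
      jobs:
        - nova-tox-functional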
Patches have been proposed:
* https://review.openstack.org/#/c/533210/ Make sure that functional test
triggered on sample changes
* https://review.openstack.org/#/c/533608/ Moving nova functional test def
to in tree

In general we have to review all nova jobs in the project-config and move
those in-tree that try to override parameters of the job definitions in the
openstack-zuul-jobs repo.

[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when sending
notification during attach_interface
Fix merged to master. Backports have been proposed:
* Pike: https://review.openstack.org/#/c/531745/
* Queens: https://review.openstack.org/#/c/531746/

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations fail
to complete with versioned notifications if payload contains unset
non-nullable fields
Patch has been proposed: https://review.openstack.org/#/c/529194/
Dan left feedback on it, and I accept his comment that this is mostly
papering over a problem that we don't fully understand, since we don't know
how it can happen in the first place. On the other hand, I don't know how
we can figure out what happened. So if somebody has an idea then don't
hesitate to tell me. This bug is still stuck.

[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/

Versioned notification transformation
-------------------------------------
Thanks to Takashi, we have multiple patches needing only a second +2:
* https://review.openstack.org/#/c/482148 Transform instance-evacuate
notification
* https://review.openstack.org/#/c/465081 Transform instance.resize_prep
notification
* https://review.openstack.org/#/c/482557 Transform instance.resize_confirm
notification

Also there are patches ready for cores to review:
* https://review.openstack.org/#/c/403660 Transform instance.exists
notification
* https://review.openstack.org/#/c/410297 Transform missing delete
notifications
* https://review.openstack.org/#/c/476459 Send soft_delete from context
manager

Introduce instance.lock and instance.unlock notifications
-----------------------------------------------------------
A specless bp has been proposed for the Rocky cycle
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Some preliminary discussion happened in an earlier patch
https://review.openstack.org/#/c/526251/

Add the user id and project id of the user who initiated the instance action
to the notification
--------------------------------------------------------------------------------------------
A new bp has been proposed
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
As the user who initiates the instance action (e.g. reboot) could be
different from the user owning the instance, it would make sense to include
the user_id and project_id of the action initiator in the versioned instance
action notifications as well.

Factor out duplicated notification sample
-----------------------------------------
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
We have to be careful about approving these types of commits until the
solution for https://bugs.launchpad.net/nova/+bug/1742962 is merged, as
functional tests could be broken silently.
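For reference, a versioned notification sample is a small JSON document
stored under doc/notification_samples/ in the nova tree. A sketch, with an
illustrative event type and illustrative field values:

{
    "event_type": "instance.delete.end",
    "publisher_id": "nova-compute:compute-host",
    "priority": "INFO",
    "payload": {
        "nova_object.name": "InstanceActionPayload",
        "nova_object.namespace": "nova",
        "nova_object.version": "1.5",
        "nova_object.data": {"uuid": "d5e6a7b0-...", "state": "deleted"}
    }
}

The refactoring topic above aims to replace the payload blocks that are
duplicated across these sample files with shared fragments.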
Weekly meeting -------------- The next meeting will be held on 23th of January on #openstack-meeting-4 https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180123T170000 Cheers, gibi -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at johngarbutt.com Mon Jan 22 12:45:48 2018 From: john at johngarbutt.com (John Garbutt) Date: Mon, 22 Jan 2018 12:45:48 +0000 Subject: [openstack-dev] [ironic] Remove in-tree policy and config? Message-ID: Hi, While I was looking at the traits work, I noticed we still have policy and config in tree for ironic and ironic inspector: http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/policy.json.sample http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/ironic.conf.sample http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/policy.json And in a similar way: http://git.openstack.org/cgit/openstack/ironic-inspector/tree/policy.yaml.sample http://git.openstack.org/cgit/openstack/ironic-inspector/tree/example.conf There is an argument that says we shouldn't force operators to build a full environment to generate these, but this has been somewhat superseded by us having good docs: https://docs.openstack.org/ironic/latest/configuration/sample-config.html https://docs.openstack.org/ironic/latest/configuration/sample-policy.html https://docs.openstack.org/ironic-inspector/latest/configuration/sample-config.html https://docs.openstack.org/ironic-inspector/latest/configuration/sample-policy.html It could look something like this (but with the tests working...): https://review.openstack.org/#/c/536349 What do you all think? Thanks, johnthetubaguy -------------- next part -------------- An HTML attachment was scrubbed... URL: From mariusc at redhat.com Mon Jan 22 13:04:29 2018 From: mariusc at redhat.com (Marius Cornea) Date: Mon, 22 Jan 2018 14:04:29 +0100 Subject: [openstack-dev] [tripleo] tripleo-upgrade pike branch In-Reply-To: References: Message-ID: On Fri, Jan 19, 2018 at 4:21 PM, Wesley Hayutin wrote: > Thanks Marius for sending this out and kicking off a conversation. > > On Tue, Jan 2, 2018 at 12:56 PM, Marius Cornea wrote: >> >> Hi everyone and Happy New Year! >> >> As the migration of tripleo-upgrade repo to the openstack namespace is >> now complete I think it's the time to create a Pike branch to capture >> the current state so we can use it for Pike testing and keep the >> master branch for Queens changes. The update/upgrade steps are >> changing between versions and the aim of branching the repo is to keep >> the update/upgrade steps clean per branch to avoid using conditionals >> based on release. Also tripleo-upgrade should be compatible with >> different tools used for deployment(tripleo-quickstart, infrared, >> manual deployments) which use different vars for the version release >> so in case of using conditionals we would need extra steps to >> normalize these variables. > > > I understand the desire to create a branch to protect the work that has been > done previously. > The interesting thing is that you guys are proposing to use a branched > ansible role with > a branchless upstream project. I want to make sure we have enough review so > that we don't hit issues > in the future. Maybe that is OK, but I have at least one concern. > > My conern is about gating the tripleo-upgrade role and it's branches. When > tripleo-quickstart is changed > which is branchless we will be have to kick off a job for each > tripleo-upgrade branch? That immediately doubles > the load on gates. 
I think it would probably be the same when using multiple release conditionals, we'd still have to trigger one job/release if we wanted full coverage. > It's extemely important to properly gate this role against the versions of > TripleO and OSP. I see very limited > check jobs and gate jobs on tripleo-upgrades atm. I have only found [1]. > I think we need to see some external and internal > jobs checking and gating this role with comments posted to changes. > > [1] > https://review.rdoproject.org/jenkins/job/gate-tripleo-ci-centos-7-containers-multinode-upgrades-pike/ > > >> >> >> I wanted to bring this topic up for discussion to see if branching is >> the proper thing to do here. >> >> Thanks, >> Marius >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dtantsur at redhat.com Mon Jan 22 13:33:30 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 22 Jan 2018 14:33:30 +0100 Subject: [openstack-dev] [ironic] Remove in-tree policy and config? In-Reply-To: References: Message-ID: +1 I would really hate to not have a place to reference people to, but the documentation now provides such place. Keeping the examples up-to-date is quite annoying, I'm all for dropping them. On 01/22/2018 01:45 PM, John Garbutt wrote: > Hi, > > While I was looking at the traits work, I noticed we still have policy and > config in tree for ironic and ironic inspector: > > http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/policy.json.sample > http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/ironic.conf.sample > http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/policy.json > > And in a similar way: > http://git.openstack.org/cgit/openstack/ironic-inspector/tree/policy.yaml.sample > http://git.openstack.org/cgit/openstack/ironic-inspector/tree/example.conf > > There is an argument that says we shouldn't force operators to build a full > environment to generate these, but this has been somewhat superseded by us > having good docs: > > https://docs.openstack.org/ironic/latest/configuration/sample-config.html > https://docs.openstack.org/ironic/latest/configuration/sample-policy.html > https://docs.openstack.org/ironic-inspector/latest/configuration/sample-config.html > https://docs.openstack.org/ironic-inspector/latest/configuration/sample-policy.html > > It could look something like this (but with the tests working...): > https://review.openstack.org/#/c/536349 > > What do you all think? > > Thanks, > johnthetubaguy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From pshchelokovskyy at mirantis.com Mon Jan 22 13:35:49 2018 From: pshchelokovskyy at mirantis.com (Pavlo Shchelokovskyy) Date: Mon, 22 Jan 2018 15:35:49 +0200 Subject: [openstack-dev] [ironic] Remove in-tree policy and config? 
In-Reply-To: References: Message-ID: John, +1. The sample file in etc/ironic/ in our repo is actually (almost) always out of sync with the one in the docs - the docs one is re-generated on each commit with new/changed stuff from third-party libs, the version in etc/ironic is only updated manually when someone remembers it (as people usually tend to limit the changes to this file in their commit to relevant ones). Cheers, On Mon, Jan 22, 2018 at 2:45 PM, John Garbutt wrote: > Hi, > > While I was looking at the traits work, I noticed we still have policy and > config in tree for ironic and ironic inspector: > > http://git.openstack.org/cgit/openstack/ironic/tree/etc/ > ironic/policy.json.sample > http://git.openstack.org/cgit/openstack/ironic/tree/etc/ > ironic/ironic.conf.sample > http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/policy.json > > And in a similar way: > http://git.openstack.org/cgit/openstack/ironic-inspector/ > tree/policy.yaml.sample > http://git.openstack.org/cgit/openstack/ironic-inspector/tree/example.conf > > There is an argument that says we shouldn't force operators to build a > full environment to generate these, but this has been somewhat superseded > by us having good docs: > > https://docs.openstack.org/ironic/latest/configuration/sample-config.html > https://docs.openstack.org/ironic/latest/configuration/sample-policy.html > https://docs.openstack.org/ironic-inspector/latest/ > configuration/sample-config.html > https://docs.openstack.org/ironic-inspector/latest/ > configuration/sample-policy.html > > It could look something like this (but with the tests working...): > https://review.openstack.org/#/c/536349 > > What do you all think? > > Thanks, > johnthetubaguy > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Dr. Pavlo Shchelokovskyy Senior Software Engineer Mirantis Inc www.mirantis.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mariusc at redhat.com Mon Jan 22 13:48:31 2018 From: mariusc at redhat.com (Marius Cornea) Date: Mon, 22 Jan 2018 14:48:31 +0100 Subject: [openstack-dev] [tripleo] tripleo-upgrade pike branch In-Reply-To: References: Message-ID: On Fri, Jan 19, 2018 at 4:47 PM, John Trowbridge wrote: > > > On Fri, Jan 19, 2018 at 10:21 AM, Wesley Hayutin > wrote: >> >> Thanks Marius for sending this out and kicking off a conversation. >> >> On Tue, Jan 2, 2018 at 12:56 PM, Marius Cornea wrote: >>> >>> Hi everyone and Happy New Year! >>> >>> As the migration of tripleo-upgrade repo to the openstack namespace is >>> now complete I think it's the time to create a Pike branch to capture >>> the current state so we can use it for Pike testing and keep the >>> master branch for Queens changes. The update/upgrade steps are >>> changing between versions and the aim of branching the repo is to keep >>> the update/upgrade steps clean per branch to avoid using conditionals >>> based on release. Also tripleo-upgrade should be compatible with >>> different tools used for deployment(tripleo-quickstart, infrared, >>> manual deployments) which use different vars for the version release >>> so in case of using conditionals we would need extra steps to >>> normalize these variables. 
>> >> >> I understand the desire to create a branch to protect the work that has >> been done previously. >> The interesting thing is that you guys are proposing to use a branched >> ansible role with >> a branchless upstream project. I want to make sure we have enough review >> so that we don't hit issues >> in the future. Maybe that is OK, but I have at least one concern. >> >> My conern is about gating the tripleo-upgrade role and it's branches. >> When tripleo-quickstart is changed >> which is branchless we will be have to kick off a job for each >> tripleo-upgrade branch? That immediately doubles >> the load on gates. > > > I do not think CI repos should be branched. Even more than the concern Wes > brought up about a larger gate matrix. Think > about how much would need to get backported. To start you would just have > the 2 branches, but eventually you will have 3. > Likely all 3 will have slight differences in how different pieces of the > upgrade are called (otherwise why branch), so when > you need to fix something on all branches the backports have a high > potential to be non-trivial too. Once we release we expect the upgrade/update process to be stable and no changes required to the process so I expect the backports to be minimal, mostly for scenarios that we missed in testing at release time. > Release conditionals are not perfect, but I dont think compatibility is > really a major issue. Just document how to set the > release, and the different CI tools that use your role will just have to > adapt to that. >> >> >> It's extemely important to properly gate this role against the versions of >> TripleO and OSP. I see very limited >> check jobs and gate jobs on tripleo-upgrades atm. I have only found [1]. >> I think we need to see some external and internal >> jobs checking and gating this role with comments posted to changes. >> >> [1] >> https://review.rdoproject.org/jenkins/job/gate-tripleo-ci-centos-7-containers-multinode-upgrades-pike/ >> >> >>> >>> >>> I wanted to bring this topic up for discussion to see if branching is >>> the proper thing to do here. >>> >>> Thanks, >>> Marius >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lyarwood at redhat.com Mon Jan 22 14:22:12 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Mon, 22 Jan 2018 14:22:12 +0000 Subject: [openstack-dev] [nova] Native QEMU LUKS decryption review overview ahead of FF Message-ID: <20180122142212.2fqjvquljpji6kph@lyarwood.usersys.redhat.com> Hello, With M3 and FF rapidly approaching this week I wanted to post a brief overview of the QEMU native LUKS series. 
The full series is available on the following topic, I'll go into more detail on each of the changes below: https://review.openstack.org/#/q/topic:bp/libvirt-qemu-native-luks+status:open libvirt: Collocate encryptor and volume driver calls https://review.openstack.org/#/c/460243/ (Missing final +2 and +W) This refactor of the Libvirt driver connect and disconnect volume code has the added benefit of also correcting a number of bugs around the attaching and detaching of os-brick encryptors. IMHO this would be useful in Queens even if the rest of the series doesn't land. libvirt: Introduce disk encryption config classes https://review.openstack.org/#/c/464008/ (Missing final +2 and +W) This is the most straight forward change of the series and simply introduces the required config classes to wire up native LUKS decryption within the domain XML of an instance. Hopefully nothing controversial. libvirt: QEMU native LUKS decryption for encrypted volumes https://review.openstack.org/#/c/523958/ (Missing both +2s and +W) This change carries the bulk of the implementation, wiring up encrypted volumes during their initial attachment. The commit message has a detailed run down of the various upgrade and LM corner cases we attempt to handle here, such as LM from a P to Q compute, detaching a P attached encrypted volume after upgrading to Q etc. Upgrade and LM testing is enabled by the following changes: fixed_key: Use a single hardcoded key across devstack deployments https://review.openstack.org/#/c/536343/ compute: Introduce an encrypted volume LM test https://review.openstack.org/#/c/536177/ This is being tested by tempest-dsvm-multinode-live-migration and grenade-dsvm-neutron-multinode-live-migration in the following DNM Nova change, enabling volume backed LM tests: DNM: Test LM with encrypted volumes https://review.openstack.org/#/c/536350/ Hopefully that covers everything but please feel free to ping if you would like more detail, background etc. Thanks in advance, Lee -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From doug at doughellmann.com Mon Jan 22 14:33:26 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 22 Jan 2018 09:33:26 -0500 Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> <1516293565-sup-9123@lrrr.local> <1516295114-sup-7111@lrrr.local> Message-ID: <1516630943-sup-4108@lrrr.local> Excerpts from Saverio Proto's message of 2018-01-21 18:09:03 +0100: > Hello, > > I figured out a bug is already open since a long time :( > https://bugs.launchpad.net/oslo.log/+bug/1564931 > > And there is already a review: > https://review.openstack.org/#/c/367514/ > > it looks like the review was not merged, and it went to abandoned > because of no progress on it for long time. > > I rebased that code on the current master: > https://review.openstack.org/536149 That patch is not needed on master. 
We added the full context as a nested value under the "context" key when http://git.openstack.org/cgit/openstack/oslo.log/commit/oslo_log/formatters.py?id=1b012d0fc6811f00e032e52ed586fe37e157584d landed. That change was released as part of 3.35.0 and as the test in https://review.openstack.org/#/c/536164/ shows the request_id and global_request_id values are present in the output. Before that change (and after, as part of our effort to maintain backwards compatibility) the values from the context were added to the "extra" section of the output message. That behavior is present in master: http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/formatters.py?h=master#n246 pike: http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/formatters.py?h=stable%2Fpike#n300 ocata: http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/formatters.py?h=stable%2Focata#n156 It also appears to be present for newton, the version you said you are using: http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/formatters.py?h=newton-eol#n153 Have you looked at the "extras" section of the log output? Could you provide some sample log output? Doug > > Saverio > > On 18.01.18 18:14, Doug Hellmann wrote: > > Excerpts from Doug Hellmann's message of 2018-01-18 11:45:28 -0500: > >> Excerpts from Saverio Proto's message of 2018-01-18 14:49:21 +0100: > >>> Hello all, > >>> > >>> well this oslo.log library looks like a core thing that is used by > >>> multiple projects. I feel scared hearing that bugs opened on that > >>> project are probably just ignored. > >>> > >>> should I reach out to the current PTL of OSLO ? > >>> https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2580 > >>> > >>> ChangBo Guo are you reading this thread ? Do you think this is a bug or > >>> a missing feature ? And moreover is really nobody looking at these > >>> oslo.log bugs ? > >> > >> The Oslo team is small, but we do pay attention to bug reports. I > >> don't think this issue is going to rise to the level of "drop what > >> you're doing and help because the world is on fire", so I think > >> Sean is just encouraging you to have a little patience. > >> > >> Please do go ahead and open a bug and attach (or paste into the > >> description) an example of what the log output for your service looks > >> like. > >> > >> Doug > > > > Earlier in the thread you mentioned running the newton versions of > > neutron and oslo.log. The newton release has been marked end-of-life > > and is not supported by the community any longer. You may find > > support from your vendor, but if you're deploying on your own we'll > > have to work something else out. If we determine that this is a bug > > in the newton version of the library I won't have any way to give > > you a new release because the branch is closed. > > > > It should be possible for you to update just oslo.log to a more > > recent (and supported), although to do so you would have to get the > > package separately or build your own and that may complicate your > > deployment. > > > > More recent versions of the JSON formatter change the structure of > > the data to include the entire context (including the request id) > > in a separate key. Are you updating to newton as part of upgrading > > further than that? If so, we probably want to wait to debug this > > until you hit the latest supported version you're planning to deploy, > > in case the problem is already fixed there. 
> > > > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From daniel.mellado.es at ieee.org Mon Jan 22 14:46:54 2018 From: daniel.mellado.es at ieee.org (Daniel Mellado) Date: Mon, 22 Jan 2018 15:46:54 +0100 Subject: [openstack-dev] [kuryr][libnetwork] Release kuryr-libnetwork 1.x for Queens In-Reply-To: References: Message-ID: +1 El 21/1/18 a las 8:13, Irena Berezovsky escribió: > +1 > > On Fri, Jan 19, 2018 at 9:42 PM, Hongbin Lu > wrote: > > Hi Kuryr team, > > I think Kuryr-libnetwork is ready to move out of beta status. I > propose to make the first 1.x release of Kuryr-libnetwork for > Queens and cut a stable branch on it. What do you think about this > proposal? > > Best regards, > Hongbin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From MM9745 at att.com Mon Jan 22 14:49:15 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Mon, 22 Jan 2018 14:49:15 +0000 Subject: [openstack-dev] [openstack-helm] Rocky PTG planning Message-ID: <7C64A75C21BB8D43BD75BB18635E4D89654BD9C8@MOSTLS1MSGUSRFF.ITServices.sbc.com> I’ve created an etherpad [1] to capture/plan topics for the OpenStack-Helm team to discuss at the Rocky PTG. Please add on more topics we should discuss in Dublin – it’s a non-prioritized list; we can prioritize if needed beforehand. [1]: https://etherpad.openstack.org/p/openstack-helm-ptg-rocky Thanks! Matt McEuen -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruby.loo at intel.com Mon Jan 22 14:59:00 2018 From: ruby.loo at intel.com (Loo, Ruby) Date: Mon, 22 Jan 2018 14:59:00 +0000 Subject: [openstack-dev] [ironic] FFE - Requesting FFE for Routed Networks support. In-Reply-To: <9b4d2edd-e718-09f3-13f0-638d5f4351a6@redhat.com> References: <1516182841.12010.13.camel@redhat.com> <9b4d2edd-e718-09f3-13f0-638d5f4351a6@redhat.com> Message-ID: <48B0A753-F9E4-461A-9E6A-23A82A3466B6@intel.com> /me +1 too. --ruby On 2018-01-17, 10:05 AM, "Dmitry Tantsur" wrote: Hi! I'm essentially +1 on granting this FFE, as it's a low-risk work for a great feature. See one comment inline. On 01/17/2018 10:54 AM, Harald Jensås wrote: > Requesting FFE for Routed Network support in networking-baremetal. > ------------------------------------------------------------------- > > > # Pros > ------ > With the patches up for review[7] we have a working ml2 agent; > __depends on neutron fix__; and mechanism driver combination that > enables support to bind ports on neutron routed networks. 
> > Specifically we report the bridge_mappings data to neutron, which > enable the _find_candidate_subnets() method in neutron ipam[1] to > succeed in finding a candidate subnet available to the ironic node when > ports on routed segments are bound. > > This functionality will allow users to take advantage of the > functionality added in DHCP Agent[2] which enables the DHCP agent to > service other subnets on the network via DHCP relay. For Ironic this > means we can support deploying nodes on a remote L3 network, e.g > different datacenter or different rack/rack-row. > > > > # Cons > ------ > Integration with placement does not currently work. > > Neutron uses Nova host-aggregates in combination with Placement. > Specifically hosts are added to a host-aggregate for segments based on > SEGMENT_HOST_MAPPING. Ironic nodes cannot currently be added to host- > aggregates in Nova. Because of this the following will appear in the > neutron logs when ironic-neutron agent is started: > RESP BODY: {"itemNotFound": {"message": "Compute host id> could not be found.", "code": 404}} > > Also the placement api cannot be used to find good candidate ironic > nodes with a baremetal port on the correct segment. This will have to be worked around by the operator via capabilities and flavor properties or manual additions to resource providers in placement. > > Depending on the direction of other projects, neutron and nova, the way > placement will finally work is not certain. > > Either the nova work [3] and [4], or a neutron change to use placement > only or a fallback to placement in neutron would be possible. In either > case there should be no need to change the networking-baremetal agent > or mechanism driver. > > > # Risks > ------- > Unless this bug[5] is fixed we might break the current baremetal > mechanism driver functionality. I have proposed a patch[6] to neutron > that fix the issue. In case no fix lands for this neutron bug soon we > should probably push these changes to Rocky. Let's add Depends-On to the first patch in the chain to make sure your patches don't merge until the fix is merged. > > > # Core reviewers > ---------------- > Julia Kreger, Sam Betts > > > > > [1] https://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/ip > am_backend_mixin.py#n697 > [2] https://review.openstack.org/#/c/468744/ > [3] https://review.openstack.org/#/c/421009/ > [4] https://review.openstack.org/#/c/421011/ > [5] https://bugs.launchpad.net/neutron/+bug/1743579 > [6] https://review.openstack.org/#/c/534449/ > [7] https://review.openstack.org/#/q/project:openstack/networking-barem > etal > > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gr at ham.ie Mon Jan 22 15:04:31 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 22 Jan 2018 15:04:31 +0000 Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs In-Reply-To: References: Message-ID: On 19/01/18 16:27, Andrea Frittoli wrote: > > Thanks for the summary! > > To be honest I don't see why this decision has to be difficult to take. I think the confusion comes from one thing being decided already, and now conflicting direction is being given to teams, without anyone updating the Governance repo. 
(It is not as if there was not plenty of warning, I have been raising it as an issue for over a year) > Nothing we decide today is written in stone and the main risk ahead of > us is to take > a decision that requires a lot of upfront work and that it ends up > providing no > significant benefit, or even making things worst in some aspect. So we > may try one way > today and if we hit some significant issue we can still change. > > TL;DR my preferred option would be number (2) - it's the least initial > effort, so the > least risk, and deciding for (2) now won't make it any difficult in the > future to switch > to option (1) or option (3). I'm not pushing back on (2), I just think > (1) is more convenient. > Details below each option. >   > > > So far the patch proposes three options: > > 1) All trademark-related tests should go in the tempest repo, in > accordance >    with the original resolution. This would mean that even projects > that have >    never had tests in tempest would now have to add at least some of > their >    black-box tests to tempest. > > > This option is a valid one, but I think it introduces too much extra > work and > testing complications for too little benefit. What it does do is *guarantee* that the InterOp suite will work, as it will be CI'd. I see these programs as important enough that we should CI the tooling used for them, but I seem to be in a minority. >   > > The value of this option is that centralizes tests used for the > Interop program > in a location where interop-minded folks from the QA team can > control them. > > > There are other ways this can be achieved - it is possible to mark tests > so that > team may require a +1 from interop/qa when specific tests are modified. Is there? AFAIK gerrit does not do per file path permissions, so unless we have a job that just checks the votes on a patch, and passes or fails if a test changes (which would be awful for the teams) we cannot do that. >   > > The > downside is that projects that so far have avoided having a > dependency on > tempest will now lose some control over the black-box tests that > they use for > functional and integration that would now also be used for trademark > certification. > There's also concern for the review bandwidth of the QA team - we > can't expect > the QA team to be continually responsible for an ever-growing list > of projects > and their trademark tests. > > > If we restrict to interop tests, the review bandwidth issue is probably > not so bad. > The QA team would have to request the domain knowledge required for proper > review from the respective teams anyways. > > There are other complications introduced though: > > - service clients and other common bits (config and so) would have to > move to >   Tempest since we cannot have tempest depend on plugins. But then modifying >   those common bits on Tempest side would risk to break non-interop tests. >   Solution for that is to make all those bits stable interfaces for plugins Is this not already the case? e.g. the neutron plugin uses the nova service client already. This would also help for the neutron plugin which is currently importing DNS service clients from the designate-tempest-plugin repo - having them in the tempest repo would allow them to to be more stable, and remove the extra dependency. >   > - tempest would have to add new CI jobs to run the interop tests from add-on >   program on every tempest change so that the new code is tested on a > regular >   basis. 
That is a good thing, and we should probably do that for the other 2 options as well... > > - heat tests are wrapped in a Tempest plugin but actually written in > Gabbi so we >   would need to add Gabbi as a dependency to Tempest > > Nothing too terrible really, but I think it might not be worth the extra > effort, especially > now that teams available resources are getting thinner and thinner. > > > 2) All trademark-related tests for *add-on projects* should be > sourced from >    plugins external to tempest. > > I wouldn't go as far as saying they "should" be sourced. I think saying > that they > *may* be sourced from a plugin is enough. Apart from that this is my > favourite > option. The only thing required really is updating the resolution and we are > ready to go. > > With all the plugins no in own branchless repositories, the usability > concern is > not so strong anymore really. >   > > The value of this option is it allows project teams to retain > control over > these tests. > > > Other value is given by simplicity, least changes to implement and low risk. I do think if we do this for add-ons, we should be doing it for other programs as well, so that resolution will just be deleted,and it will allow other teams to have the same control, and further reduce the review load for QA. >   > > The potential problem with it is that individual project teams are > not necessarily reviewing test changes with an eye for interop > concerns and so > could inadvertently change the behavior of the > trademark-verification tools. This is a big issue, and I think it is overlooked. So, thus far, we have had 3 responses from people working on the QA tooling, with one for Option 1, one for Option 2, and one for Option 1 if Heat + Designate are now part of "core". If I start migrating tooling to tempest from designate, will it be -2'd? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From Greg.Waines at windriver.com Mon Jan 22 15:59:21 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Mon, 22 Jan 2018 15:59:21 +0000 Subject: [openstack-dev] [masakari] Questions on masakari CLI for hosts and segments Message-ID: masakari segment-create --name segment-1 --recovery-method auto --service-type xyz For ‘service-type’, · what are the semantics of this parameter ? · what are the allowed values ? · what is a typical or example value ? masakari host-create --name devstack-masakari --type xyz --control-attributes xyz --segment-id segment-1 For ‘type’, * what are the semantics of this parameter ? * what are the allowed values ? * what is a typical or example value ? For ‘control-attributes, * what are the semantics of this parameter ? * what are the allowed values ? * what is a typical or example value ? And what are the semantics of Masakari Failover Segments ? My guess is that · hosts belong to one and only one masakari segment · when a host fails, the VMs formerly running on that host will ONLY be recovered to other hosts within the same segment Correct ? Anything else ? Greg. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From andrea.frittoli at gmail.com  Mon Jan 22 16:00:42 2018
From: andrea.frittoli at gmail.com (Andrea Frittoli)
Date: Mon, 22 Jan 2018 16:00:42 +0000
Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for
 interop programs
In-Reply-To: 
References: 
Message-ID: 

On Mon, Jan 22, 2018 at 3:05 PM Graham Hayes wrote:

>
>
> On 19/01/18 16:27, Andrea Frittoli wrote:
> >
> > Thanks for the summary!
> >
> > To be honest I don't see why this decision has to be difficult to take.
>
> I think the confusion comes from one thing being decided already, and
> now conflicting direction is being given to teams, without anyone
> updating the Governance repo.
>
> (It is not as if there was not plenty of warning, I have been raising it
> as an issue for over a year)
>
> > Nothing we decide today is written in stone and the main risk ahead of
> > us is to take
> > a decision that requires a lot of upfront work and that it ends up
> > providing no
> > significant benefit, or even making things worst in some aspect. So we
> > may try one way
> > today and if we hit some significant issue we can still change.
> >
> > TL;DR my preferred option would be number (2) - it's the least initial
> > effort, so the
> > least risk, and deciding for (2) now won't make it any difficult in the
> > future to switch
> > to option (1) or option (3). I'm not pushing back on (2), I just think
> > (1) is more convenient.
> > Details below each option.
> >
> >
> >
> > So far the patch proposes three options:
> >
> > 1) All trademark-related tests should go in the tempest repo, in
> > accordance
> >    with the original resolution. This would mean that even projects
> > that have
> >    never had tests in tempest would now have to add at least some of
> > their
> >    black-box tests to tempest.
> >
> >
> > This option is a valid one, but I think it introduces too much extra
> > work and
> > testing complications for too little benefit.
>
> What it does do is *guarantee* that the InterOp suite will work, as it
> will be CI'd. I see these programs as important enough that we should CI
> the tooling used for them, but I seem to be in a minority.
>

Add-ons interoperability tests will be CI'd for every change in Tempest as
long as they are executed in a job that runs on every change in Tempest.
This can be achieved regardless of the location of the tests, and having
the tests in the Tempest tree is not by itself a guarantee that they will
be executed against every change.

>
>
>
> >
> > The value of this option is that centralizes tests used for the
> > Interop program
> > in a location where interop-minded folks from the QA team can
> > control them.
> >
> >
> > There are other ways this can be achieved - it is possible to mark tests
> > so that
> > team may require a +1 from interop/qa when specific tests are modified.
>
> Is there? AFAIK gerrit does not do per file path permissions, so unless
> we have a job that just checks the votes on a patch, and passes or fails
> if a test changes (which would be awful for the teams) we cannot do
> that.
>

If we really want to enforce having a vote from someone it may be tricky,
yes, but I don't think enforcement is what we need, rather awareness. For
governance and project-config patches, reviewers always ask for a +1 from
the project PTL where relevant, and add-on project reviewers could do the
same. To help build awareness we could have automation in place to post a
comment to Gerrit, like we do for elastic recheck.
We could do that on every change to the plugin in the beginning and include
a link to the interoperability recommendation to help reviewers in their job.

>
>
>
>
>
> The
> > downside is that projects that so far have avoided having a
> > dependency on
> > tempest will now lose some control over the black-box tests that
> > they use for
> > functional and integration that would now also be used for trademark
> > certification.
> > There's also concern for the review bandwidth of the QA team - we
> > can't expect
> > the QA team to be continually responsible for an ever-growing list
> > of projects
> > and their trademark tests.
> >
> >
> > If we restrict to interop tests, the review bandwidth issue is probably
> > not so bad.
> > The QA team would have to request the domain knowledge required for proper
> > review from the respective teams anyways.
> >
> > There are other complications introduced though:
> >
> > - service clients and other common bits (config and so) would have to
> > move to
> >   Tempest since we cannot have tempest depend on plugins. But then modifying
> >   those common bits on Tempest side would risk to break non-interop tests.
> >   Solution for that is to make all those bits stable interfaces for plugins
>
> Is this not already the case? e.g. the neutron plugin uses the nova
> service client already.
>

What I was saying is that Tempest cannot depend on a Tempest plugin, so
service clients should be in Tempest. Which is absolutely fine, I was just
listing out the work that needs to be done if we move add-on
interoperability tests into Tempest.

>
> This would also help for the neutron plugin which is currently importing
> DNS service clients from the designate-tempest-plugin repo - having them
> in the tempest repo would allow them to to be more stable, and remove
> the extra dependency.
>
> >
> > - tempest would have to add new CI jobs to run the interop tests from add-on
> >   program on every tempest change so that the new code is tested on a
> > regular
> >   basis.
>
> That is a good thing, and we should probably do that for the other 2
> options as well...
>
> >
> > - heat tests are wrapped in a Tempest plugin but actually written in
> > Gabbi so we
> >   would need to add Gabbi as a dependency to Tempest
> >
> > Nothing too terrible really, but I think it might not be worth the extra
> > effort, especially
> > now that teams available resources are getting thinner and thinner.
> >
> >
> > 2) All trademark-related tests for *add-on projects* should be
> > sourced from
> >    plugins external to tempest.
> >
> > I wouldn't go as far as saying they "should" be sourced. I think saying
> > that they
> > *may* be sourced from a plugin is enough. Apart from that this is my
> > favourite
> > option. The only thing required really is updating the resolution and we are
> > ready to go.
> >
> > With all the plugins no in own branchless repositories, the usability
> > concern is
> > not so strong anymore really.
> >
> >
> > The value of this option is it allows project teams to retain
> > control over
> > these tests.
> >
> >
> > Other value is given by simplicity, least changes to implement and low risk.
>
> I do think if we do this for add-ons, we should be doing it for other
> programs as well, so that resolution will just be deleted,and it will
> allow other teams to have the same control, and further reduce the
> review load for QA.
>

If you mean to move interoperability tests only, this would create an
artificial split in tests and it would not have a real impact on the review
load of the QA team. If you mean moving all tests, that would take a while
since we have a number of stable interfaces (including config) that we
cannot just remove. Besides, tests are an integral part of Tempest; I would
not move them out.

> >
> >
> > The potential problem with it is that individual project teams are
> > not necessarily reviewing test changes with an eye for interop
> > concerns and so
> > could inadvertently change the behavior of the
> > trademark-verification tools.
>
> This is a big issue, and I think it is overlooked.
>
>
>
> So, thus far, we have had 3 responses from people working on the QA
> tooling, with one for Option 1, one for Option 2, and one for Option 1
> if Heat + Designate are now part of "core".
>
> If I start migrating tooling to tempest from designate, will it be -2'd?
>

I still don't understand why you'd want to invest so much of your time and
of the QA team's time in moving things around now; I think it would be too
much of trying to solve a problem before it exists. But if the majority
agrees this is the best course of action I won't oppose it.

I would start with an etherpad or spec or something with a checklist of
things to be done and the expected target state in terms of what's where
and what's tested where. The plan of what needs to be tested where has to
be done regardless of the location of the tests.

Andrea Frittoli (andreaf)

>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gfidente at redhat.com  Mon Jan 22 16:34:17 2018
From: gfidente at redhat.com (Giulio Fidente)
Date: Mon, 22 Jan 2018 17:34:17 +0100
Subject: [openstack-dev] [tripleo] FFE nfs_ganesha integration
Message-ID: <7dfdaada-bfae-f4f5-b8d9-e541757585e2@redhat.com>

hi,

I would like to request an FFE for the integration of nfs_ganesha, which
will provide a better user experience to manila users.

This work was slowed down by a few factors:

- it depended on the migration of tripleo to the newer Ceph version
(luminous), which happened during the queens cycle

- it depended on some additional functionalities to be implemented in
ceph-ansible which have only recently been made available to tripleo/ci

- it proposes the addition of an additional (and optional) network
(storagenfs) so that guests don't need connectivity to the ceph frontend
network to be able to use the cephfs shares

The submissions are on review and partially testable in CI [1].

If accepted, I'd like to reassign the blueprint [2] back to the queens
cycle, as it was initially.

Thanks

1. https://review.openstack.org/#/q/status:open+topic:bp/nfs-ganesha
2.
https://blueprints.launchpad.net/tripleo/+spec/nfs-ganesha -- Giulio Fidente GPG KEY: 08D733BA From whayutin at redhat.com Mon Jan 22 17:20:50 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 22 Jan 2018 12:20:50 -0500 Subject: [openstack-dev] Many timeouts in zuul gates for TripleO In-Reply-To: References: <1fed6c03-a7a7-cbcb-3e3e-16638f482af0@nemebean.com> <20180120023847.GA12633@localhost.localdomain> Message-ID: On Mon, Jan 22, 2018 at 6:55 AM, Or Idgar wrote: > Hi, > Still having timeouts but now in tripleo-heat-templates experimental gates > (tripleo-ci-centos-7-ovb-fakeha-caserver and tripleo-ci-centos-7-ovb-ha- > tempest-oooq). > > see examples: > http://logs.openstack.org/31/518331/23/experimental- > tripleo/tripleo-ci-centos-7-ovb-fakeha-caserver/7502e82/ > http://logs.openstack.org/31/518331/23/experimental- > tripleo/tripleo-ci-centos-7-ovb-ha-tempest-oooq/46e8e0d/ > > Anyone have an idea what we can do to fix it? > > Thanks, > Idgar > > On Sat, Jan 20, 2018 at 4:38 AM, Paul Belanger > wrote: > >> On Fri, Jan 19, 2018 at 11:23:45AM -0600, Ben Nemec wrote: >> > >> > >> > On 01/18/2018 09:45 AM, Emilien Macchi wrote: >> > > On Thu, Jan 18, 2018 at 6:34 AM, Or Idgar wrote: >> > > > Hi, >> > > > we're encountering many timeouts for zuul gates in TripleO. >> > > > For example, see >> > > > http://logs.openstack.org/95/508195/28/check-tripleo/tripleo >> -ci-centos-7-ovb-ha-oooq/c85fcb7/. >> > > > >> > > > rechecks won't help and sometimes specific gate is end successfully >> and >> > > > sometimes not. >> > > > The problem is that after recheck it's not always the same gate >> which is >> > > > failed. >> > > > >> > > > Is there someone who have access to the servers load to see what >> cause this? >> > > > alternatively, is there something we can do in order to reduce the >> running >> > > > time for each gate? >> > > >> > > We're migrating to RDO Cloud for OVB jobs: >> > > https://review.openstack.org/#/c/526481/ >> > > It's a work in progress but will help a lot for OVB timeouts on RH1. >> > > >> > > I'll let the CI folks comment on that topic. >> > > >> > >> > I noticed that the timeouts on rh1 have been especially bad as of late >> so I >> > did a little testing and found that it did seem to be running more >> slowly >> > than it should. After some investigation I found that 6 of our compute >> > nodes have warning messages that the cpu was throttled due to high >> > temperature. I've disabled 4 of them that had a lot of warnings. The >> other >> > 2 only had a handful of warnings so I'm hopeful we can leave them active >> > without affecting job performance too much. It won't accomplish much >> if we >> > disable the overheating nodes only to overload the remaining ones. >> > >> > I'll follow up with our hardware people and see if we can determine why >> > these specific nodes are overheating. They seem to be running 20 >> degrees C >> > hotter than the rest of the nodes. >> > >> Did tripleo-test-cloud-rh1 get new kernels applied for meltdown / spectre, >> possible that is impacting performance too? 
>> >> -Paul
>> >>
>> >> __________________________________________________________________________
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> Best regards,
> Or Idgar
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

FYI, we created an LP bug to track decommissioning the OVB jobs on rh1 and moving them to third-party CI. Up for comments: https://bugs.launchpad.net/tripleo/+bug/1744763
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From saverio.proto at switch.ch Mon Jan 22 17:45:15 2018
From: saverio.proto at switch.ch (Saverio Proto)
Date: Mon, 22 Jan 2018 18:45:15 +0100
Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID
In-Reply-To: <1516630943-sup-4108@lrrr.local>
References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> <1516293565-sup-9123@lrrr.local> <1516295114-sup-7111@lrrr.local> <1516630943-sup-4108@lrrr.local>
Message-ID:

Hello Doug,

in the extra section I see just {"project": "unknown", "version": "unknown"}

here is a full line from nova-api:

{"thread_name": "MainThread", "extra": {"project": "unknown", "version": "unknown"}, "process": 31142, "relative_created": 3459415335.4091644, "module": "wsgi", "message": "2001:xxx:xxxx:8100::80,2001:xxx:xxxx:81ff::b0 \"GET /v2/64b5b50eb21d4efe9783eb1d81a9ec65/os-services HTTP/1.1\" status: 200 len: 1812 time: 0.1813300", "hostname": "nova-0", "filename": "wsgi.py", "levelno": 20, "lineno": 555, "asctime": "2018-01-22 18:37:02,312", "msg": "2001:xxx:xxxx:8100::80,2001:xxx:xxxx:81ff::b0 \"GET /v2/64b5b50eb21d4efe9783eb1d81a9ec65/os-services HTTP/1.1\" status: 200 len: 1812 time: 0.1813300", "args": [], "process_name": "MainProcess", "name": "nova.osapi_compute.wsgi.server", "thread": 140414249163824, "created": 1516642622.312235, "traceback": null, "msecs": 312.23511695861816, "funcname": "handle_one_response", "pathname": "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", "levelname": "INFO"}

thank you

Saverio

On 22.01.18 15:33, Doug Hellmann wrote:
> Excerpts from Saverio Proto's message of 2018-01-21 18:09:03 +0100:
>> Hello,
>>
>> I figured out a bug has already been open for a long time :(
>> https://bugs.launchpad.net/oslo.log/+bug/1564931
>>
>> And there is already a review:
>> https://review.openstack.org/#/c/367514/
>>
>> it looks like the review was not merged, and it went to abandoned
>> because of no progress on it for a long time.
>>
>> I rebased that code on the current master:
>> https://review.openstack.org/536149
>
> That patch is not needed on master.
>
> We added the full context as a nested value under the "context" key when
> http://git.openstack.org/cgit/openstack/oslo.log/commit/oslo_log/formatters.py?id=1b012d0fc6811f00e032e52ed586fe37e157584d
> landed.
> That change was released as part of 3.35.0 and as the test in
> https://review.openstack.org/#/c/536164/ shows the request_id and
> global_request_id values are present in the output.
>
> Before that change (and after, as part of our effort to maintain
> backwards compatibility) the values from the context were added to the
> "extra" section of the output message. That behavior is present in
>
> master: http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/formatters.py?h=master#n246
> pike: http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/formatters.py?h=stable%2Fpike#n300
> ocata: http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/formatters.py?h=stable%2Focata#n156
>
> It also appears to be present for newton, the version you said you are
> using:
>
> http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/formatters.py?h=newton-eol#n153
>
> Have you looked at the "extras" section of the log output?
>
> Could you provide some sample log output?
>
> Doug
>
>>
>> Saverio
>>
>> On 18.01.18 18:14, Doug Hellmann wrote:
>>> Excerpts from Doug Hellmann's message of 2018-01-18 11:45:28 -0500:
>>>> Excerpts from Saverio Proto's message of 2018-01-18 14:49:21 +0100:
>>>>> Hello all,
>>>>>
>>>>> well this oslo.log library looks like a core thing that is used by
>>>>> multiple projects. I feel scared hearing that bugs opened on that
>>>>> project are probably just ignored.
>>>>>
>>>>> should I reach out to the current PTL of OSLO ?
>>>>> https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2580
>>>>>
>>>>> ChangBo Guo are you reading this thread ? Do you think this is a bug or
>>>>> a missing feature ? And moreover is really nobody looking at these
>>>>> oslo.log bugs ?
>>>>
>>>> The Oslo team is small, but we do pay attention to bug reports. I
>>>> don't think this issue is going to rise to the level of "drop what
>>>> you're doing and help because the world is on fire", so I think
>>>> Sean is just encouraging you to have a little patience.
>>>>
>>>> Please do go ahead and open a bug and attach (or paste into the
>>>> description) an example of what the log output for your service looks
>>>> like.
>>>>
>>>> Doug
>>>
>>> Earlier in the thread you mentioned running the newton versions of
>>> neutron and oslo.log. The newton release has been marked end-of-life
>>> and is not supported by the community any longer. You may find
>>> support from your vendor, but if you're deploying on your own we'll
>>> have to work something else out. If we determine that this is a bug
>>> in the newton version of the library I won't have any way to give
>>> you a new release because the branch is closed.
>>>
>>> It should be possible for you to update just oslo.log to a more
>>> recent (and supported) version, although to do so you would have to
>>> get the package separately or build your own and that may complicate
>>> your deployment.
>>>
>>> More recent versions of the JSON formatter change the structure of
>>> the data to include the entire context (including the request id)
>>> in a separate key. Are you updating to newton as part of upgrading
>>> further than that? If so, we probably want to wait to debug this
>>> until you hit the latest supported version you're planning to deploy,
>>> in case the problem is already fixed there.
>>>
>>> Doug
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.proto at switch.ch, http://www.switch.ch
http://www.switch.ch/stories
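A quick way to see which of the two layouts a given deployment emits is to pull the request ID out of a captured JSON log line. Below is a small stdlib-only sketch; the key names follow Doug's description above ("context" on oslo.log >= 3.35.0, "extra" before that), but treat this as an illustration rather than project code, and the log file name is hypothetical:

import json

def request_id_from_record(line):
    """Extract request_id from one oslo.log JSON record, if present."""
    record = json.loads(line)
    # Newer oslo.log nests the whole request context under "context".
    context = record.get('context') or {}
    if context.get('request_id'):
        return context['request_id']
    # Older releases only copied selected values into "extra".
    return record.get('extra', {}).get('request_id')

# 'nova-api.json.log' is a hypothetical file name.
with open('nova-api.json.log') as log_file:
    for line in log_file:
        print(request_id_from_record(line))

If every line prints None, the deployment is on the old layout and the context never reached the formatter, which matches the symptom discussed in this thread.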
From dtantsur at redhat.com Mon Jan 22 17:53:02 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Mon, 22 Jan 2018 18:53:02 +0100
Subject: [openstack-dev] [ironic] FFE - Requesting FFE for Routed Networks support.
In-Reply-To: <1516182841.12010.13.camel@redhat.com>
References: <1516182841.12010.13.camel@redhat.com>
Message-ID: <9ea1d719-be4a-ab62-1b21-07a549bce607@redhat.com>

This FFE was approved at today's meeting. Please note that the deadline for merging it is Fri, Feb 2nd.

On 01/17/2018 10:54 AM, Harald Jensås wrote:
> Requesting FFE for Routed Network support in networking-baremetal.
> -------------------------------------------------------------------
>
> # Pros
> ------
> With the patches up for review[7] we have a working ml2 agent
> (__depends on neutron fix__) and mechanism driver combination that
> enables support to bind ports on neutron routed networks.
>
> Specifically, we report the bridge_mappings data to neutron, which
> enables the _find_candidate_subnets() method in neutron ipam[1] to
> succeed in finding a candidate subnet available to the ironic node when
> ports on routed segments are bound.
>
> This will allow users to take advantage of the functionality added in
> the DHCP Agent[2] which enables the DHCP agent to service other subnets
> on the network via DHCP relay. For Ironic this means we can support
> deploying nodes on a remote L3 network, e.g. a different datacenter or
> a different rack/rack-row.
>
> # Cons
> ------
> Integration with placement does not currently work.
>
> Neutron uses Nova host-aggregates in combination with Placement.
> Specifically, hosts are added to a host-aggregate for segments based on
> SEGMENT_HOST_MAPPING. Ironic nodes cannot currently be added to host-
> aggregates in Nova. Because of this, the following will appear in the
> neutron logs when the ironic-neutron agent is started:
> RESP BODY: {"itemNotFound": {"message": "Compute host <host id> could not be found.", "code": 404}}
>
> Also, the placement api cannot be used to find good candidate ironic
> nodes with a baremetal port on the correct segment. This will have to
> be worked around by the operator via capabilities and flavor properties
> or manual additions to resource providers in placement.
>
> Depending on the direction of other projects, neutron and nova, the way
> placement will finally work is not certain.
>
> Either the nova work [3] and [4], or a neutron change to use placement
> only or a fallback to placement in neutron would be possible. In either
> case there should be no need to change the networking-baremetal agent
> or mechanism driver.
>
> # Risks
> -------
> Unless this bug[5] is fixed we might break the current baremetal
> mechanism driver functionality. I have proposed a patch[6] to neutron
> that fixes the issue. In case no fix lands for this neutron bug soon we
> should probably push these changes to Rocky.
>
> # Core reviewers
> ----------------
> Julia Kreger, Sam Betts
>
> [1] https://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/ipam_backend_mixin.py#n697
> [2] https://review.openstack.org/#/c/468744/
> [3] https://review.openstack.org/#/c/421009/
> [4] https://review.openstack.org/#/c/421011/
> [5] https://bugs.launchpad.net/neutron/+bug/1743579
> [6] https://review.openstack.org/#/c/534449/
> [7] https://review.openstack.org/#/q/project:openstack/networking-baremetal
>

From dtantsur at redhat.com Mon Jan 22 18:07:12 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Mon, 22 Jan 2018 19:07:12 +0100
Subject: [openstack-dev] [ironic] Deadlines, feature freeze and exceptions
Message-ID: <127ec621-971a-dbc4-4966-e46005838d1f@redhat.com>

Hi all!

We're near the end of the cycle. Here are some important dates to be mindful of:

Thu, Jan 25th - final Queens releases of python-ironicclient and python-ironic-inspector-client. Any features that land after that point will get to Rocky, no exceptions. Yes, even if the API itself lands in ironic in Queens.

Thu, Jan 25th - beginning of the feature freeze for ironic and many other projects. No features should land after that date without getting a formal exception first - please pay attention when approving the patches. A procedure for a feature freeze exception is outlined below.

Fri, Feb 2nd - hard feature freeze. All features with an exception must land by this point. Starting Monday and until branching, we only land bug fixes and documentation updates to master.

Thu, Feb 8th - final feature releases for all remaining projects and creation of stable/queens. At this point the feature freeze is lifted and master is opened for Rocky development.

Now, how to request a feature freeze exception. Please:

* Send an email to this mailing list, with [ironic] and FFE in its subject.
* Outline the reason why you think the feature should go to Queens, and what the downsides are. Keep in mind that no API additions will get client support after this Thursday.
* Evaluate and explain the risks of landing the feature so late in the cycle.
* Finally, please find at least two cores that agree to review your changes during the feature freeze window, and include their names. This is important, we don't need FFEs that won't get reviewed.

Happy hacking,
Dmitry

From sean.mcginnis at gmx.com Mon Jan 22 20:51:14 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Mon, 22 Jan 2018 14:51:14 -0600
Subject: [openstack-dev] [Release-job-failures] Release of openstack/networking-ovn failed
In-Reply-To:
References:
Message-ID: <20180122205113.GA24256@sm-xps>

On Mon, Jan 22, 2018 at 08:42:05PM +0000, zuul at openstack.org wrote:
> Build failed.
>
> - release-openstack-python http://logs.openstack.org/17/17fe24c0449ef2067ed7c7e0e51397711becef0d/release/release-openstack-python/1b3f482/ : POST_FAILURE in 7m 59s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
>

This looks like a transient failure with an issue pulling packages from pypi.
http://logs.openstack.org/17/17fe24c0449ef2067ed7c7e0e51397711becef0d/release/release-openstack-python/1b3f482/job-output.txt.gz#_2018-01-22_20_41_09_050952

I will see if someone can reenqueue this to rerun the jobs, and if that is not possible we can revert and recommit the release patch.

Sean

From jim at jimrollenhagen.com Mon Jan 22 20:55:17 2018
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Mon, 22 Jan 2018 15:55:17 -0500
Subject: [openstack-dev] [ironic] Remove in-tree policy and config?
In-Reply-To:
References:
Message-ID:

Huge +1, I didn't realize this was in docs now. We can finally stop doing it manually \o/

// jim

On Mon, Jan 22, 2018 at 7:45 AM, John Garbutt wrote:

> Hi,
>
> While I was looking at the traits work, I noticed we still have policy and
> config in tree for ironic and ironic inspector:
>
> http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/policy.json.sample
> http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/ironic.conf.sample
> http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/policy.json
>
> And in a similar way:
> http://git.openstack.org/cgit/openstack/ironic-inspector/tree/policy.yaml.sample
> http://git.openstack.org/cgit/openstack/ironic-inspector/tree/example.conf
>
> There is an argument that says we shouldn't force operators to build a
> full environment to generate these, but this has been somewhat superseded
> by us having good docs:
>
> https://docs.openstack.org/ironic/latest/configuration/sample-config.html
> https://docs.openstack.org/ironic/latest/configuration/sample-policy.html
> https://docs.openstack.org/ironic-inspector/latest/configuration/sample-config.html
> https://docs.openstack.org/ironic-inspector/latest/configuration/sample-policy.html
>
> It could look something like this (but with the tests working...):
> https://review.openstack.org/#/c/536349
>
> What do you all think?
>
> Thanks,
> johnthetubaguy
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From edmondsw at us.ibm.com Mon Jan 22 21:04:41 2018
From: edmondsw at us.ibm.com (William M Edmonds)
Date: Mon, 22 Jan 2018 16:04:41 -0500
Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 15 January 2018
In-Reply-To:
References:
Message-ID:

welcome, Gage! Congrats!

Boris, Steve, Brant, Brad... you are and will be missed.

W. Matthew Edmonds
Sr. Software Engineer, IBM Power Systems
Email: edmondsw at us.ibm.com
Phone: (919) 543-7538 / Tie-Line: 441-7538

From: Colleen Murphy
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 01/19/2018 12:55 PM
Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 15 January 2018

# Keystone Team Update - Week of 15 January 2018

## News

### Core team changes

We added a new core reviewer! Thanks Gage Hugo for all your hard work and for stepping up to take on more responsibility!

We also lost some core members: Boris Bobrov, Steve Martinelli, Brant Knudson and Brad Topol have stepped down from core membership after having made enormous contributions over the years. We're grateful to them for everything they've done to make keystone better and welcome them back any time.
### Proposed community goals for Rocky

There are five community goals[1][2][3][4][5] proposed for Rocky that are under discussion. In the meeting this week we had some confusion and concerns over whether the proposed goal about pagination links[3] would apply to us. We don't paginate anything in keystone, so the goal wouldn't apply to us. The one that would potentially apply to keystone is about mutable configuration[5]. If you have thoughts on any of these potential community goals, including whether the team has the capacity to take on this work, make your voice heard on the reviews.

### PTG Planning

We still need to put some thought into our agenda for the PTG. Add your ideas to the etherpad[6] and also add your name if you're going to be attending so that we can organize a team dinner.

I noticed that no one requested a BM/VM room for the cross-project days of the PTG[7]. If we want to organize discussions with those teams we might want to start thinking about that now, but we will be able to book rooms spontaneously if we want to.

[1] https://review.openstack.org/513875
[2] https://review.openstack.org/532361
[3] https://review.openstack.org/532627
[4] https://review.openstack.org/533544
[5] https://review.openstack.org/534605
[6] https://etherpad.openstack.org/p/keystone-rocky-ptg
[7] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126335.html

## Recently Merged Changes

Search query: https://goo.gl/hdD9Kw

We merged 38 changes this week. Lots of these were major stepping stones for our new features.
## Changes that need Attention

Search query: https://goo.gl/h9knRA

There are 55 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. Please prioritize reviews for python-keystoneclient and our major feature initiatives (see below).

## Milestone Outlook

https://releases.openstack.org/queens/schedule.html

The non-client freeze was yesterday. Keystonemiddleware[8] and oslo.policy[9] were released in time. Unfortunately we dropped the ball on keystoneauth and there are some important changes we want to get in for this release. The release team has graciously granted us an exception but we'll have to make sure these changes are merged by Monday.

Client and feature freeze is next week on THURSDAY[10]. Please prioritize reviews for python-keystoneclient[11] and our major feature initiatives[12][13][14].

[8] https://review.openstack.org/#/c/531423/
[9] https://review.openstack.org/#/c/531734/
[10] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126351.html
[11] https://review.openstack.org/#/q/project:openstack/python-keystoneclient+is:open
[12] https://review.openstack.org/#/q/is:open+topic:bp/system-scope
[13] https://review.openstack.org/#/q/is:open+topic:bp/unified-limits
[14] https://review.openstack.org/#/q/is:open+topic:bp/application-credentials

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad:
https://etherpad.openstack.org/p/keystone-team-newsletter

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL:

From mark at stackhpc.com Mon Jan 22 21:11:49 2018
From: mark at stackhpc.com (Mark Goddard)
Date: Mon, 22 Jan 2018 21:11:49 +0000
Subject: [openstack-dev] [ironic] FFE request for node traits
Message-ID:

The node traits feature [1] is an essential priority for ironic in Queens, and is an important step in the continuing evolution of scheduling enabled by the placement API. Traits will allow us to move away from capability-based scheduling. Capabilities have several limitations for scheduling, including depending on filters in nova-scheduler rather than allowing placement to select matching hosts. Several upcoming features depend on traits [2].

Landing node traits late in the cycle will lead to less time being available for testing, with a risk that the feature is released with defects. There are changes at most major levels in the code except the drivers, but these are for the most part fairly isolated from existing code. The current issues with the grenade CI job mean that upgrade code paths are not being exercised frequently, and could lead to additional test/bug fix load on the team later in the cycle.

The node traits code patches are all in review [3], and are now generally getting positive reviews or minor negative feedback. rloo and TheJulia have kindly offered to review during the FFE window.

[1] http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/node-traits.html
[2] https://review.openstack.org/#/c/504952/7/specs/approved/config-template-traits.rst
[3] https://review.openstack.org/#/q/topic:bug/1722194+(status:open)

Thanks,
Mark (mgoddard)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From doug at doughellmann.com Mon Jan 22 22:20:41 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 22 Jan 2018 17:20:41 -0500 Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> <1516293565-sup-9123@lrrr.local> <1516295114-sup-7111@lrrr.local> <1516630943-sup-4108@lrrr.local> Message-ID: <1516659378-sup-8232@lrrr.local> Excerpts from Saverio Proto's message of 2018-01-22 18:45:15 +0100: > Hello Doug, > > in the extra session I see just {"project": "unknown", "version": "unknown"} > > here a full line from nova-api: > > {"thread_name": "MainThread", "extra": {"project": "unknown", "version": > "unknown"}, "process": 31142, "relative_created": 3459415335.4091644, > "module": "wsgi", "message": > "2001:xxx:xxxx:8100::80,2001:xxx:xxxx:81ff::b0 \"GET > /v2/64b5b50eb21d4efe9783eb1d81a9ec65/os-services HTTP/1.1\" status: 200 > len: 1812 time: 0.1813300", "hostname": "nova-0", "filename": "wsgi.py", > "levelno": 20, "lineno": 555, "asctime": "2018-01-22 18:37:02,312", > "msg": "2001:xxx:xxxx:8100::80,2001:xxx:xxxx:81ff::b0 \"GET > /v2/64b5b50eb21d4efe9783eb1d81a9ec65/os-services HTTP/1.1\" status: 200 > len: 1812 time: 0.1813300", "args": [], "process_name": "MainProcess", > "name": "nova.osapi_compute.wsgi.server", "thread": 140414249163824, > "created": 1516642622.312235, "traceback": null, "msecs": > 312.23511695861816, "funcname": "handle_one_response", "pathname": > "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", "levelname": "INFO"} It looks like you're running into a limitation of the older version of the library where the context was only logged from openstack source code. This particular log message is coming from the eventlet library. Try running the script below and saving the output to a pastebin. Under the newton version of oslo.log, I get http://paste.openstack.org/show/650566/ and under the queens version I get http://paste.openstack.org/show/650569/ which shows me that the "extra" handling is working more or less the same way but the "context" handling is improved in the newer version (lots of the values are null because I don't fully set up the context, but the request_id field has a valid value). Doug #!/usr/bin/env python from __future__ import print_function import logging from oslo_context import context from oslo_log import formatters, log ch = logging.StreamHandler() ch.setLevel(logging.DEBUG) formatter = formatters.JSONFormatter() ch.setFormatter(formatter) LOG = logging.getLogger() LOG.setLevel(logging.DEBUG) LOG.addHandler(ch) ctx = context.RequestContext(request_id='the-request-id') LOG.debug('without extra') print() LOG.debug('with extra', extra={'context': ctx}) print() log.getLogger().debug('via KeywordArgumentAdapter', context=ctx) From mriedemos at gmail.com Mon Jan 22 23:09:31 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 22 Jan 2018 17:09:31 -0600 Subject: [openstack-dev] [nova] PTL Election Season In-Reply-To: References: Message-ID: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com> On 1/15/2018 11:04 AM, Kendall Nelson wrote: > Election details: https://governance.openstack.org/election/ > > Please read the stipulations and timelines for candidates and electorate > contained in this governance documentation. 
> > Be aware, in the PTL elections if the program only has one candidate, > that candidate is acclaimed and there will be no poll. There will only > be a poll if there is more than one candidate stepping forward for a > program's PTL position. > > There will be further announcements posted to the mailing list as action > is required from the electorate or candidates. This email is for > information purposes only. > > If you have any questions which you feel affect others please reply to > this email thread. > To anyone that cares, I don't plan on running for Nova PTL again for the Rocky release. Queens was my fourth tour and it's definitely time for someone else to get the opportunity to lead here. I don't plan on going anywhere and I'll be here to help with any transition needed assuming someone else (or a couple of people hopefully) will run in the election. It's been a great experience and I thank everyone that has had to put up with me and my obsessive paperwork and process disorder in the meantime. -- Thanks, Matt From jungleboyj at gmail.com Mon Jan 22 23:46:39 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 22 Jan 2018 17:46:39 -0600 Subject: [openstack-dev] [nova] PTL Election Season In-Reply-To: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com> References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com> Message-ID: <89c30068-c746-2a94-261b-86c5766ca4b4@gmail.com> On 1/22/2018 5:09 PM, Matt Riedemann wrote: > On 1/15/2018 11:04 AM, Kendall Nelson wrote: >> Election details: https://governance.openstack.org/election/ >> >> Please read the stipulations and timelines for candidates and >> electorate contained in this governance documentation. >> >> Be aware, in the PTL elections if the program only has one candidate, >> that candidate is acclaimed and there will be no poll. There will >> only be a poll if there is more than one candidate stepping forward >> for a program's PTL position. >> >> There will be further announcements posted to the mailing list as >> action is required from the electorate or candidates. This email is >> for information purposes only. >> >> If you have any questions which you feel affect others please reply >> to this email thread. >> > > To anyone that cares, I don't plan on running for Nova PTL again for > the Rocky release. Queens was my fourth tour and it's definitely time > for someone else to get the opportunity to lead here. I don't plan on > going anywhere and I'll be here to help with any transition needed > assuming someone else (or a couple of people hopefully) will run in > the election. It's been a great experience and I thank everyone that > has had to put up with me and my obsessive paperwork and process > disorder in the meantime. > Matt, Thanks for all you have done!  Many good things have been done during your tour! Jay From rosmaita.fossdev at gmail.com Tue Jan 23 00:07:35 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 22 Jan 2018 19:07:35 -0500 Subject: [openstack-dev] [glance] py27 gate situation Message-ID: Looks like something changed in a distro dependency over the weekend and the glance py27 gate is failing. I did a dist-upgrade in a new Ubuntu 16.04.3 vm, and was able to reproduce the failures locally. I'll continue looking, but it's EOD where I am, so I wanted to make sure this info is available to the people whose day is about to begin. The failures are confined to the py27 functional tests. Unit tests pass, as do all the py35 tests. 
The requirements team has merged a change making the cross-glance-py27 job non-voting:
https://review.openstack.org/#/c/536082/

Thus, this issue isn't holding up requirements changes, but it's still pretty urgent for us to figure out because I don't like us running around naked with respect to requirements changes that could affect glance running under py27.

Here's what I think we should do:

(1) Sean has had a patch up for a while separating out the unit tests from the functional tests. I think it's a good idea. If you are aware of a reason why they should NOT be separated, please comment on the patch:
https://review.openstack.org/#/c/474816/

I'd like to merge this soon so we can at least restore py27 unit tests to the requirements gate. We can always revert if it turns out that there is a really good reason for not separating out the functional tests.

(2) I've got a patch up that depends on Sean's patch and restores the functional test gate jobs to the glance .zuul.yaml file (though it makes the py27 functional tests non-voting):
https://review.openstack.org/#/c/536630/

(3) Continue to work on https://bugs.launchpad.net/glance/+bug/1744824 to figure out why the py27 functional tests are failing. As far as I can tell, it looks like a distro package issue.

thanks,
brian

From mnaser at vexxhost.com Tue Jan 23 01:07:14 2018
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Mon, 22 Jan 2018 20:07:14 -0500
Subject: [openstack-dev] [nova] PTL Election Season
In-Reply-To: <89c30068-c746-2a94-261b-86c5766ca4b4@gmail.com>
References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com> <89c30068-c746-2a94-261b-86c5766ca4b4@gmail.com>
Message-ID: <499B55E7-1855-4755-9934-C0A4F103630C@vexxhost.com>

> On Jan 22, 2018, at 6:46 PM, Jay S Bryant wrote:
>
>> On 1/22/2018 5:09 PM, Matt Riedemann wrote:
>>> On 1/15/2018 11:04 AM, Kendall Nelson wrote:
>>> Election details: https://governance.openstack.org/election/
>>>
>>> Please read the stipulations and timelines for candidates and electorate contained in this governance documentation.
>>>
>>> Be aware, in the PTL elections if the program only has one candidate, that candidate is acclaimed and there will be no poll. There will only be a poll if there is more than one candidate stepping forward for a program's PTL position.
>>>
>>> There will be further announcements posted to the mailing list as action is required from the electorate or candidates. This email is for information purposes only.
>>>
>>> If you have any questions which you feel affect others please reply to this email thread.
>>>
>>
>> To anyone that cares, I don't plan on running for Nova PTL again for the Rocky release. Queens was my fourth tour and it's definitely time for someone else to get the opportunity to lead here. I don't plan on going anywhere and I'll be here to help with any transition needed assuming someone else (or a couple of people hopefully) will run in the election. It's been a great experience and I thank everyone that has had to put up with me and my obsessive paperwork and process disorder in the meantime.
>>
> Matt,
>
> Thanks for all you have done! Many good things have been done during your tour!
>
> Jay
>

+1 Matt, you've been an excellent PTL for the Nova project. The work that you've helped lead with the Nova team has improved our experience with operating Nova and made our lives easier. Your constant reaching out to operators to make sure that the changes align with what we expect is great.
You'll still be around but I want to extend my personal thank you!

>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From jichenjc at cn.ibm.com Tue Jan 23 01:19:07 2018
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Tue, 23 Jan 2018 09:19:07 +0800
Subject: [openstack-dev] [nova] PTL Election Season
In-Reply-To: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
Message-ID:

Matt, I really appreciate your solid review and patience; I learned a lot from your help :)

Best Regards!

Kevin (Chen) Ji 纪 晨
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN
Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: Matt Riedemann
To: openstack-dev at lists.openstack.org
Date: 01/23/2018 07:10 AM
Subject: Re: [openstack-dev] [nova] PTL Election Season

On 1/15/2018 11:04 AM, Kendall Nelson wrote:
> Election details: https://governance.openstack.org/election/
>
> Please read the stipulations and timelines for candidates and electorate
> contained in this governance documentation.
>
> Be aware, in the PTL elections if the program only has one candidate,
> that candidate is acclaimed and there will be no poll. There will only
> be a poll if there is more than one candidate stepping forward for a
> program's PTL position.
>
> There will be further announcements posted to the mailing list as action
> is required from the electorate or candidates. This email is for
> information purposes only.
>
> If you have any questions which you feel affect others please reply to
> this email thread.
>

To anyone that cares, I don't plan on running for Nova PTL again for the Rocky release. Queens was my fourth tour and it's definitely time for someone else to get the opportunity to lead here. I don't plan on going anywhere and I'll be here to help with any transition needed assuming someone else (or a couple of people hopefully) will run in the election. It's been a great experience and I thank everyone that has had to put up with me and my obsessive paperwork and process disorder in the meantime.

--

Thanks,

Matt

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL:

From feilong at catalyst.net.nz Tue Jan 23 01:30:49 2018
From: feilong at catalyst.net.nz (Fei Long Wang)
Date: Tue, 23 Jan 2018 14:30:49 +1300
Subject: [openstack-dev] [nova] PTL Election Season
In-Reply-To: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
Message-ID: <2c8a14d5-58bb-b741-1536-b9cb9a8c5b0a@catalyst.net.nz>

Matt, thank you for all you have done for Nova. It has been a good journey working with you over the past 7 years, and I'm looking forward to meeting you at the next summit.

On 23/01/18 12:09, Matt Riedemann wrote:
> On 1/15/2018 11:04 AM, Kendall Nelson wrote:
>> Election details: https://governance.openstack.org/election/
>>
>> Please read the stipulations and timelines for candidates and
>> electorate contained in this governance documentation.
>>
>> Be aware, in the PTL elections if the program only has one candidate,
>> that candidate is acclaimed and there will be no poll. There will
>> only be a poll if there is more than one candidate stepping forward
>> for a program's PTL position.
>>
>> There will be further announcements posted to the mailing list as
>> action is required from the electorate or candidates. This email is
>> for information purposes only.
>>
>> If you have any questions which you feel affect others please reply
>> to this email thread.
>>
>
> To anyone that cares, I don't plan on running for Nova PTL again for
> the Rocky release. Queens was my fourth tour and it's definitely time
> for someone else to get the opportunity to lead here. I don't plan on
> going anywhere and I'll be here to help with any transition needed
> assuming someone else (or a couple of people hopefully) will run in
> the election. It's been a great experience and I thank everyone that
> has had to put up with me and my obsessive paperwork and process
> disorder in the meantime.
>

--
Cheers & Best regards,
Feilong Wang (王飞龙)
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--------------------------------------------------------------------------

From ghanshyammann at gmail.com Tue Jan 23 01:32:24 2018
From: ghanshyammann at gmail.com (Ghanshyam Mann)
Date: Tue, 23 Jan 2018 07:02:24 +0530
Subject: [openstack-dev] [nova] PTL Election Season
In-Reply-To:
References:
Message-ID:

On Tue, Jan 23, 2018 at 4:39 AM, Matt Riedemann wrote:
> On 1/15/2018 11:04 AM, Kendall Nelson wrote:
>> Election details: https://governance.openstack.org/election/
>>
>> Please read the stipulations and timelines for candidates and electorate
>> contained in this governance documentation.
>>
>> Be aware, in the PTL elections if the program only has one candidate, that
>> candidate is acclaimed and there will be no poll. There will only be a poll
>> if there is more than one candidate stepping forward for a program's PTL
>> position.
>>
>> There will be further announcements posted to the mailing list as action
>> is required from the electorate or candidates. This email is for information
>> purposes only.
>>
>> If you have any questions which you feel affect others please reply to
>> this email thread.
>>
>
> To anyone that cares, I don't plan on running for Nova PTL again for
> the Rocky release. Queens was my fourth tour and it's definitely time
> for someone else to get the opportunity to lead here. I don't plan on
> going anywhere and I'll be here to help with any transition needed
> assuming someone else (or a couple of people hopefully) will run in
> the election. It's been a great experience and I thank everyone that
> has had to put up with me and my obsessive paperwork and process
> disorder in the meantime.

Thanks a lot Matt for all your hard work during day and night (accommodating almost all the TZs). Yours is one of the best examples of leadership I have experienced, and I learnt a lot. Your activeness and help always made sure things were completed as planned.

-gmann

>
> --
>
> Thanks,
>
> Matt
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From soulxu at gmail.com Tue Jan 23 01:59:00 2018
From: soulxu at gmail.com (Alex Xu)
Date: Tue, 23 Jan 2018 09:59:00 +0800
Subject: [openstack-dev] [nova] PTL Election Season
In-Reply-To:
References:
Message-ID:

Matt, thanks for your leadership and help over the past years! Your obsessive paperwork is really helpful and I appreciate it.

2018-01-23 7:09 GMT+08:00 Matt Riedemann :

> On 1/15/2018 11:04 AM, Kendall Nelson wrote:
>> Election details: https://governance.openstack.org/election/
>>
>> Please read the stipulations and timelines for candidates and electorate
>> contained in this governance documentation.
>>
>> Be aware, in the PTL elections if the program only has one candidate,
>> that candidate is acclaimed and there will be no poll. There will only be a
>> poll if there is more than one candidate stepping forward for a program's
>> PTL position.
>>
>> There will be further announcements posted to the mailing list as action
>> is required from the electorate or candidates. This email is for
>> information purposes only.
>>
>> If you have any questions which you feel affect others please reply to
>> this email thread.
>>
>
> To anyone that cares, I don't plan on running for Nova PTL again for the
> Rocky release. Queens was my fourth tour and it's definitely time for
> someone else to get the opportunity to lead here. I don't plan on going
> anywhere and I'll be here to help with any transition needed assuming
> someone else (or a couple of people hopefully) will run in the election.
> It's been a great experience and I thank everyone that has had to put up
> with me and my obsessive paperwork and process disorder in the meantime.
>
> --
>
> Thanks,
>
> Matt
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From manjeet.s.bhatia at intel.com Tue Jan 23 02:09:10 2018
From: manjeet.s.bhatia at intel.com (Bhatia, Manjeet S)
Date: Tue, 23 Jan 2018 02:09:10 +0000
Subject: [openstack-dev] [neutron][l3][flavors][floatingip]
Message-ID:

Hi Neutrinos,

I am working on an L3 flavors driver implementation for the ODL backend. In the L3 flavors driver there is a need to fetch the flavor id on floatingip operations, so that if the floatingip is not for association with a router of that flavor, the driver can ignore the operation and return. But I noticed the router_id is None in the floatingip payload sent to the driver in networking-odl by neutron.

What I did was:

1. Created a router of xyz flavor.
2. Added a public-subnet interface to that router.
3. Created a floatingip on that public network.

I see a None router_id being sent in the payload [a] for the floatingip operation. I am not sure if this is intended; I think it is a bug, since otherwise I don't see another way for the L3 flavors driver to discard a floatingip operation if it is not going to be associated with a router of that flavor.

[a]. http://paste.openstack.org/show/646543/

Thanks and Regards !
Manjeet Singh Bhatia
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
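For reference, the kind of check the driver would need once the router is known could look roughly like the sketch below. Every name here except l3_plugin.get_router() (the standard neutron L3 plugin call) is hypothetical, so treat this as an illustration of the idea rather than actual networking-odl code:

OUR_FLAVOR_ID = 'xyz-flavor-id'  # hypothetical: the flavor this driver serves

def should_handle_floatingip(l3_plugin, context, fip):
    """Return True if this driver's flavor owns the FIP's router."""
    router_id = fip.get('router_id')
    if router_id is None:
        # FIP not associated with any router yet, so no flavor can be
        # determined; skip and wait for the association update.
        return False
    router = l3_plugin.get_router(context, router_id)
    return router.get('flavor_id') == OUR_FLAVOR_ID

The router_id is None case is the crux of the question above: when the payload never carries a router, a flavor driver has nothing to key off.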
From feilong at catalyst.net.nz Tue Jan 23 02:14:42 2018
From: feilong at catalyst.net.nz (Fei Long Wang)
Date: Tue, 23 Jan 2018 15:14:42 +1300
Subject: [openstack-dev] Unstable API in cloudci
Message-ID: <2982957a-6a39-d59f-b70a-344cc3613020@catalyst.net.nz>

Hi team,

I think it's important to highlight this issue because it's affecting the whole team now. Recently (basically after the holiday, or after the meltdown patching), we're experiencing an unstable cloudci. When you deploy a new cloudci env, you will see some random 500 or 503 errors from different services. And so far, based on our investigation, it's caused by Keystone, and Keystone is experiencing a weird IO error. Now it's basically blocking Magnum and Octavia testing. We need to put some effort into this.

--
Cheers & Best regards,
Feilong Wang (王飞龙)
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--------------------------------------------------------------------------

From feilong at catalyst.net.nz Tue Jan 23 02:21:38 2018
From: feilong at catalyst.net.nz (Fei Long Wang)
Date: Tue, 23 Jan 2018 15:21:38 +1300
Subject: [openstack-dev] Unstable API in cloudci
In-Reply-To: <2982957a-6a39-d59f-b70a-344cc3613020@catalyst.net.nz>
References: <2982957a-6a39-d59f-b70a-344cc3613020@catalyst.net.nz>
Message-ID: <4a33ba66-047e-aed3-fc54-37d5aab0b262@catalyst.net.nz>

Sorry, wrong mail list.

On 23/01/18 15:14, Fei Long Wang wrote:
> Hi team,
>
> I think it's important to highlight this issue because it's affecting
> the whole team now. Recently (basically after the holiday, or after the
> meltdown patching), we're experiencing an unstable cloudci. When you
> deploy a new cloudci env, you will see some random 500 or 503 errors from
> different services. And so far, based on our investigation, it's caused
> by Keystone, and Keystone is experiencing a weird IO error. Now it's
> basically blocking Magnum and Octavia testing. We need to put some
> effort into this.

--
Cheers & Best regards,
Feilong Wang (王飞龙)
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--------------------------------------------------------------------------

From feilong at catalyst.net.nz Tue Jan 23 02:56:19 2018
From: feilong at catalyst.net.nz (Fei Long Wang)
Date: Tue, 23 Jan 2018 15:56:19 +1300
Subject: [openstack-dev] [zaqar] Not run for PTL
Message-ID: <6e8813b1-c05b-e729-75dd-7c9863fd0730@catalyst.net.nz>

Hi team,

I have been working on Zaqar for more than 4 years and serving as the PTL for the past 5 cycles. I don't plan to run for Zaqar PTL again for the Rocky release. I think it's time for somebody else to lead the team for the next milestone. It has been a great experience for me and thank you for all the support from the team and the whole community. I will still be around for sure. Thank you.

--
Cheers & Best regards,
Feilong Wang (王飞龙)
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--------------------------------------------------------------------------

From yamamoto at midokura.com Tue Jan 23 03:12:25 2018
From: yamamoto at midokura.com (Takashi Yamamoto)
Date: Tue, 23 Jan 2018 12:12:25 +0900
Subject: [openstack-dev] [neutron][l3][flavors][floatingip]
In-Reply-To:
References:
Message-ID:

A floating-ip is associated with a router only if it's associated with a fixed-ip. Consider the case where there are two routers sharing a public network.

On Tue, Jan 23, 2018 at 11:09 AM, Bhatia, Manjeet S wrote:
> Hi Neutrinos,
>
> I am working on an L3 flavors driver implementation for the ODL backend.
> In the L3 flavors driver there is a need to fetch the flavor id on
> floatingip operations, so that if the floatingip is not for association
> with a router of that flavor, the driver can ignore the operation and
> return. But I noticed the router_id is None in the floatingip payload
> sent to the driver in networking-odl by neutron.
>
> What I did was:
>
> 1. Created a router of xyz flavor.
> 2. Added a public-subnet interface to that router.
> 3. Created a floatingip on that public network.
>
> I see a None router_id being sent in the payload [a] for the floatingip
> operation. I am not sure if this is intended; I think it is a bug, since
> otherwise I don't see another way for the L3 flavors driver to discard a
> floatingip operation if it is not going to be associated with a router
> of that flavor.
>
> [a]. http://paste.openstack.org/show/646543/
>
> Thanks and Regards !
> Manjeet Singh Bhatia
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
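To illustrate the point: when a floating IP is created on a shared external network, neutron cannot yet know which router will carry it, so router_id is None; it is only resolved when the FIP is associated with a port. A hedged sketch with python-neutronclient follows - the two calls are the standard v2.0 client ones, but the IDs, credentials and endpoint are hypothetical placeholders:

from neutronclient.v2_0 import client

PUBLIC_NET_ID = 'public-net-uuid'    # hypothetical network UUID
SERVER_PORT_ID = 'server-port-uuid'  # hypothetical port UUID

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# Step 1: create the FIP on the shared public network; the response
# carries router_id = None because no router is determined yet.
fip = neutron.create_floatingip(
    {'floatingip': {'floating_network_id': PUBLIC_NET_ID}})

# Step 2: associate it with a server port (fixed IP); only now does
# neutron pick the router, and router_id becomes non-None in the
# payload that the service drivers see.
neutron.update_floatingip(
    fip['floatingip']['id'],
    {'floatingip': {'port_id': SERVER_PORT_ID}})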
From ed at leafe.com Tue Jan 23 04:04:01 2018
From: ed at leafe.com (Ed Leafe)
Date: Mon, 22 Jan 2018 22:04:01 -0600
Subject: [openstack-dev] [nova] PTL Election Season
In-Reply-To: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
Message-ID: <13012136-9521-423C-A02D-6A100E2B4C7B@leafe.com>

On Jan 22, 2018, at 5:09 PM, Matt Riedemann wrote:

> To anyone that cares, I don't plan on running for Nova PTL again for the Rocky release. Queens was my fourth tour and it's definitely time for someone else to get the opportunity to lead here. I don't plan on going anywhere and I'll be here to help with any transition needed assuming someone else (or a couple of people hopefully) will run in the election. It's been a great experience and I thank everyone that has had to put up with me and my obsessive paperwork and process disorder in the meantime.

I still don't understand how anyone could do what you have done over these past two years and not a) had a stress-induced heart attack or b) gotten divorced. Thanks for the hard work!

-- Ed Leafe

From fungi at yuggoth.org Tue Jan 23 04:16:22 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 23 Jan 2018 04:16:22 +0000
Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs
In-Reply-To:
References:
Message-ID: <20180123041622.2m23r4kxhcviunmk@yuggoth.org>

On 2018-01-22 15:04:31 +0000 (+0000), Graham Hayes wrote:
> On 19/01/18 16:27, Andrea Frittoli wrote:
[...]
> > it is possible to mark tests so that team may require a +1 from
> > interop/qa when specific tests are modified.
>
> Is there? AFAIK gerrit does not do per file path permissions, so
> unless we have a job that just checks the votes on a patch, and
> passes or fails if a test changes (which would be awful for the
> teams) we cannot do that.
[...]

I read that as implying that it's possible _culturally_ to add code comments or similar to remind reviewers that they need to seek additional feedback under certain conditions. Just because a rule can't be blindly enforced with some technological solution like CI jobs or review system ACLs doesn't mean it's impossible. It does however almost certainly mean more work for (and an inevitable increase in mistakes made by) already overburdened reviewers.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:
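The "cultural" mechanism Jeremy describes is nothing more exotic than a conspicuous note next to the sensitive code. A sketch of what such a reviewer-facing marker could look like in a test module - the wording, class and test names are all invented for illustration:

class TestTokenIssue(object):

    # NOTE(interop): this test is consumed by the interop/trademark
    # verification tooling. Changing its behavior can silently change
    # what the trademark program checks, so please get an explicit +1
    # from an interop/QA reviewer before merging modifications.
    def test_token_issue(self):
        pass  # the actual assertions are irrelevant to the convention

Such a note enforces nothing by itself, which is exactly Jeremy's caveat: it only works if reviewers read it, and it adds to their load.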
From stendulker at gmail.com Tue Jan 23 05:56:34 2018
From: stendulker at gmail.com (Shivanand Tendulker)
Date: Tue, 23 Jan 2018 11:26:34 +0530
Subject: [openstack-dev] [ironic] FFE request for node rescue feature
Message-ID:

Hi

The rescue feature [1] is a high priority for ironic in Queens. The spec for the same was merged in Newton. This feature is necessary for users that lose regular access to their machine (e.g. lost passwords).

Landing the node rescue feature late in the cycle will lead to less time being available for testing, with a risk of the feature being released with defects. The code changes are fairly isolated from existing code to ensure it does not cause any regression.

The Ironic side rescue code patches are all in review [2], and are now getting positive reviews or minor negative feedback. dtantsur and TheJulia have kindly agreed to review the same during the FFE window.

[1] https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/implement-rescue-mode.html
[2] https://review.openstack.org/#/q/topic:bug/1526449+(status:open+AND+project:openstack/ironic)

Thanks and Regards,
Shiv (stendulker)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nakanishi.tomotaka at po.ntt-tx.co.jp Tue Jan 23 06:28:54 2018
From: nakanishi.tomotaka at po.ntt-tx.co.jp (=?UTF-8?B?5Lit6KW/IOaci+eUnw==?=)
Date: Tue, 23 Jan 2018 15:28:54 +0900
Subject: [openstack-dev] [nova] Add scenario tests based on multiple cells environment
In-Reply-To: <201711210837.vAL8bR47002080@ccmail03.silk.ntt-tx.co.jp>
References: <201711210837.vAL8bR47002080@ccmail03.silk.ntt-tx.co.jp>
Message-ID:

Hello

This time we plan to post multiple patches. In order to put them together, we created a BP.

I posted a test verifying the operation of the existing Nova API in a multi-cell environment. Please review.
https://review.openstack.org/#/c/534116/

--
Tomotaka Nakanishi
E-Mail : nakanishi.tomotaka at po.ntt-tx.co.jp

On 2017/11/23 3:58, Matt Riedemann wrote:
> On 11/21/2017 2:37 AM, koshiya maho wrote:
>> Hi, all
>>
>> Multiple cells (Nova-Cells-v2) is supported in the Pike release.
>> It is necessary to confirm that existing APIs work appropriately in the multiple cells environment.
>> We will post multiple patches, so I created BluePrint[1] to make it easier to keep track of those patches.
>> Please check the contents and approve it.
>>
>> [1] https://blueprints.launchpad.net/nova/+spec/add-multiple-cells-scenario-tests
>>
>> Best regards,
>> --
>> Maho Koshiya
>> E-Mail : koshiya.maho at po.ntt-tx.co.jp
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> We don't really need a blueprint for this work. It would be good to know what gaps in existing testing you think exist. And where do you plan on implementing these tests? In the nova/tests/functional tree or somewhere else? We already have a lot of tests which are using a CellDatabase fixture which allows us to create multiple cell mappings for tests in the API.
>
> If you're considering Tempest, tests there wouldn't really be appropriate because to the end user of the API, they should have no idea if they are talking to a cloud with multiple cells or not, since it's really a deployment issue.
>
> What we don't have today in our CI testing, and that we need someone to work on, is running a devstack multi-node setup with at least two cells. This likely requires some work in the devstack-gate repo to configure devstack per node to tell it which cell it is.
>
> I encourage you to bring this up in a weekly cells v2 meeting for further discussion:
>
> http://eavesdrop.openstack.org/#Nova_Cellsv2_Meeting
>

From prometheanfire at gentoo.org Tue Jan 23 07:23:50 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Tue, 23 Jan 2018 01:23:50 -0600
Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared
Message-ID: <20180123072350.2jby5zoeeyzaryv5@gentoo.org>

Requirements is freezing Friday at 23:59:59 UTC, so any last global-requirements updates that need to get in need to get in now.
I'm afraid that my condition has left me cold to your pleas of mercy. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From Dinesh.Bhor at nttdata.com Tue Jan 23 07:56:14 2018 From: Dinesh.Bhor at nttdata.com (Bhor, Dinesh) Date: Tue, 23 Jan 2018 07:56:14 +0000 Subject: [openstack-dev] [masakari] Change service-type from "ha" to "instance-ha" Message-ID: Hi Masakari team, Below are the patches up for review to change the masakari service-type from “ha” to “instance-ha”: openstack/python-masakariclient : https://review.openstack.org/#/c/536666/1 openstack/masakari-monitors : https://review.openstack.org/#/c/536668/ openstack/masakari : https://review.openstack.org/#/c/536653/1 openstack/service-types-authority : https://review.openstack.org/#/c/534875/ We should go with below order to fix this issue: 1. Merge the python-masakariclient patch. 2. Release newer version of python-masakariclient library. 3. Bump the newer version of python-masakariclient to global-requirements. * The requirements freeze is near (coming Friday at 23:59:59 UTC) * http://lists.openstack.org/pipermail/openstack-dev/2018-January/126475.html 4. Bot job will propose a patch in masakari-monitors to update the python-masakariclient version. 5. Merge the bot job proposed patch. 6. Merge the masakari-monitors service-type patch. 7. Merge the masakari service-type patch. 8. Merge openstack/service-types-authority patch with updated service-type “instance-ha”. Please help to merge these patches asap. Thank you, Dinesh Bhor ______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sxmatch1986 at gmail.com Tue Jan 23 08:10:51 2018 From: sxmatch1986 at gmail.com (hao wang) Date: Tue, 23 Jan 2018 16:10:51 +0800 Subject: [openstack-dev] [zaqar] Not run for PTL In-Reply-To: <6e8813b1-c05b-e729-75dd-7c9863fd0730@catalyst.net.nz> References: <6e8813b1-c05b-e729-75dd-7c9863fd0730@catalyst.net.nz> Message-ID: Thanks Feilong, it's very great to work together with you ! 2018-01-23 10:56 GMT+08:00 Fei Long Wang : > Hi team, > > I have been working on Zaqar for more than 4 years and serving the PTL > for the past 5 cycles. I don't plan to run for Zaqar PTL again for the > Rocky release. I think it's time for somebody else to lead the team for > next milestone. It has been a great experience for me and thank you for > all the support from the team and the whole community. I will still be > around for sure. Thank you. 
> > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From natsume.takashi at lab.ntt.co.jp Tue Jan 23 08:14:45 2018 From: natsume.takashi at lab.ntt.co.jp (Takashi Natsume) Date: Tue, 23 Jan 2018 17:14:45 +0900 Subject: [openstack-dev] [nova] API reference verification for servers APIs (parameter and example) Message-ID: Hi, Nova developers. There was work verifying the Nova compute API Reference (*1, *2, *3, *4) before, but the verification has not yet been completed for the "Servers" APIs (creating, updating, deleting a server, listing servers, etc.). *1: Convert API Reference to RST and host it in the Nova tree (partial) https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst *2: Convert API Reference to RST and host it in the Nova tree (Ocata) https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst-ocata *3: Convert API Reference to RST and host it in the Nova tree (Pike) https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst-pike *4: NovaAPIRef https://wiki.openstack.org/wiki/NovaAPIRef I submitted the following patches to complete parameter and example verification in the "Servers" APIs. api-ref: Parameter verification for servers.inc https://review.openstack.org/#/c/528201/ api-ref: Example verification for servers.inc https://review.openstack.org/#/c/529520/ Regards, Takashi Natsume NTT Software Innovation Center E-mail: natsume.takashi at lab.ntt.co.jp From rico.lin.guanyu at gmail.com Tue Jan 23 09:03:59 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 23 Jan 2018 17:03:59 +0800 Subject: [openstack-dev] [heat] Move meeting time to 14:00 UTC weekly Message-ID: Hi team As previously discussed in the meeting, in order to cover as many members as possible with the meeting schedule, we will move our meeting time one hour later. We have kept looking for a time that as many members as possible can join, and it appears the current meeting time isn't ideal. So starting this week, our meeting will move to 14:00 UTC weekly on Wednesday, which is exactly one hour later than the current schedule. Feel free to give any feedback; otherwise, see you around 14:00 UTC on Wednesday. -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From wangpeihuixyz at 126.com Tue Jan 23 09:08:32 2018 From: wangpeihuixyz at 126.com (Frank Wang) Date: Tue, 23 Jan 2018 17:08:32 +0800 (CST) Subject: [openstack-dev] [Neutron]The components timing problem Message-ID: <7f65a870.834d.1612246adcb.Coremail.wangpeihuixyz@126.com> Hi All, I'm really a newbie with OpenStack Neutron, so please correct me if I say something wrong. There is a question I'd like to ask. AMQP is the messaging bus between neutron-server and the *-agents; we usually use rabbitmq as the back end of the messaging bus. The problem I encountered is that the ovs agent raises an exception while reporting its own state to the server.
Here is my guess: if I restart the controller node, what if rabbitmq starts earlier than neutron-server? I mean, the ovs agent keeps trying to connect to rabbitmq, and it will report state to the server through RPC once the connection is established. If the server is not ready at that time, does that cause the agent exception? Any suggestion would be greatly appreciated! Thanks, Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From saverio.proto at switch.ch Tue Jan 23 09:21:37 2018 From: saverio.proto at switch.ch (Saverio Proto) Date: Tue, 23 Jan 2018 10:21:37 +0100 Subject: Re: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID In-Reply-To: <1516659378-sup-8232@lrrr.local> References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> <1516293565-sup-9123@lrrr.local> <1516295114-sup-7111@lrrr.local> <1516630943-sup-4108@lrrr.local> <1516659378-sup-8232@lrrr.local> Message-ID: <7b4c5530-55e9-2590-1b67-74b5ff938ef9@switch.ch> Hello Doug, I have run the script; here is my output: http://paste.openstack.org/show/650913/ At this point I have some questions. Can I upgrade just the oslo.log library, keeping the rest of the stack on Newton? The versions of oslo.log follow a different numbering scheme than the other openstack projects, so I cannot easily work out version compatibility. As far as I understand, 3.34.0 should be enough for me?: git tag --contains 1b012d0fc6811f00e032e52ed586fe37e157584d 3.34.0 3.35.0 3.36.0 thank you Saverio On 22.01.18 23:20, Doug Hellmann wrote: > Excerpts from Saverio Proto's message of 2018-01-22 18:45:15 +0100: >> Hello Doug, >> >> in the extra session I see just {"project": "unknown", "version": "unknown"} >> >> here a full line from nova-api: >> >> {"thread_name": "MainThread", "extra": {"project": "unknown", "version": >> "unknown"}, "process": 31142, "relative_created": 3459415335.4091644, >> "module": "wsgi", "message": >> "2001:xxx:xxxx:8100::80,2001:xxx:xxxx:81ff::b0 \"GET >> /v2/64b5b50eb21d4efe9783eb1d81a9ec65/os-services HTTP/1.1\" status: 200 >> len: 1812 time: 0.1813300", "hostname": "nova-0", "filename": "wsgi.py", >> "levelno": 20, "lineno": 555, "asctime": "2018-01-22 18:37:02,312", >> "msg": "2001:xxx:xxxx:8100::80,2001:xxx:xxxx:81ff::b0 \"GET >> /v2/64b5b50eb21d4efe9783eb1d81a9ec65/os-services HTTP/1.1\" status: 200 >> len: 1812 time: 0.1813300", "args": [], "process_name": "MainProcess", >> "name": "nova.osapi_compute.wsgi.server", "thread": 140414249163824, >> "created": 1516642622.312235, "traceback": null, "msecs": >> 312.23511695861816, "funcname": "handle_one_response", "pathname": >> "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", "levelname": "INFO"} > It looks like you're running into a limitation of the older version of > the library where the context was only logged from openstack source > code. This particular log message is coming from the eventlet library. > > Try running the script below and saving the output to a pastebin.
> > Under the newton version of oslo.log, I get > http://paste.openstack.org/show/650566/ and under the queens version I > get http://paste.openstack.org/show/650569/ which shows me that the > "extra" handling is working more or less the same way but the "context" > handling is improved in the newer version (lots of the values are null > because I don't fully set up the context, but the request_id field has a > valid value). > > Doug > > > #!/usr/bin/env python > > from __future__ import print_function > > import logging > > from oslo_context import context > from oslo_log import formatters, log > > > ch = logging.StreamHandler() > ch.setLevel(logging.DEBUG) > > formatter = formatters.JSONFormatter() > ch.setFormatter(formatter) > > LOG = logging.getLogger() > LOG.setLevel(logging.DEBUG) > LOG.addHandler(ch) > > ctx = context.RequestContext(request_id='the-request-id') > > LOG.debug('without extra') > print() > > LOG.debug('with extra', extra={'context': ctx}) > print() > > log.getLogger().debug('via KeywordArgumentAdapter', context=ctx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- SWITCH Saverio Proto, Peta Solutions Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 15, direct +41 44 268 1573 saverio.proto at switch.ch, http://www.switch.ch http://www.switch.ch/stories From slawek at kaplonski.pl Tue Jan 23 09:27:52 2018 From: slawek at kaplonski.pl (=?utf-8?B?U8WCYXdvbWlyIEthcMWCb8WEc2tp?=) Date: Tue, 23 Jan 2018 10:27:52 +0100 Subject: [openstack-dev] [Neutron]The components timing problem In-Reply-To: <7f65a870.834d.1612246adcb.Coremail.wangpeihuixyz@126.com> References: <7f65a870.834d.1612246adcb.Coremail.wangpeihuixyz@126.com> Message-ID: Hi, If both ovs agent and neutron-server reconnect to rabbitmq then it should report state properly again IMO. Can You maybe send more details about Your issue? What OpenStack version You are running, exact stack trace of exception which You get and so on. — Best regards Slawek Kaplonski slawek at kaplonski.pl > Wiadomość napisana przez Frank Wang w dniu 23.01.2018, o godz. 10:08: > > Hi All, > > I'm really newbie about OpenStack Neutron, Please correct me if I say something wrong. There was a question I'd like to consult. AMQP is the messaging bus between neutron-server and *agents. we usually use rabbitmq as the back-end of messaging bus. The problem I encountered is the ovs agent raise an exception while reporting its own state to the server. Here is my guess, If I restart the controller node, what if the rabbitmq start early than neutron-server. I mean the ovs agent always trying to connect to the rabbitmq. It will report state to the server through RPC once the connection established. if the server is not ready at this time. Does it cause the agent exception? Any suggestion would be greatly appreciated! 
> > > Thanks, > Frank > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From yikunkero at gmail.com Tue Jan 23 09:34:41 2018 From: yikunkero at gmail.com (Yikun Jiang) Date: Tue, 23 Jan 2018 17:34:41 +0800 Subject: [openstack-dev] [nova] PTL Election Season In-Reply-To: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com> References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com> Message-ID: Matt, Thanks for your all works. As a beginner of Nova upstream player, really appreciate your patient review and warm help. : ) Regards, Yikun ---------------------------------------- Jiang Yikun(Kero) Mail: yikunkero at gmail.com 2018-01-23 7:09 GMT+08:00 Matt Riedemann : > On 1/15/2018 11:04 AM, Kendall Nelson wrote: > >> Election details: https://governance.openstack.org/election/ >> >> Please read the stipulations and timelines for candidates and electorate >> contained in this governance documentation. >> >> Be aware, in the PTL elections if the program only has one candidate, >> that candidate is acclaimed and there will be no poll. There will only be a >> poll if there is more than one candidate stepping forward for a program's >> PTL position. >> >> There will be further announcements posted to the mailing list as action >> is required from the electorate or candidates. This email is for >> information purposes only. >> >> If you have any questions which you feel affect others please reply to >> this email thread. >> >> > To anyone that cares, I don't plan on running for Nova PTL again for the > Rocky release. Queens was my fourth tour and it's definitely time for > someone else to get the opportunity to lead here. I don't plan on going > anywhere and I'll be here to help with any transition needed assuming > someone else (or a couple of people hopefully) will run in the election. > It's been a great experience and I thank everyone that has had to put up > with me and my obsessive paperwork and process disorder in the meantime. > > -- > > Thanks, > > Matt > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhengzhenyulixi at gmail.com Tue Jan 23 09:36:54 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Tue, 23 Jan 2018 17:36:54 +0800 Subject: [openstack-dev] [nova] PTL Election Season In-Reply-To: References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com> Message-ID: Thanks for all the help in these cycles :) On Tue, Jan 23, 2018 at 5:34 PM, Yikun Jiang wrote: > Matt, Thanks for your all works. > As a beginner of Nova upstream player, really appreciate your patient > review and warm help. : ) > > Regards, > Yikun > ---------------------------------------- > Jiang Yikun(Kero) > Mail: yikunkero at gmail.com > > 2018-01-23 7:09 GMT+08:00 Matt Riedemann : > >> On 1/15/2018 11:04 AM, Kendall Nelson wrote: >> >>> Election details: https://governance.openstack.org/election/ >>> >>> Please read the stipulations and timelines for candidates and electorate >>> contained in this governance documentation. 
>>> >>> Be aware, in the PTL elections if the program only has one candidate, >>> that candidate is acclaimed and there will be no poll. There will only be a >>> poll if there is more than one candidate stepping forward for a program's >>> PTL position. >>> >>> There will be further announcements posted to the mailing list as action >>> is required from the electorate or candidates. This email is for >>> information purposes only. >>> >>> If you have any questions which you feel affect others please reply to >>> this email thread. >>> >>> >> To anyone that cares, I don't plan on running for Nova PTL again for the >> Rocky release. Queens was my fourth tour and it's definitely time for >> someone else to get the opportunity to lead here. I don't plan on going >> anywhere and I'll be here to help with any transition needed assuming >> someone else (or a couple of people hopefully) will run in the election. >> It's been a great experience and I thank everyone that has had to put up >> with me and my obsessive paperwork and process disorder in the meantime. >> >> -- >> >> Thanks, >> >> Matt >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wangpeihuixyz at 126.com Tue Jan 23 09:44:52 2018 From: wangpeihuixyz at 126.com (Frank Wang) Date: Tue, 23 Jan 2018 17:44:52 +0800 (CST) Subject: [openstack-dev] [Neutron]The components timing problem In-Reply-To: References: <7f65a870.834d.1612246adcb.Coremail.wangpeihuixyz@126.com> Message-ID: Hi Sławomir, Thanks for you quick response. I'm using Ocata, Here is the exception stack. e: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None 2018-01-23 17:37:45.387 8636 ERROR oslo.messaging._drivers.impl_rabbit [-] [7b611083-b03c-4aaa-bb2f-a8f6cb89ab15] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None 2018-01-23 17:37:45.395 8636 ERROR oslo.messaging._drivers.impl_rabbit [-] [bc243998-c484-47b3-8a5d-59104470c441] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None 2018-01-23 17:37:45.402 8636 ERROR oslo.messaging._drivers.impl_rabbit [-] [f86f1d05-6f72-443a-967e-581c9c3af656] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None 2018-01-23 17:37:51.927 8636 ERROR oslo.messaging._drivers.impl_rabbit [-] [4c876705-fd92-4b32-8c45-98ff1ed75846] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Failed reporting state! 
2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last): 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 312, in _report_state 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent True) 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 87, in report_state 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return method(context, 'report_state', **kwargs) 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 169, in call 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent retry=self.retry) 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 97, in _send 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent timeout=timeout, retry=retry) 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 458, in send 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent retry=retry) 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 447, in _send 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent result = self._waiter.wait(msg_id, timeout) 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 339, in wait 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent message = self.waiters.get(msg_id, timeout=timeout) 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 238, in get 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 'to message ID %s' % msg_id) 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent MessagingTimeout: Timed out waiting for a reply to message ID 5907b7ced96140c693a6fb6dd0698dbc 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 2018-01-23 17:37:53.808 8636 WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state' run outlasted interval by 30.00 sec 2018-01-23 17:37:59.227 8636 INFO oslo.messaging._drivers.impl_rabbit [-] [3e7c487b-9222-4eee-b879-a7981649d49c] Reconnected to AMQP server on 127.0.0.1:5672 via [amqp] client with port 51476. 
2018-01-23 17:37:59.270 8636 INFO oslo.messaging._drivers.impl_rabbit [-] [a6262917-f316-43df-83fd-05af30a2b302] Reconnected to AMQP server on 127.0.0.1:5672 via [amqp] client with port 51478. 2018-01-23 17:37:59.308 8636 INFO oslo.messaging._drivers.impl_rabbit [-] [99ff11c3-1662-461 Thanks, Frank At 2018-01-23 17:27:52, "Sławomir Kapłoński" wrote: >Hi, > >If both ovs agent and neutron-server reconnect to rabbitmq then it should report state properly again IMO. >Can You maybe send more details about Your issue? What OpenStack version You are running, exact stack trace of exception which You get and so on. > >— >Best regards >Slawek Kaplonski >slawek at kaplonski.pl > > > >> Wiadomość napisana przez Frank Wang w dniu 23.01.2018, o godz. 10:08: >> >> Hi All, >> >> I'm really newbie about OpenStack Neutron, Please correct me if I say something wrong. There was a question I'd like to consult. AMQP is the messaging bus between neutron-server and *agents. we usually use rabbitmq as the back-end of messaging bus. The problem I encountered is the ovs agent raise an exception while reporting its own state to the server. Here is my guess, If I restart the controller node, what if the rabbitmq start early than neutron-server. I mean the ovs agent always trying to connect to the rabbitmq. It will report state to the server through RPC once the connection established. if the server is not ready at this time. Does it cause the agent exception? Any suggestion would be greatly appreciated! >> >> >> Thanks, >> Frank >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Jan 23 09:46:06 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 23 Jan 2018 09:46:06 +0000 (GMT) Subject: [openstack-dev] [nova] PTL Election Season In-Reply-To: <13012136-9521-423C-A02D-6A100E2B4C7B@leafe.com> References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com> <13012136-9521-423C-A02D-6A100E2B4C7B@leafe.com> Message-ID: On Mon, 22 Jan 2018, Ed Leafe wrote: > I still don't understand how anyone could do what you have done > over these past two years and not a) had a stress-induced heart > attack or b) gotten divorced. Indeed, Matt has been an amazing PTL. Thank you Matt for all your hard work. The quality of your attention is extraordinary. But anybody considering being the PTL for Nova should not use the volume that Matt worked as a model for how to do it. That's not behavior any of us should encourage or accept. Sure, we might say "it's open source, it's a lifestyle" but every style of life needs some balance. From the outside, it seemed to work okay for Matt but if it is used as a precedent that would not be good. Delegation is the usual strategy that people mention to avoid overcommitting but this can only work, in the current Nova model, if there are sufficient people around continuously, day in, day out, who can maintain the necessary awareness of the big picture. 
Might be better to start considering additional ways in which Nova can be decomposed. -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From slawek at kaplonski.pl Tue Jan 23 09:58:58 2018 From: slawek at kaplonski.pl (Sławomir Kapłoński) Date: Tue, 23 Jan 2018 10:58:58 +0100 Subject: Re: [openstack-dev] [Neutron]The components timing problem In-Reply-To: References: <7f65a870.834d.1612246adcb.Coremail.wangpeihuixyz@126.com> Message-ID: <59902E67-8E91-435A-8D9A-17CF68049635@kaplonski.pl> Hi, The problem you have here is that your agent can't connect to rabbitmq. Because of that, the RPC message with report_state times out, and that is the reason for this error. Is your rabbitmq server on the same host as the ovs agent? Is the rabbitmq server working properly? — Best regards Slawek Kaplonski slawek at kaplonski.pl > Wiadomość napisana przez Frank Wang w dniu 23.01.2018, o godz. 10:44: > > Hi Sławomir, > > Thanks for you quick response. I'm using Ocata, Here is the exception stack. > e: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None > 2018-01-23 17:37:45.387 8636 ERROR oslo.messaging._drivers.impl_rabbit [-] [7b611083-b03c-4aaa-bb2f-a8f6cb89ab15] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None > 2018-01-23 17:37:45.395 8636 ERROR oslo.messaging._drivers.impl_rabbit [-] [bc243998-c484-47b3-8a5d-59104470c441] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None > 2018-01-23 17:37:45.402 8636 ERROR oslo.messaging._drivers.impl_rabbit [-] [f86f1d05-6f72-443a-967e-581c9c3af656] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None > 2018-01-23 17:37:51.927 8636 ERROR oslo.messaging._drivers.impl_rabbit [-] [4c876705-fd92-4b32-8c45-98ff1ed75846] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Failed reporting state!
> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last): > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 312, in _report_state > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent True) > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 87, in report_state > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return method(context, 'report_state', **kwargs) > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 169, in call > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent retry=self.retry) > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 97, in _send > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent timeout=timeout, retry=retry) > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 458, in send > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent retry=retry) > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 447, in _send > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent result = self._waiter.wait(msg_id, timeout) > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 339, in wait > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent message = self.waiters.get(msg_id, timeout=timeout) > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 238, in get > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 'to message ID %s' % msg_id) > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent MessagingTimeout: Timed out waiting for a reply to message ID 5907b7ced96140c693a6fb6dd0698dbc > 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent > 2018-01-23 17:37:53.808 8636 WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state' run outlasted interval by 30.00 sec > 2018-01-23 17:37:59.227 8636 INFO oslo.messaging._drivers.impl_rabbit [-] [3e7c487b-9222-4eee-b879-a7981649d49c] Reconnected to AMQP server on 127.0.0.1:5672 via [amqp] client with port 51476. 
> 2018-01-23 17:37:59.270 8636 INFO oslo.messaging._drivers.impl_rabbit [-] [a6262917-f316-43df-83fd-05af30a2b302] Reconnected to AMQP server on 127.0.0.1:5672 via [amqp] client with port 51478. > 2018-01-23 17:37:59.308 8636 INFO oslo.messaging._drivers.impl_rabbit [-] [99ff11c3-1662-461 > > Thanks, > Frank > At 2018-01-23 17:27:52, "Sławomir Kapłoński" wrote: > >Hi, > > > >If both ovs agent and neutron-server reconnect to rabbitmq then it should report state properly again IMO. > >Can You maybe send more details about Your issue? What OpenStack version You are running, exact stack trace of exception which You get and so on. > > > >— > >Best regards > >Slawek Kaplonski > >slawek at kaplonski.pl > > > > > > > >> Wiadomość napisana przez Frank Wang w dniu 23.01.2018, o godz. 10:08: > >> > >> Hi All, > >> > >> I'm really newbie about OpenStack Neutron, Please correct me if I say something wrong. There was a question I'd like to consult. AMQP is the messaging bus between neutron-server and *agents. we usually use rabbitmq as the back-end of messaging bus. The problem I encountered is the ovs agent raise an exception while reporting its own state to the server. Here is my guess, If I restart the controller node, what if the rabbitmq start early than neutron-server. I mean the ovs agent always trying to connect to the rabbitmq. It will report state to the server through RPC once the connection established. if the server is not ready at this time. Does it cause the agent exception? Any suggestion would be greatly appreciated! > >> > >> > >> Thanks, > >> Frank > >> > >> > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > >__________________________________________________________________________ > >OpenStack Development Mailing List (not for usage questions) > >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From john at johngarbutt.com Tue Jan 23 10:00:02 2018 From: john at johngarbutt.com (John Garbutt) Date: Tue, 23 Jan 2018 10:00:02 +0000 Subject: [openstack-dev] [nova] PTL Election Season In-Reply-To: <13012136-9521-423C-A02D-6A100E2B4C7B@leafe.com> References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com> <13012136-9521-423C-A02D-6A100E2B4C7B@leafe.com> Message-ID: On 23 January 2018 at 04:04, Ed Leafe wrote: > On Jan 22, 2018, at 5:09 PM, Matt Riedemann wrote: > > To anyone that cares, I don't plan on running for Nova PTL again for the > Rocky release. Queens was my fourth tour and it's definitely time for > someone else to get the opportunity to lead here. I don't plan on going > anywhere and I'll be here to help with any transition needed assuming > someone else (or a couple of people hopefully) will run in the election. > It's been a great experience and I thank everyone that has had to put up > with me and my obsessive paperwork and process disorder in the meantime. 
> > I still don't understand how anyone could do what you have done over these > past two years and not a) had a stress-induced heart attack or b) gotten > divorced. > ++ Great work and amazing sticking power. I know I hit a brick wall after two seasons. johnthetubaguy -------------- next part -------------- An HTML attachment was scrubbed... URL: From surya.seetharaman9 at gmail.com Tue Jan 23 10:02:58 2018 From: surya.seetharaman9 at gmail.com (Surya Seetharaman) Date: Tue, 23 Jan 2018 11:02:58 +0100 Subject: [openstack-dev] [nova] PTL Election Season In-Reply-To: References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com> <13012136-9521-423C-A02D-6A100E2B4C7B@leafe.com> Message-ID: Thanks Matt, for patiently reviewing our patches, helping newbies like me integrate well into the Nova team and for being an amazing PTL. Best Regards, Surya. On Tue, Jan 23, 2018 at 11:00 AM, John Garbutt wrote: > On 23 January 2018 at 04:04, Ed Leafe wrote: > >> On Jan 22, 2018, at 5:09 PM, Matt Riedemann wrote: >> > To anyone that cares, I don't plan on running for Nova PTL again for >> the Rocky release. Queens was my fourth tour and it's definitely time for >> someone else to get the opportunity to lead here. I don't plan on going >> anywhere and I'll be here to help with any transition needed assuming >> someone else (or a couple of people hopefully) will run in the election. >> It's been a great experience and I thank everyone that has had to put up >> with me and my obsessive paperwork and process disorder in the meantime. >> >> I still don't understand how anyone could do what you have done over >> these past two years and not a) had a stress-induced heart attack or b) >> gotten divorced. >> > > ++ > Great work and amazing sticking power. > I know I hit a brick wall after two seasons. > > johnthetubaguy > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Regards, Surya. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Tue Jan 23 10:04:34 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 23 Jan 2018 11:04:34 +0100 Subject: [openstack-dev] [ironic] FFE request for node traits In-Reply-To: References: Message-ID: <325b156c-48c0-9b8d-6560-81867c2c4835@redhat.com> +1 on keeping moving forward with it. that's important for future nova work, as well as our deploy steps work. On 01/22/2018 10:11 PM, Mark Goddard wrote: > The node traits feature [1] is an essential priority for ironic in Queens, and > is an important step in the continuing evolution of scheduling enabled by the > placement API. Traits will allow us to move away from capability-based > scheduling. Capabilities have several limitations for scheduling including > depending on filters in nova-scheduler rather than allowing placement to select > matching hosts. Several upcoming features depend on traits [2]. > > Landing node traits late in the cycle will lead to less time being available for > testing, with a risk that the feature is release with defects. There are changes > at most major levels in the code except the drivers, but these are for the most > part fairly isolated from existing code. 
The current issues with the grenade CI > job mean that upgrade code paths are not being exercised frequently, and could > lead to additional test/bug fix load on the team later in the cycle. The node > traits code patches are all in review [3], and are now generally getting > positive reviews or minor negative feedback. > > rloo and TheJulia have kindly offered to review during the FFE window. > > [1] > http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/node-traits.html > [2] > https://review.openstack.org/#/c/504952/7/specs/approved/config-template-traits.rst > [3] https://review.openstack.org/#/q/topic:bug/1722194+(status:open) > > Thanks, > Mark (mgoddard) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From balazs.gibizer at ericsson.com Tue Jan 23 10:06:30 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 23 Jan 2018 11:06:30 +0100 Subject: [openstack-dev] [nova] PTL Election Season In-Reply-To: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com> References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com> Message-ID: <1516701990.3778.8@smtp.office365.com> On Tue, Jan 23, 2018 at 12:09 AM, Matt Riedemann wrote: > On 1/15/2018 11:04 AM, Kendall Nelson wrote: >> Election details: https://governance.openstack.org/election/ >> >> Please read the stipulations and timelines for candidates and >> electorate contained in this governance documentation. >> >> Be aware, in the PTL elections if the program only has one >> candidate, that candidate is acclaimed and there will be no poll. >> There will only be a poll if there is more than one candidate >> stepping forward for a program's PTL position. >> >> There will be further announcements posted to the mailing list as >> action is required from the electorate or candidates. This email is >> for information purposes only. >> >> If you have any questions which you feel affect others please reply >> to this email thread. >> > > To anyone that cares, I don't plan on running for Nova PTL again for > the Rocky release. Queens was my fourth tour and it's definitely time > for someone else to get the opportunity to lead here. I don't plan on > going anywhere and I'll be here to help with any transition needed > assuming someone else (or a couple of people hopefully) will run in > the election. It's been a great experience and I thank everyone that > has had to put up with me and my obsessive paperwork and process > disorder in the meantime. > > -- > > Thanks, > > Matt Thank you Matt! You did an excellent job and helped the whole community to grow. Cheers, gibi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dtantsur at redhat.com Tue Jan 23 10:15:11 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 23 Jan 2018 11:15:11 +0100 Subject: [openstack-dev] [ironic] FFE request for node rescue feature In-Reply-To: References: Message-ID: <2cc426ca-4133-c272-034e-fd2151f98b6b@redhat.com> I'm +1 on this, because the feature has been proposed for a while (has changed the contributor group at least once) and is needed for feature parity with virtual machines in nova. On 01/23/2018 06:56 AM, Shivanand Tendulker wrote: > Hi > > The rescue feature [1] is an high priority for ironic in Queens. The spec for > the same was merged in Newton. This feature is necessary for users that lose > regular access to their machine (e.g. lost passwords). > > Landing node rescue feature late in the cycle will lead to less time being > available for testing, with a risk that the feature being released with defects. > The code changes are fairly isolated from existing code to ensure it does not > cause any regression. The Ironic side rescue code patches are all in review [2], > and are now are getting positive reviews or minor negative feedback. > > dtantsur and TheJulia have kindly agreed to review the same during the FFE window. > > [1] > https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/implement-rescue-mode.html > [2] > https://review.openstack.org/#/q/topic:bug/1526449+(status:open+AND+project:openstack/ironic) > > Thanks and Regards, > Shiv (stendulker) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From wangpeihuixyz at 126.com Tue Jan 23 10:17:00 2018 From: wangpeihuixyz at 126.com (Frank Wang) Date: Tue, 23 Jan 2018 18:17:00 +0800 (CST) Subject: [openstack-dev] [Neutron]The components timing problem In-Reply-To: <59902E67-8E91-435A-8D9A-17CF68049635@kaplonski.pl> References: <7f65a870.834d.1612246adcb.Coremail.wangpeihuixyz@126.com> <59902E67-8E91-435A-8D9A-17CF68049635@kaplonski.pl> Message-ID: <343b2b2f.8f55.16122855b55.Coremail.wangpeihuixyz@126.com> Hi, Yeah, rabbitmq server on the same host. rabbitmq service was stopped a little while to try to reproduce a problem. Do you mean report_state always running even the rabbitmq connection is down? It sounds unreasonable. Thanks, Frank At 2018-01-23 17:58:58, "Sławomir Kapłoński" wrote: >Hi, > >Problem which You have here is that Your agent can't connect to rabbitmq. Because of that rpc message with report_state is timeouted and that is the reason of this error. >Is Your rabbitmq server on same host with ovs agent? Is rabbitmq server works properly? > >— >Best regards >Slawek Kaplonski >slawek at kaplonski.pl > > > >> Wiadomość napisana przez Frank Wang w dniu 23.01.2018, o godz. 10:44: >> >> Hi Sławomir, >> >> Thanks for you quick response. I'm using Ocata, Here is the exception stack. >> e: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None >> 2018-01-23 17:37:45.387 8636 ERROR oslo.messaging._drivers.impl_rabbit [-] [7b611083-b03c-4aaa-bb2f-a8f6cb89ab15] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None >> 2018-01-23 17:37:45.395 8636 ERROR oslo.messaging._drivers.impl_rabbit [-] [bc243998-c484-47b3-8a5d-59104470c441] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. 
Trying again in 14 seconds. Client port: None >> 2018-01-23 17:37:45.402 8636 ERROR oslo.messaging._drivers.impl_rabbit [-] [f86f1d05-6f72-443a-967e-581c9c3af656] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None >> 2018-01-23 17:37:51.927 8636 ERROR oslo.messaging._drivers.impl_rabbit [-] [4c876705-fd92-4b32-8c45-98ff1ed75846] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 14 seconds. Client port: None >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Failed reporting state! >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last): >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 312, in _report_state >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent True) >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 87, in report_state >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return method(context, 'report_state', **kwargs) >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 169, in call >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent retry=self.retry) >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 97, in _send >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent timeout=timeout, retry=retry) >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 458, in send >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent retry=retry) >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 447, in _send >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent result = self._waiter.wait(msg_id, timeout) >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 339, in wait >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent message = self.waiters.get(msg_id, timeout=timeout) >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 238, in get >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 'to message ID %s' % msg_id) >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
MessagingTimeout: Timed out waiting for a reply to message ID 5907b7ced96140c693a6fb6dd0698dbc >> 2018-01-23 17:37:53.807 8636 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent >> 2018-01-23 17:37:53.808 8636 WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state' run outlasted interval by 30.00 sec >> 2018-01-23 17:37:59.227 8636 INFO oslo.messaging._drivers.impl_rabbit [-] [3e7c487b-9222-4eee-b879-a7981649d49c] Reconnected to AMQP server on 127.0.0.1:5672 via [amqp] client with port 51476. >> 2018-01-23 17:37:59.270 8636 INFO oslo.messaging._drivers.impl_rabbit [-] [a6262917-f316-43df-83fd-05af30a2b302] Reconnected to AMQP server on 127.0.0.1:5672 via [amqp] client with port 51478. >> 2018-01-23 17:37:59.308 8636 INFO oslo.messaging._drivers.impl_rabbit [-] [99ff11c3-1662-461 >> >> Thanks, >> Frank >> At 2018-01-23 17:27:52, "Sławomir Kapłoński" wrote: >> >Hi, >> > >> >If both ovs agent and neutron-server reconnect to rabbitmq then it should report state properly again IMO. >> >Can You maybe send more details about Your issue? What OpenStack version You are running, exact stack trace of exception which You get and so on. >> > >> >— >> >Best regards >> >Slawek Kaplonski >> >slawek at kaplonski.pl >> > >> > >> > >> >> Wiadomość napisana przez Frank Wang w dniu 23.01.2018, o godz. 10:08: >> >> >> >> Hi All, >> >> >> >> I'm really newbie about OpenStack Neutron, Please correct me if I say something wrong. There was a question I'd like to consult. AMQP is the messaging bus between neutron-server and *agents. we usually use rabbitmq as the back-end of messaging bus. The problem I encountered is the ovs agent raise an exception while reporting its own state to the server. Here is my guess, If I restart the controller node, what if the rabbitmq start early than neutron-server. I mean the ovs agent always trying to connect to the rabbitmq. It will report state to the server through RPC once the connection established. if the server is not ready at this time. Does it cause the agent exception? Any suggestion would be greatly appreciated! >> >> >> >> >> >> Thanks, >> >> Frank >> >> >> >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> >__________________________________________________________________________ >> >OpenStack Development Mailing List (not for usage questions) >> >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dtantsur at redhat.com Tue Jan 23 10:23:36 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 23 Jan 2018 11:23:36 +0100 Subject: [openstack-dev] [ironic] FFE - classic drivers deprecation Message-ID: Hi all, I'm writing to request an FFE for the classic drivers deprecation work [1][2]. This is a part of the driver composition reform [3] - the effort started in Ocata to revamp bare metal drivers. The following changes are in scope of this FFE: 1. Provide an automatic migration to hardware types as part of 'ironic-dbsync online_data_migrations' 2. Update the CI to use hardware types 3. Issue a deprecation warning when loading classic drivers, and deprecate the enabled_drivers option. Finishing it in Queens will allow us to stick to our schedule (outlined in [1]) to remove classic drivers in Rocky. Keeping two methods of loading drivers is a maintenance burden. Even worse, two sets of mostly equivalent drivers confuse users, and the confusion will increase as we introduce features (like rescue) that are only available for nodes using the new-style drivers. The downside of this work is that it introduces a non-trivial data migration close to the end of the cycle. Thus, it is designed [1][2] to not fail if the migration cannot fully succeed due to environmental reasons. rloo and stendulker were so kind to agree to review this work during the feature freeze window, if it gets an exception. Dmitry [1] http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html [2] https://review.openstack.org/536298 [3] http://specs.openstack.org/openstack/ironic-specs/specs/7.0/driver-composition-reform.html
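For readers less familiar with the reform, a minimal sketch of what the move away from a classic driver looks like in ironic.conf. The values below are only an illustration for the common ipmitool case; the exact interfaces to enable depend on the drivers you actually use:

    [DEFAULT]
    # deprecated classic driver, scheduled for removal in Rocky:
    #enabled_drivers = pxe_ipmitool
    # roughly equivalent hardware type and interfaces:
    enabled_hardware_types = ipmi
    enabled_management_interfaces = ipmitool
    enabled_power_interfaces = ipmitool

The automatic migration mentioned in item 1 above is intended to run as part of the usual upgrade step, i.e. 'ironic-dbsync online_data_migrations'.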
> > Here's what I think we should do: > > (1) Sean has had a patch up for a while separating out the unit tests > from the functional tests. I think it's a good idea. If you are > aware of a reason why they should NOT be separated, please comment on > the patch: > https://review.openstack.org/#/c/474816/ > I'd like to merge this soon so we can at least restore py27 unit tests > to the requirements gate. We can always revert if it turns out that > there is a really good reason for not separating out the functional > tests. > > (2) I've got a patch up that depends on Sean's patch and restores the > functional test gate jobs to the glance .zuul.yaml file (though it > makes the py27 functional tests non-voting): > https://review.openstack.org/#/c/536630/ > > (3) Continue to work on https://bugs.launchpad.net/glance/+bug/1744824 > to figure out why the py27 functional tests are failing. As far as I > can tell, it looks like a distro package issue. > > > thanks, > brian > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arxcruz at redhat.com Tue Jan 23 10:43:04 2018 From: arxcruz at redhat.com (Arx Cruz) Date: Tue, 23 Jan 2018 11:43:04 +0100 Subject: [openstack-dev] [tripleo] TripleO CI end of sprint status Message-ID: Hello, Sorry the delay... On January 17 we came the end of sprint using our new team structure, and here’s the highlights. Sprint Review: On this sprint, the team worked in the first steps to have metrics enabled in Tripleo Jobs. With this in place will be easier to identify places where we are seeing the code taking more time than usual, and will be easier to developers identify more easily where to focus. One can see the results of the sprint via https://tinyurl.com/yb4z5gd4 Ruck and Rover What is Ruck and Rover One person in our team is designated Ruck and another Rover, one is responsible to monitoring the CI, checking for failures, opening bugs, participate on meetings, and this is your focal point to any CI issues. The other person, is responsible to work on these bugs, fix problems and the rest of the team are focused on the sprint. 
For more information about our structure, check [1]

List of bugs that Ruck and Rover were working on:

- https://bugs.launchpad.net/tripleo/+bug/1741445 - tripleo-ci-centos-7-scenario002-multinode-oooq-container failing with timeout in deploy overcloud
- https://bugs.launchpad.net/tripleo/+bug/1741850 - VolumeEncryptionTest is failing in tripleo-ci-centos-7-scenario002-multinode-oooq-container with request timeout
- https://bugs.launchpad.net/tripleo/+bug/1742080 - Job tripleo-quickstart-gate-newton-delorean-quick-basic fails with missing ci_centos_libvirt.yml
- https://bugs.launchpad.net/tripleo/+bug/1742435 - tripleo-quickstart-gate-master-delorean-quick-basic failing to parse jenkins env vars
- https://bugs.launchpad.net/tripleo/+bug/1742465 - periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset002-ocata is failing with Resource could not be found on mistral
- https://bugs.launchpad.net/tripleo/+bug/1742557 - quickstart reproducer create script is getting skipped
- https://bugs.launchpad.net/tripleo/+bug/1742528 - ovb jobs in rdo-cloud are not logging the overcloud nodes

We also have our new Ruck and Rover for this week:

- Ruck - John Trownbridge - trown|ruck
- Rover - Wesley Hayutin - weshay|rover

If you have any questions and/or suggestions, please contact us

[1] https://review.openstack.org/#/c/509280/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arxcruz at redhat.com  Tue Jan 23 10:43:58 2018
From: arxcruz at redhat.com (Arx Cruz)
Date: Tue, 23 Jan 2018 11:43:58 +0100
Subject: [openstack-dev] [tripleo] TripleO CI Squad meeting
Message-ID: 

Hello,

Here are the highlights from the TripleO CI Squad meeting from January 18:

- Roles
  - The Ruck and the Rover will be responsible for any CI problems, so if you have anything related to CI, please contact them. The rest of the team will work on the sprint
    - Ruck - John Trownbridge - trown|ruck
    - Rover - Wesley Hayutin - weshay|rover
    - Team - Arx Cruz - Ronelle Landy - Attila Darazs - Gabrielle Cerami - Rafael Folco - Matt Young
- For this sprint 01/18/2018 - 01/31/2018
  - The proposed topic is work on internal ci jobs
    - Promotion pipeline
    - OVB jobs
    - Upgrade jobs
  - The epic task with more information can be found here https://trello.com/c/rzHPI7kb
  - Tasks can be found in the trello card above, or in the TripleO CI Squad trello board using the filter by label "Sprint 7" or by clicking this link: https://tinyurl.com/ya359l2l
- Promotions
  - Master promoted 12 days ago
    - There are some issues opened and the team is working on that
  - Pike promoted 13 days ago
  - Newton promoted today

If you have any questions or suggestions please let us know. Your feedback is very important to us!

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From anlin.kong at gmail.com  Tue Jan 23 11:12:57 2018
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Wed, 24 Jan 2018 00:12:57 +1300
Subject: [openstack-dev] [faas] [qinling] project update - 3
Message-ID: 

Hi, all

This project update is posted bi-weekly, but feel free to get in touch in #openstack-qinling anytime.

- Function package md5 check. This feature allows the user to specify the md5 checksum for the code package when creating the function, so the function package can be verified after downloading. If the CLI is used, the md5 checksum will be calculated automatically (a client-side sketch follows this list).
- Function webhook. The user can expose a function to a 3rd party service (e.g. GitHub) by creating a webhook, so that the function can be invoked without authentication.
- [CLI] Support to download function code package.
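For the md5 check above, here is a minimal sketch of the client-side checksum step (illustrative only, assuming plain hashlib; not necessarily the actual Qinling CLI code):

import hashlib

def package_md5(path, chunk_size=8192):
    # Compute the md5 checksum of a function code package,
    # reading in chunks so large packages don't exhaust memory.
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            md5.update(chunk)
    return md5.hexdigest()

The resulting hexdigest is what gets passed at function creation time and compared again after the package is downloaded.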
BTW, maybe some of you already know that the Qinling team is applying to become an OpenStack official project [1]. Feel free to leave your comments in the application; any feedback and questions are welcome.

As usual, you can easily find previous emails below.

[1]: https://review.openstack.org/#/c/533827/

Cheers,
Lingxian Kong (Larry)

---------- Forwarded message ----------
From: Lingxian Kong
Date: Mon, Jan 8, 2018 at 10:37 AM
Subject: [openstack-dev] [faas] [qinling] project update - 2
To: OpenStack Development Mailing List

Hi, all

Happy new year!

This project update is posted bi-weekly, but feel free to get in touch in #openstack-qinling anytime.

- Introduce etcd in qinling for distributed locking and storing the resources that need to be updated frequently.
- Get function workers (admin only)
- Support to detach function from the underlying orchestrator (admin only)
- Support positional args in user functions
- More unit tests and functional tests added
- Powerful resource query filtering in the qinling openstack CLI
- Conveniently delete all executions of one or more functions in the CLI

You can find previous emails below. Have a good day :-)

Cheers,
Lingxian Kong (Larry)

---------- Forwarded message ----------
From: Lingxian Kong
Date: Tue, Dec 12, 2017 at 10:18 PM
Subject: [openstack-dev] [qinling] [faas] project update - 1
To: OpenStack Development Mailing List

Hi, all

Maybe there are already some people interested in a FaaS implementation in OpenStack who have also deployed other OpenStack services to integrate with (e.g. triggering a function by object upload in Swift); Qinling is the thing you probably don't want to miss. The main motivation for creating the Qinling project came from frequent requirements of our public cloud customers.

For people who have not heard about Qinling before, please take a look at my presentation at the Sydney Summit: https://youtu.be/NmCmOfRBlIU

There is also a simple demo video: https://youtu.be/K2SiMZllN_A

As the first project update email, I will just list the features implemented so far:

- Python runtime
- Sync/Async function execution
- Job (invoke function on schedule)
- Function defined in swift object storage service
- Function defined in docker image
- Easy to interact with openstack services in function
- Function autoscaling based on request rate
- RBAC operation
- Function resource limitation
- Simple documentation

I will keep posting the project update bi-weekly, but feel free to get in touch in #openstack-qinling anytime.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From paul.bourke at oracle.com  Tue Jan 23 12:45:07 2018
From: paul.bourke at oracle.com (Paul Bourke)
Date: Tue, 23 Jan 2018 12:45:07 +0000
Subject: [openstack-dev] [zuul] Cannot view log outputs in browser
Message-ID: 

Apologies if this has been asked before. It seems as of late (I think since the roll out of Zuul v3) I can't view job outputs directly in my browser. E.g. when I click link [0], I have to download 'job-output.txt.gz', unzip it, rename the extension to '.html', and finally open it in a browser. I've tried this with both Chrome 63.0.3239.132 and Firefox 57.0.4, OS Ubuntu 16.04.

Is anyone else seeing this issue?
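In case it helps narrow things down, the first thing I'd compare is the response headers, e.g. (command illustrative, using the log link from [0] below):

curl -sI <job-output-url> | grep -iE 'content-(type|encoding)'

i.e. whether the server announces the file as viewable text or as a gzip download.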
Thanks,
-Paul

[0] http://logs.openstack.org/71/535671/11/check/kolla-ansible-oraclelinux-source-ceph/e63c60c/job-output.txt.gz

From lyarwood at redhat.com  Tue Jan 23 13:44:49 2018
From: lyarwood at redhat.com (Lee Yarwood)
Date: Tue, 23 Jan 2018 13:44:49 +0000
Subject: [openstack-dev] [nova] Native QEMU LUKS decryption review overview ahead of FF
In-Reply-To: <20180122142212.2fqjvquljpji6kph@lyarwood.usersys.redhat.com>
References: <20180122142212.2fqjvquljpji6kph@lyarwood.usersys.redhat.com>
Message-ID: <20180123134449.vn2jqwzdhibm72to@lyarwood.usersys.redhat.com>

A brief progress update in-line below.

On 22-01-18 14:22:12, Lee Yarwood wrote:
> Hello,
>
> With M3 and FF rapidly approaching this week I wanted to post a brief overview of the QEMU native LUKS series.
>
> The full series is available on the following topic, I'll go into more detail on each of the changes below:
>
> https://review.openstack.org/#/q/topic:bp/libvirt-qemu-native-luks+status:open
>
> libvirt: Collocate encryptor and volume driver calls
> https://review.openstack.org/#/c/460243/ (Missing final +2 and +W)
>
> This refactor of the Libvirt driver connect and disconnect volume code has the added benefit of also correcting a number of bugs around the attaching and detaching of os-brick encryptors. IMHO this would be useful in Queens even if the rest of the series doesn't land.
>
> libvirt: Introduce disk encryption config classes
> https://review.openstack.org/#/c/464008/ (Missing final +2 and +W)
>
> This is the most straightforward change of the series and simply introduces the required config classes to wire up native LUKS decryption within the domain XML of an instance. Hopefully nothing controversial.

Both of these have landed, my thanks to jaypipes for his reviews!

> libvirt: QEMU native LUKS decryption for encrypted volumes
> https://review.openstack.org/#/c/523958/ (Missing both +2s and +W)
>
> This change carries the bulk of the implementation, wiring up encrypted volumes during their initial attachment. The commit message has a detailed run down of the various upgrade and LM corner cases we attempt to handle here, such as LM from a P to Q compute, detaching a P attached encrypted volume after upgrading to Q etc.

Thanks to melwitt and mdbooth for your reviews! I've respun to address the various nits and typos pointed out in this change. Ready and waiting to respin again if any others crop up.

> Upgrade and LM testing is enabled by the following changes:
>
> fixed_key: Use a single hardcoded key across devstack deployments
> https://review.openstack.org/#/c/536343/
>
> compute: Introduce an encrypted volume LM test
> https://review.openstack.org/#/c/536177/
>
> This is being tested by tempest-dsvm-multinode-live-migration and grenade-dsvm-neutron-multinode-live-migration in the following DNM Nova change, enabling volume backed LM tests:
>
> DNM: Test LM with encrypted volumes
> https://review.openstack.org/#/c/536350/
>
> Hopefully that covers everything but please feel free to ping if you would like more detail, background etc.
>
> Thanks in advance,
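One extra note for reviewers on the config class change above: the end result in the domain XML is just libvirt's standard LUKS element attached to the disk. A rough illustrative sketch (the uuid is a placeholder for the Libvirt secret carrying the passphrase, not a real value):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <encryption format='luks'>
        <secret type='passphrase' uuid='PASSPHRASE-SECRET-UUID'/>
      </encryption>
      <target dev='vdb' bus='virtio'/>
    </disk>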
On the CI side, grenade-dsvm-neutron-multinode-live-migration is currently failing due to our use of the Ocata UCA on stable/pike leading to the following issue with the libvirt 2.5.0 build it provides:

libvirt 2.5.0-3ubuntu5.6~cloud0 appears to be compiled without gnutls
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1744758

I've cherry-picked the following devstack change back to stable/pike and pulled it into the test change above for Nova, hopefully working around these failures:

Update to using pike cloud-archive
https://review.openstack.org/#/c/536798/

tempest-dsvm-multinode-live-migration is also failing, but AFAICT those failures are unrelated to this overall series and appear to be more generic volume-backed live migration failures.

Thanks again!

Lee
--
Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: not available
URL: 

From sbauza at redhat.com  Tue Jan 23 13:52:41 2018
From: sbauza at redhat.com (Sylvain Bauza)
Date: Tue, 23 Jan 2018 14:52:41 +0100
Subject: [openstack-dev] [nova] PTL Election Season
In-Reply-To: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
Message-ID: 

On Tue, Jan 23, 2018 at 12:09 AM, Matt Riedemann wrote:

> On 1/15/2018 11:04 AM, Kendall Nelson wrote:
>
>> Election details: https://governance.openstack.org/election/
>>
>> Please read the stipulations and timelines for candidates and electorate contained in this governance documentation.
>>
>> Be aware, in the PTL elections if the program only has one candidate, that candidate is acclaimed and there will be no poll. There will only be a poll if there is more than one candidate stepping forward for a program's PTL position.
>>
>> There will be further announcements posted to the mailing list as action is required from the electorate or candidates. This email is for information purposes only.
>>
>> If you have any questions which you feel affect others please reply to this email thread.
>
> To anyone that cares, I don't plan on running for Nova PTL again for the Rocky release. Queens was my fourth tour and it's definitely time for someone else to get the opportunity to lead here. I don't plan on going anywhere and I'll be here to help with any transition needed assuming someone else (or a couple of people hopefully) will run in the election. It's been a great experience and I thank everyone that has had to put up with me and my obsessive paperwork and process disorder in the meantime.

Matt, you were a very good PTL. Not only because of your reviews (after all, you'll still review changes next cycle ;) ) but also because you helped others with their blueprints or questions whenever they had them.

Keeping up with the Riedemann !
-S

> --
>
> Thanks,
>
> Matt
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amotoki at gmail.com  Tue Jan 23 14:19:59 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Tue, 23 Jan 2018 23:19:59 +0900
Subject: [openstack-dev] [horizon][packaging] django-openstack-auth retirement
In-Reply-To: <20180122113012.xe42fi24v3ljm7rz@yuggoth.org>
References: <20180122113012.xe42fi24v3ljm7rz@yuggoth.org>
Message-ID: 

2018-01-22 20:30 GMT+09:00 Jeremy Stanley :
> On 2018-01-22 14:40:49 +0900 (+0900), Akihiro Motoki wrote:
> [...]
>> If you install horizon and django-openstack-auth by using pip (instead of distribution packages), please uninstall the django-openstack-auth python package before upgrading horizon. Otherwise, the "openstack_auth" module is maintained by both horizon and django-openstack-auth after upgrading horizon and it confuses the pip file management, while horizon works.
> [...]
>
> If we were already publishing Horizon to PyPI, we could have a new (and final) major version of DOA as a transitional package to stop providing any module itself and depend on the new version of Horizon which provides that module instead. I suppose without Horizon on PyPI, documentation of the issue is the most we can do for this situation.

Horizon usually does not publish its releases to PyPI, so I think what we can do is to document it.

P.S. The only horizon exceptions on PyPI are the 12.0.2 and 2012.2 releases. 12.0.2 was released last week but I don't know why it is available on PyPI. In deliverables/pike/horizon.yaml in the openstack/releases repo, we don't have "include-pypi-link: yes".

Thanks,
Akihiro

From doug at doughellmann.com  Tue Jan 23 14:26:30 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 23 Jan 2018 09:26:30 -0500
Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID
In-Reply-To: <7b4c5530-55e9-2590-1b67-74b5ff938ef9@switch.ch>
References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> <1516293565-sup-9123@lrrr.local> <1516295114-sup-7111@lrrr.local> <1516630943-sup-4108@lrrr.local> <1516659378-sup-8232@lrrr.local> <7b4c5530-55e9-2590-1b67-74b5ff938ef9@switch.ch>
Message-ID: <1516716895-sup-6461@lrrr.local>

Excerpts from Saverio Proto's message of 2018-01-23 10:21:37 +0100:
> Hello Doug,
>
> I have run the script; here is my output:
>
> http://paste.openstack.org/show/650913/

It looks like the logger returned by oslo.log does include the values in the "extra" section but the others do not because the code to introduce the values isn't invoked, just as when I run the script.

> At this point I have some questions. Can I upgrade just the oslo.log library, keeping the rest of the stuff in Newton?

Maybe. oslo.log has several dependencies that may also need to be updated. Those should be backwards-compatible, but the configuration you would end up with is not something we have ever tested. I'm also not sure how you would get the right Ubuntu packages in place. Someone on the openstack-operators mailing list might be able to help with that.

> The versions of oslo.log have a different numbering scheme than other openstack projects, so I cannot understand the versions compatibility.

The releases web site (https://releases.openstack.org) documents the versions of all of the components released for each series.

> As far as I understand 3.34.0 should be enough for me?:
> git tag --contains 1b012d0fc6811f00e032e52ed586fe37e157584d
> 3.34.0
> 3.35.0
> 3.36.0

3.34.0 is a queens series release, which makes it more likely that more other dependencies would need to be updated. Even backporting the changes to the Ocata branch and releasing it from there would require updating several other libraries.

Are you using packages from Canonical, or are you building them yourself?

Doug

> thank you
>
> Saverio
>
> On 22.01.18 23:20, Doug Hellmann wrote:
> > Excerpts from Saverio Proto's message of 2018-01-22 18:45:15 +0100:
> >> Hello Doug,
> >>
> >> in the extra section I see just {"project": "unknown", "version": "unknown"}
> >>
> >> here is a full line from nova-api:
> >>
> >> {"thread_name": "MainThread", "extra": {"project": "unknown", "version": "unknown"}, "process": 31142, "relative_created": 3459415335.4091644, "module": "wsgi", "message": "2001:xxx:xxxx:8100::80,2001:xxx:xxxx:81ff::b0 \"GET /v2/64b5b50eb21d4efe9783eb1d81a9ec65/os-services HTTP/1.1\" status: 200 len: 1812 time: 0.1813300", "hostname": "nova-0", "filename": "wsgi.py", "levelno": 20, "lineno": 555, "asctime": "2018-01-22 18:37:02,312", "msg": "2001:xxx:xxxx:8100::80,2001:xxx:xxxx:81ff::b0 \"GET /v2/64b5b50eb21d4efe9783eb1d81a9ec65/os-services HTTP/1.1\" status: 200 len: 1812 time: 0.1813300", "args": [], "process_name": "MainProcess", "name": "nova.osapi_compute.wsgi.server", "thread": 140414249163824, "created": 1516642622.312235, "traceback": null, "msecs": 312.23511695861816, "funcname": "handle_one_response", "pathname": "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", "levelname": "INFO"}
> >
> > It looks like you're running into a limitation of the older version of the library where the context was only logged from openstack source code. This particular log message is coming from the eventlet library.
> >
> > Try running the script below and saving the output to a pastebin.
> >
> > Under the newton version of oslo.log, I get http://paste.openstack.org/show/650566/ and under the queens version I get http://paste.openstack.org/show/650569/ which shows me that the "extra" handling is working more or less the same way but the "context" handling is improved in the newer version (lots of the values are null because I don't fully set up the context, but the request_id field has a valid value).
> >
> > Doug
> >
> > #!/usr/bin/env python
> >
> > from __future__ import print_function
> >
> > import logging
> >
> > from oslo_context import context
> > from oslo_log import formatters, log
> >
> > ch = logging.StreamHandler()
> > ch.setLevel(logging.DEBUG)
> >
> > formatter = formatters.JSONFormatter()
> > ch.setFormatter(formatter)
> >
> > LOG = logging.getLogger()
> > LOG.setLevel(logging.DEBUG)
> > LOG.addHandler(ch)
> >
> > ctx = context.RequestContext(request_id='the-request-id')
> >
> > LOG.debug('without extra')
> > print()
> >
> > LOG.debug('with extra', extra={'context': ctx})
> > print()
> >
> > log.getLogger().debug('via KeywordArgumentAdapter', context=ctx)
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From rbowen at redhat.com  Fri Jan 19 21:25:58 2018
From: rbowen at redhat.com (Rich Bowen)
Date: Fri, 19 Jan 2018 16:25:58 -0500
Subject: [openstack-dev] [all] [ptg] Video interviews again at the PTG
Message-ID: <40b789e8-ac0b-fdc6-70db-35a42b09e147@redhat.com>

TL;DR: Sign up for PTG interviews at https://docs.google.com/spreadsheets/d/1MK7rCgYXCQZP1AgQ0RUiuc-cEXIzW5RuRzz5BWhV4nQ/edit#gid=0

As at previous PTGs, I will be doing interviews in Dublin, which will be posted to http://youtube.com/RDOCommunity - where you can see some past examples.

If you, or your project/team/company/whatever wish to participate in one of these interviews, please sign up at https://docs.google.com/spreadsheets/d/1MK7rCgYXCQZP1AgQ0RUiuc-cEXIzW5RuRzz5BWhV4nQ/edit#gid=0

That spreadsheet also includes a description of the kinds of things we're looking for, and links to examples of videos from previous PTGs.

I have 56 interview slots, so there should be plenty of room for most projects, as well as various cross-project interviews, so talk with your project team, and claim a spot!

--
Rich Bowen - rbowen at redhat.com
@RDOcommunity // @CentOSProject // @rbowen

From lebre.adrien at free.fr  Tue Jan 23 14:49:44 2018
From: lebre.adrien at free.fr (lebre.adrien at free.fr)
Date: Tue, 23 Jan 2018 15:49:44 +0100 (CET)
Subject: [openstack-dev] [FEMDC] A first step to the gap analysis of the OpenStack code base
In-Reply-To: <1073909094.359978697.1516718776029.JavaMail.root@zimbra29-e5>
Message-ID: <1560444865.359995003.1516718984389.JavaMail.root@zimbra29-e5>

Dear all,

Following our last exchanges, we had a few discussions regarding the OpenStack code base w.r.t. the edge challenges. Our idea was (i) to identify a couple of requirements (from the simplest one, i.e. starting a VM at a specific location, to the most advanced one, i.e. interoperability issues between distinct cloud stacks) and (ii) to analyse how the OpenStack codebase can fulfil these requirements.

Our exchanges have been summarised at: https://etherpad.openstack.org/p/edge-gap-analysis

Please feel free to comment/complete it.

Best regards,
On behalf of the FEMDC SiG.
ad_ri3n_

From dmsimard at redhat.com  Tue Jan 23 14:56:38 2018
From: dmsimard at redhat.com (David Moreau Simard)
Date: Tue, 23 Jan 2018 09:56:38 -0500
Subject: [openstack-dev] [zuul] Cannot view log outputs in browser
In-Reply-To: 
References: 
Message-ID: 

That's odd. What should happen is that while the file is zipped as .txt.gz, Apache decompresses the file on-the-fly and serves it to you as plain text [1].
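The usual Apache idiom for serving pre-compressed text like this looks roughly like the snippet below (illustrative only; the actual vhost template is what's linked in [1]):

<FilesMatch "\.txt\.gz$">
    ForceType text/plain
    AddDefaultCharset UTF-8
    AddEncoding x-gzip gz
</FilesMatch>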
This should work even with something as basic as curl. There's no javascript or fancy things involved:

$ curl -s http://logs.openstack.org/23/536623/1/check/build-openstack-sphinx-docs/3bbaf99/job-output.txt.gz | wc -l
1569

To my knowledge, this hasn't changed even throughout the gradual rollout of Zuul v3.

There is likely a redirect happening behind the scenes in order to use the os-loganalyze middleware, which is what makes the timestamps clickable, adds colors and things like that [2].

Would you perhaps have an extension or plugin that would block something like that?

[1]: http://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/templates/logs.vhost.erb#n21
[2]: http://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/templates/logs.vhost.erb#n107

David Moreau Simard
Senior Software Engineer | Openstack RDO
dmsimard = [irc, github, twitter]

On Jan 23, 2018 7:52 AM, "Paul Bourke" wrote:

Apologies if this has been asked before. It seems as of late (I think since the roll out of Zuul v3) I can't view job outputs directly in my browser. E.g. when I click link [0], I have to download 'job-output.txt.gz', unzip it, rename the extension to '.html', and finally open it in a browser. I've tried this with both Chrome 63.0.3239.132 and Firefox 57.0.4, OS Ubuntu 16.04.

Is anyone else seeing this issue?

Thanks,
-Paul

[0] http://logs.openstack.org/71/535671/11/check/kolla-ansible-oraclelinux-source-ceph/e63c60c/job-output.txt.gz

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gord at live.ca  Tue Jan 23 16:13:04 2018
From: gord at live.ca (gordon chung)
Date: Tue, 23 Jan 2018 16:13:04 +0000
Subject: [openstack-dev] [zuul] Cannot view log outputs in browser
In-Reply-To: 
References: 
Message-ID: 

On 2018-01-23 07:45 AM, Paul Bourke wrote:
> Apologies if this has been asked before. It seems as of late (I think since the roll out of Zuul v3) I can't view job outputs directly in my browser. E.g. when I click link [0], I have to download 'job-output.txt.gz', unzip it, rename the extension to '.html', and finally open it in a browser. I've tried this with both Chrome 63.0.3239.132 and Firefox 57.0.4, OS Ubuntu 16.04.
>
> Is anyone else seeing this issue?

unless you have some special extension on all browsers it's probably because of your company's proxy. it happens to me as well at work depending on what network i'm connected to.

--
gord

From ricardo at lsd.ufcg.edu.br  Tue Jan 23 16:57:24 2018
From: ricardo at lsd.ufcg.edu.br (Ricardo Araújo)
Date: Tue, 23 Jan 2018 13:57:24 -0300 (BRT)
Subject: [openstack-dev] [ironic] FFE request for deprecating python-oneviewclient from OneView interfaces
Message-ID: <1278623784.157763.1516726644546.JavaMail.zimbra@lsd.ufcg.edu.br>

Hi,

I'd like to request an FFE for deprecating python-oneviewclient and introducing python-hpOneView in the OneView interfaces [1]. This migration was performed in the Pike cycle but it was reverted due to the lack of CA certificate validation in python-hpOneView (available since 4.4.0 [2]). As the introduction of the new lib was already merged [3], the following changes are in scope of this FFE:
1. Replace python-oneviewclient with python-hpOneView in the power, management, inspect and deployment interfaces for the OneView hardware type [4]
2. Move existing ironic related validation hosted in python-oneviewclient to the ironic code base [5]
3. Remove the python-oneviewclient dependency from Ironic [6]

By performing this migration in Queens we will be able to concentrate efforts on maintaining a single python lib for accessing HPE OneView while being able to enhance the current interfaces with features already provided in python-hpOneView, like soft power operations [7] and timeouts for power operations [8].

Despite being a big change to merge close to the end of the cycle, all migration patches have received core reviewers' attention lately and a few positive reviews. They're also passing in both the community and UFCG OneView CI (running deployment tests with HPE OneView). Postponing this will be a blocker for the teams responsible for maintaining this hardware type and both python libs for the next cycle.

dtantsur and TheJulia have kindly agreed to keep reviewing this work during the feature freeze window, if it gets an exception.

Thanks,
Ricardo (ricardoas)

[1] - https://bugs.launchpad.net/ironic/+bug/1693788
[2] - https://github.com/HewlettPackard/python-hpOneView/releases/tag/v4.4.0
[3] - https://review.openstack.org/#/c/523943/
[4] - https://review.openstack.org/#/c/524310/
[5] - https://review.openstack.org/#/c/524599/
[6] - https://review.openstack.org/#/c/524729/
[7] - https://review.openstack.org/#/c/510685/
[8] - https://review.openstack.org/#/c/524624/

Ricardo Araújo Santos - www.lsd.ufcg.edu.br/~ricardo
M.Sc in Computer Science at UFCG - www.ufcg.edu.br
Researcher and Developer at Distributed Systems Laboratory - www.lsd.ufcg.edu.br
Paraíba - Brasil

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emilien at redhat.com  Tue Jan 23 17:49:49 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 23 Jan 2018 09:49:49 -0800
Subject: [openstack-dev] [tripleo] The Weekly Owl - 6th Edition
Message-ID: 

Note: this is the sixth edition of a weekly update of what happens in TripleO. The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing. Any contributions and feedback are welcome.

Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126270.html

+---------------------------------+
| General announcements |
+---------------------------------+

+--> Shipping Queens milestone 3 this week.
+--> 2 new contributors this week: David Vallee Delisle and Lars Brune. Welcome!
+--> The team should be planning for Rocky and the next PTG: https://etherpad.openstack.org/p/tripleo-ptg-rocky
+--> Specs targeted to Queens need to move to Rocky or will be abandoned.

+------------------------------+
| Continuous Integration |
+------------------------------+

+--> Rover is Wes and Ruck is John. Please let them know about any new CI issue.
+--> Master promotion is 18 days, Pike is 17 days and Ocata is 17 days.
+--> The team is still working on getting a promotion asap.
+--> Sprint 7 started a week ago, team is focusing on internal ci jobs.
+--> OVB jobs are now run in RDO cloud only and not RH1 anymore.
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP

+-------------+
| Upgrades |
+-------------+

+--> CI is now testing upgrades from Ocata to Pike and it works fine!
+--> Reviews are needed on FFU, the Queens upgrade workflow and undercloud backup.
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status and https://etherpad.openstack.org/p/tripleo-upgrade-squad-meeting

+---------------+
| Containers |
+---------------+

+--> Kubernetes: dealing with networking during OpenShift deployment.
+--> Containerized undercloud: work is targeted for Rocky at this point. We want to make the job voting and switch multinode jobs to only deploy a containerized undercloud and run tempest. More to discuss at PTG.
+--> Containerized overcloud: good progress on "container prepare"
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+--------------+
| Integration |
+--------------+

+--> Need reviews on Manila/CephNFS (FFE asked on ML)
+--> Multiple Ceph clusters was moved to Rocky.
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+---------+
| UI/CLI |
+---------+

+--> Working through NPM issues.
+--> Roles management tripleo-common and tripleo-ui patches still ready/needing to get merged ASAP https://review.openstack.org/512266
+--> Many Rocky planning/discussions, focusing on making the UI the best place to configure the network
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---------------+
| Validations |
+---------------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---------------+
| Networking |
+---------------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--------------+
| Workflows |
+--------------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+-------------+
| Owl facts |
+-------------+

The Taliabu Masked Owl is a medium-sized owl with no ear-tufts. It is also known as the Sula Island Masked Owl. Very little is known about this rare species. The facial disc is pale reddish-brown, becoming darker towards the eyes. The rim is similar in colour. The eyes are blackish-brown, and the bill blackish-grey. Upperparts are dark brown, with whitish speckles from the crown to the lower back and on the wing coverts. Primaries are uniform brown with whitish tips. The tail is brown with three darker bars. Underparts are deep golden-brown with dark spots, with some of the spots having pale areas. Legs are feathered reddish-brown to the lower third of the tarsi. The toes and the bare parts of the tarsi are grey. The claws are blackish.

Very important: Voice: A hissing sound typical of barn owls.

Our friends from the Taliabu Island in the Sula Archipelago in the Moluccan Sea are very lucky to have this owl!
(source: https://www.owlpages.com/owls/species.php?s=140)

Stay tuned!
--
Your fellow reporter, Emilien Macchi

From cdent+os at anticdent.org  Tue Jan 23 18:40:07 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Tue, 23 Jan 2018 18:40:07 +0000 (GMT)
Subject: [openstack-dev] [tc] [all] TC Report 18-04
Message-ID: 

(Hyperlinkified for your pleasure: https://anticdent.org/tc-report-18-04.html )

When a person is in early adolescence they get cramps in their legs and call it growing pains. Later, in adulthood, there's a different kind of pain when the strategies and tactics used to survive adolescence are no longer effective and there's a chafing that won't subside until there's been a change in behavior and expectations; an adaptation to new constraints and realities. Whatever that is called, we've got it going on in OpenStack, evident in the discussion had in the past week.
## OpenStack-wide Goals

There are four proposed [OpenStack-wide goals](https://governance.openstack.org/tc/goals/index.html):

* [Add Cold upgrades capabilities](https://review.openstack.org/#/c/533544/)
* [Add Rocky goal to remove mox](https://review.openstack.org/#/c/532361/)
* [Add Rocky goal to enable mutable configuration](https://review.openstack.org/#/c/534605/)
* [Add Rocky goal to ensure pagination links](https://review.openstack.org/#/c/532627/)

These need to be validated by the community, but they are not [getting as much feedback](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-23.log.html#t2018-01-23T09:16:51) as hoped. There are different theories as to why, from "people are busy", to "people don't feel empowered to comment", to "people don't care". Whatever it is, without input the onus falls on the TC to make choices, increasing the risk that the goals will be perceived as a diktat. As always, we need to work harder to have high fidelity feedback loops. This is especially true in our "mature" phase.

## Interop Testing

Despite lots of discussion in [email](http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html) and on the [review](https://review.openstack.org/#/c/521602/), the effort to clarify how trademark and interop tests are to be managed remains unresolved. Some [discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-23.log.html#t2018-01-23T13:46:15) today explored whether there is an ordering problem.

I find the whole thing very confusing. People who care about trademark tests should write and review any new ones in a trademark repo that hosts the trademark tempest plugin. Existing tests should migrate or be copied there as time allows. Then the trademark tests have a single responsibility and a single home and we don't have to think so much. People imply that this is crazy, and yes, it requires some effort and has some duplication, but doesn't everything?

## Scope of OpenStack Projects

Last Thursday, dhellmann [started a conversation](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-18.log.html#t2018-01-18T15:32:13) about what makes an OpenStack project, prompted by [Qinling's](https://review.openstack.org/#/c/533827/) application to be "official". The adult reality here is [stated pretty clearly](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-18.log.html#t2018-01-18T15:43:53) by Doug:

> we used to have 2 options, yes or no. Now we have yes, no, and "let us help you set up your own thing over here"

To some extent gatekeeping projects is the main job of the TC, and now we've made it a bit more confusing.

## PTL Balance

In [this morning's office hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-18.log.html#t2018-01-18T15:43:53) we had a discussion about ways to help the PTL role (especially of the larger and most active projects) be more manageable and balanced. The main challenge is that as currently constituted, the person in the PTL role often needs to keep the state of the whole project in their head.

That's not sustainable.
--
Chris Dent (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent

From Louie.Kwan at windriver.com  Tue Jan 23 18:58:38 2018
From: Louie.Kwan at windriver.com (Kwan, Louie)
Date: Tue, 23 Jan 2018 18:58:38 +0000
Subject: [openstack-dev] [requirements] requirements-tox-validate-projects FAILURE
In-Reply-To: <1516314479.2080791.1240370112.48E05F7B@webmail.messagingengine.com>
References: <47EFB32CD8770A4D9590812EE28C977E961DD31C@ALA-MBC.corp.ad.wrs.com> <1516314479.2080791.1240370112.48E05F7B@webmail.messagingengine.com>
Message-ID: <47EFB32CD8770A4D9590812EE28C977E961DFD3C@ALA-MBC.corp.ad.wrs.com>

Thanks Clark. Got the +1 from zuul.

By the way, I also changed to use openstack/automaton instead of transitions. It is all good now.

Louie

-----Original Message-----
From: Clark Boylan [mailto:cboylan at sapwetik.org]
Sent: Thursday, January 18, 2018 5:28 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [requirements] requirements-tox-validate-projects FAILURE

On Thu, Jan 18, 2018, at 1:54 PM, Kwan, Louie wrote:
> Would like to add the following module to the openstack/masakari project:
>
> https://github.com/pytransitions/transitions
>
> https://review.openstack.org/#/c/534990/
>
> requirements-tox-validate-projects failed:
>
> http://logs.openstack.org/90/534990/6/check/requirements-tox-validate-projects/ed69273/ara/result/4ee4f7a1-456c-4b89-933a-fe282cf534a3/
>
> What else needs to be done?

Reading the log [0] the job failed because python-cratonclient removed its check-requirements job. This was done in https://review.openstack.org/#/c/535344/ as part of the craton retirement and should be fixed on the requirements side by https://review.openstack.org/#/c/535351/. I think a recheck at this point will come back green (so I have done that for you).

[0] http://logs.openstack.org/90/534990/6/check/requirements-tox-validate-projects/ed69273/job-output.txt.gz#_2018-01-18_20_07_54_531014

Hope this helps,
Clark

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From Louie.Kwan at windriver.com  Tue Jan 23 19:00:28 2018
From: Louie.Kwan at windriver.com (Kwan, Louie)
Date: Tue, 23 Jan 2018 19:00:28 +0000
Subject: [openstack-dev] [Zuul] requirements-check FAILURE
In-Reply-To: <12ddcf60-7f35-efa1-626f-5b36e3c7b527@suse.com>
References: <47EFB32CD8770A4D9590812EE28C977E961DC346@ALA-MBC.corp.ad.wrs.com> <12ddcf60-7f35-efa1-626f-5b36e3c7b527@suse.com>
Message-ID: <47EFB32CD8770A4D9590812EE28C977E961DFD68@ALA-MBC.corp.ad.wrs.com>

I got it now. Thanks all for the info.

-----Original Message-----
From: Andreas Jaeger [mailto:aj at suse.com]
Sent: Thursday, January 18, 2018 2:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zuul] requirements-check FAILURE

On 2018-01-17 23:01, Kwan, Louie wrote:
> Would like to add the following module to the openstack/masakari project:
>
> https://github.com/pytransitions/transitions
>
> Got the following error with the zuul requirements-check:
>
> Requirement set([Requirement(package=u'transitions', location='', specifiers='>=0.6.4', markers=u'', comment='', extras=frozenset([]))]) not in openstack/requirements
>
> http://logs.openstack.org/88/534888/3/check/requirements-check/edec7bf/ara/
>
> Any tip or insight to fix it?

Yes, read on how to add it:
https://docs.openstack.org/requirements/latest/
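In practice the missing piece is usually a one-line addition to global-requirements.txt in the openstack/requirements repo, something along these lines (version specifier and license comment illustrative):

transitions>=0.6.4  # MIT

plus the projects.txt / upper-constraints.txt updates that the review process described in the docs above walks you through.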
Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From emilien at redhat.com  Tue Jan 23 19:48:00 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 23 Jan 2018 11:48:00 -0800
Subject: [openstack-dev] [tripleo] FFE nfs_ganesha integration
In-Reply-To: <7dfdaada-bfae-f4f5-b8d9-e541757585e2@redhat.com>
References: <7dfdaada-bfae-f4f5-b8d9-e541757585e2@redhat.com>
Message-ID: 

I agree this would be a great addition but I'm worried about the patches, which right now don't pass the check pipeline. Also, I don't see any release notes explaining the changes to our users, and it's supposed to improve user experience...

Please add release notes, make CI pass, and we'll probably grant the FFE.

On Mon, Jan 22, 2018 at 8:34 AM, Giulio Fidente wrote:
> hi,
>
> I would like to request an FFE for the integration of nfs_ganesha, which will provide a better user experience to manila users
>
> This work was slowed down by a few factors:
>
> - it depended on the migration of tripleo to the newer Ceph version (luminous), which happened during the queens cycle
>
> - it depended on some additional functionalities to be implemented in ceph-ansible which have only recently been made available to tripleo/ci
>
> - it proposes the addition of an additional (and optional) network (storagenfs) so that guests don't need connectivity to the ceph frontend network to be able to use the cephfs shares
>
> The submissions are under review and partially testable in CI [1]. If accepted, I'd like to reassign the blueprint [2] back to the queens cycle, as it was initially.
>
> Thanks
>
> 1. https://review.openstack.org/#/q/status:open+topic:bp/nfs-ganesha
> 2. https://blueprints.launchpad.net/tripleo/+spec/nfs-ganesha
> --
> Giulio Fidente
> GPG KEY: 08D733BA
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Emilien Macchi

From MM9745 at att.com  Tue Jan 23 21:05:47 2018
From: MM9745 at att.com (MCEUEN, MATT)
Date: Tue, 23 Jan 2018 21:05:47 +0000
Subject: [openstack-dev] [openstack-helm] Requesting availability for Office Hours
Message-ID: <7C64A75C21BB8D43BD75BB18635E4D89654C176D@MOSTLS1MSGUSRFF.ITServices.sbc.com>

OpenStack-Helm team,

We're seeking to accelerate onboarding of new team members and shorten the learning curve. One idea that has resonated is to have team office hours, where experienced team members set aside time to answer questions and help newer team members get up to speed in the IRC channel. We'd like for all core reviewers to have office hours, in addition to other team members who are willing to help!

To get maximum participation and good cross-time-zone coverage, we'd like to pick 2-4 hours in the week that fit everyone's schedules best.
If you're an OSH core or interested in helping, can you please fill in the times in a *typical* week when you could set time aside? Once the results are in, we'll pick a few times that seem to work best for everyone.

https://doodle.com/poll/say7eiy5573vthqe

Click the "Calendar" tab to get a more readable view.

Thanks!
Matt McEuen

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From corey.bryant at canonical.com  Tue Jan 23 21:52:30 2018
From: corey.bryant at canonical.com (Corey Bryant)
Date: Tue, 23 Jan 2018 16:52:30 -0500
Subject: [openstack-dev] [nova] Native QEMU LUKS decryption review overview ahead of FF
In-Reply-To: <20180123134449.vn2jqwzdhibm72to@lyarwood.usersys.redhat.com>
References: <20180122142212.2fqjvquljpji6kph@lyarwood.usersys.redhat.com> <20180123134449.vn2jqwzdhibm72to@lyarwood.usersys.redhat.com>
Message-ID: 

On Tue, Jan 23, 2018 at 8:44 AM, Lee Yarwood wrote:

> A brief progress update in-line below.
>
> On 22-01-18 14:22:12, Lee Yarwood wrote:
> > Hello,
> >
> > With M3 and FF rapidly approaching this week I wanted to post a brief overview of the QEMU native LUKS series.
> >
> > The full series is available on the following topic, I'll go into more detail on each of the changes below:
> >
> > https://review.openstack.org/#/q/topic:bp/libvirt-qemu-native-luks+status:open
> >
> > libvirt: Collocate encryptor and volume driver calls
> > https://review.openstack.org/#/c/460243/ (Missing final +2 and +W)
> >
> > This refactor of the Libvirt driver connect and disconnect volume code has the added benefit of also correcting a number of bugs around the attaching and detaching of os-brick encryptors. IMHO this would be useful in Queens even if the rest of the series doesn't land.
> >
> > libvirt: Introduce disk encryption config classes
> > https://review.openstack.org/#/c/464008/ (Missing final +2 and +W)
> >
> > This is the most straightforward change of the series and simply introduces the required config classes to wire up native LUKS decryption within the domain XML of an instance. Hopefully nothing controversial.
>
> Both of these have landed, my thanks to jaypipes for his reviews!
>
> > libvirt: QEMU native LUKS decryption for encrypted volumes
> > https://review.openstack.org/#/c/523958/ (Missing both +2s and +W)
> >
> > This change carries the bulk of the implementation, wiring up encrypted volumes during their initial attachment. The commit message has a detailed run down of the various upgrade and LM corner cases we attempt to handle here, such as LM from a P to Q compute, detaching a P attached encrypted volume after upgrading to Q etc.
>
> Thanks to melwitt and mdbooth for your reviews! I've respun to address the various nits and typos pointed out in this change. Ready and waiting to respin again if any others crop up.
>
> > Upgrade and LM testing is enabled by the following changes:
> >
> > fixed_key: Use a single hardcoded key across devstack deployments
> > https://review.openstack.org/#/c/536343/
> >
> > compute: Introduce an encrypted volume LM test
> > https://review.openstack.org/#/c/536177/
> >
> > This is being tested by tempest-dsvm-multinode-live-migration and grenade-dsvm-neutron-multinode-live-migration in the following DNM Nova change, enabling volume backed LM tests:
> >
> > DNM: Test LM with encrypted volumes
> > https://review.openstack.org/#/c/536350/
> >
> > Hopefully that covers everything but please feel free to ping if you would like more detail, background etc.
> > Thanks in advance,
>
> grenade-dsvm-neutron-multinode-live-migration is currently failing due to our use of the Ocata UCA on stable/pike leading to the following issue with the libvirt 2.5.0 build it provides:
>
> libvirt 2.5.0-3ubuntu5.6~cloud0 appears to be compiled without gnutls
> https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1744758

Hey Lee,

We have a new version of libvirt in ocata-proposed now that should fix your issue and is ready for testing. Thanks for your work on this and for opening the bug.

Corey

> I've cherry-picked the following devstack change back to stable/pike and pulled it into the test change above for Nova, hopefully working around these failures:
>
> Update to using pike cloud-archive
> https://review.openstack.org/#/c/536798/
>
> tempest-dsvm-multinode-live-migration is also failing but AFAICT they are unrelated to this overall series and appear to be more generic volume backed live migration failures.
>
> Thanks again!
>
> Lee
> --
> Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gord at live.ca  Tue Jan 23 23:09:19 2018
From: gord at live.ca (gordon chung)
Date: Tue, 23 Jan 2018 23:09:19 +0000
Subject: [openstack-dev] [tc] [all] TC Report 18-04
In-Reply-To: 
References: 
Message-ID: 

On 2018-01-23 01:40 PM, Chris Dent wrote:
>
> (Hyperlinkified for your pleasure: https://anticdent.org/tc-report-18-04.html )
>
> When a person is in early adolescence they get cramps in their legs and call it growing pains. Later, in adulthood, there's a different kind of pain when the strategies and tactics used to survive adolescence are no longer effective and there's a chafing that won't subside until there's been a change in behavior and expectations; an adaptation to new constraints and realities.

i love this intro. when does your coming-of-age book come out? :)

>
> ## OpenStack-wide Goals
>
> These need to be validated by the community, but they are not [getting as much feedback](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-23.log.html#t2018-01-23T09:16:51) as hoped. There are different theories as to why, from "people are busy", to "people don't feel empowered to comment", to "people don't care". Whatever it is, without input the onus falls on the TC to make choices, increasing the risk that the goals will be perceived as a diktat. As always, we need to work harder to have high fidelity feedback loops. This is especially true in our "mature" phase.

i think this probably links back to the issues with openstack-specs. maybe it's because the tasks aren't flashy enough, maybe it's because people have too much work. personally, i've never had a user/anyone tell me that "goal X would be useful in your project" and considering how silo'd the projects are, looking outside my project of focus is not very high on priority.

>
> ## PTL Balance
>
> In [this morning's office hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-18.log.html#t2018-01-18T15:43:53)

this link is wrong! you made me read stuff i didn't want to read!
:P i'm going to guess it roughly starts here:
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-23.log.html#t2018-01-23T09:48:03

> we had a discussion about ways to help the PTL role (especially of the larger and most active projects) be more manageable and balanced. The main challenge is that as currently constituted, the person in the PTL role often needs to keep the state of the whole project in their head.
>
> That's not sustainable.

if i were to (potentially) oversimplify it, i would agree with this statement:
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-23.log.html#t2018-01-23T10:12:22

i don't believe a PTL necessarily has to keep the whole state of the project in their head (although they could). ultimately, it's up to the PTL to decide how much they're willing to defer to others.

cheers,

--
gord

From cdent+os at anticdent.org  Tue Jan 23 23:22:50 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Tue, 23 Jan 2018 23:22:50 +0000 (GMT)
Subject: [openstack-dev] [tc] [all] TC Report 18-04
In-Reply-To: 
References: 
Message-ID: 

On Tue, 23 Jan 2018, gordon chung wrote:

> i love this intro. when does your coming-of-age book come out? :)

What, you don't have it already? It's _so_ amazing.

> i think this probably links back to the issues with openstack-specs. maybe it's because the tasks aren't flashy enough, maybe it's because people have too much work. personally, i've never had a user/anyone tell me that "goal X would be useful in your project" and considering how silo'd the projects are, looking outside my project of focus is not very high on priority.

Yes, that too.

>> ## PTL Balance
>>
>> In [this morning's office hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-18.log.html#t2018-01-18T15:43:53)
>
> this link is wrong! you made me read stuff i didn't want to read! :P i'm going to guess it roughly starts here:
> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-23.log.html#t2018-01-23T09:48:03

Bah, the one time I'm not super careful about checking links. Yeah, the one you found is the right one.

> if i were to (potentially) oversimplify it, i would agree with this statement:
> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-23.log.html#t2018-01-23T10:12:22
>
> i don't believe a PTL necessarily has to keep the whole state of the project in their head (although they could). ultimately, it's up to the PTL to decide how much they're willing to defer to others.

I think that is probably how things should be, but I'm not sure it is how things always are. I expect there's a bit of nova exceptionalism built into this analysis and also a bit of bleed between being the PTL and doing anything with significant traction in nova when not the PTL: the big picture is a big deal and you gotta be around, a lot.

But, as I've said many times, the report intentionally represents my own interpretations and biases, in hope that someone might respond and say a variety of things, including "WRONG!", driving forward our dialectic. So, thanks for responding. I owe you a cookie or something.
--
Chris Dent (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent

From myphone.tk at gmail.com  Tue Jan 23 23:57:52 2018
From: myphone.tk at gmail.com (Takehiro Kaneko)
Date: Wed, 24 Jan 2018 08:57:52 +0900
Subject: [openstack-dev] Announcement of retirement of rack and python-rackclient project
Message-ID: 

Hi all,

I'd like to inform you of the retirement of the rack and python-rackclient projects. The projects are no longer maintained.

Thank you for your contributions to the projects.

If you have any questions, please email me.

Thank you.
Takehiro.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rosmaita.fossdev at gmail.com  Wed Jan 24 01:09:48 2018
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Tue, 23 Jan 2018 20:09:48 -0500
Subject: [openstack-dev] [glance] functional gate situation
Message-ID: 

Update on the last 24 hours:

(1) Sean's patch splitting the unit and functional tests in tox has merged.

(2) My patch to restore the functional test gate jobs ran into a problem, namely, that one of the py35 tests suddenly began failing in the gate, and I haven't been able to reproduce it locally. I started looking into it, but this problem doesn't make any sense at all (you'll see what I mean when you get a chance to look at it), so I put up a patch to skip the failing test:
https://review.openstack.org/#/c/536939/
It's passed the check and I ninja-approved it, so it's in the gate now.

(3) I edited the patch restoring the functional gate jobs to not run the py27 tests at all (no sense wasting any time until we know they have a chance of working). At least we can run the py35 functional tests (except for the one being skipped):
https://review.openstack.org/#/c/536630/
(I rebased it on the skip-test patch, it's in the check now.)

I'd prefer that nothing else be merged for glance until we get the functional gate restored, which will hopefully happen sometime this evening. I'll keep an eye on (2) and (3) for the next few hours.

(4) With Sean's patch merged, I put up a patch to the requirements repo reverting the change that made the cross-glance-py27 test non-voting:
https://review.openstack.org/#/c/536946/
That's been approved and is in the gate now.

So, we've got 2 outstanding bugs:
py27 functional test failures: https://bugs.launchpad.net/glance/+bug/1744824
py35 functional test failure: https://bugs.launchpad.net/glance/+bug/1745003

... and of course the regular stuff that was mentioned on the priority email for this week:
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126353.html

cheers,
brian

From jschluet at redhat.com  Wed Jan 24 03:07:50 2018
From: jschluet at redhat.com (Jon Schlueter)
Date: Tue, 23 Jan 2018 22:07:50 -0500
Subject: [openstack-dev] Announcement of retirement of rack and python-rackclient project
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jan 23, 2018 at 6:57 PM, Takehiro Kaneko wrote:
> Hi all,
>
> I'd like to inform you of the retirement of the rack and python-rackclient projects.
>
> The projects are no longer maintained.
>
> Thank you for your contributions to the projects.
With a quick search on hound [1][2][3] I only saw one other project referencing rackclient, and that was openstack-infra/project-config [4].

[1] http://codesearch.openstack.org/?q=from%20rack&i=nope&files=&repos=
[2] http://codesearch.openstack.org/?q=rackclient&i=nope&files=&repos=
[3] http://codesearch.openstack.org/?q=import%20rack&i=nope&files=&repos=
[4] http://codesearch.openstack.org/?q=rackclient&i=nope&files=&repos=project-config

Jon Schlueter

> If you have any questions, please email me.
>
> Thank you.
>
> Takehiro.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Jon Schlueter jschluet at redhat.com
IRC: jschlueter/yazug
Senior Software Engineer - OpenStack Productization Engineer

From prometheanfire at gentoo.org  Wed Jan 24 07:29:47 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Wed, 24 Jan 2018 01:29:47 -0600
Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared
In-Reply-To: <20180123072350.2jby5zoeeyzaryv5@gentoo.org>
References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org>
Message-ID: <20180124072947.u4dv674dv6bcczb6@gentoo.org>

On 18-01-23 01:23:50, Matthew Thode wrote:
> Requirements is freezing Friday at 23:59:59 UTC so any last global-requirements updates that need to get in need to get in now.
>
> I'm afraid that my condition has left me cold to your pleas of mercy.

Just your daily reminder that the freeze will happen in about 3 days' time. Reviews seem to be winding down for requirements now (which is a good sign this release will be chilled to perfection).

--
Matthew Thode (prometheanfire)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From debayan.ray at gmail.com  Wed Jan 24 07:36:59 2018
From: debayan.ray at gmail.com (Debayan Ray)
Date: Wed, 24 Jan 2018 13:06:59 +0530
Subject: [openstack-dev] [ironic] FFE - Implementation for UEFI iSCSI boot for iLO drivers
Message-ID: 

Requesting FFE for firmware-based iSCSI boot from volume support in iLO
-----------------------------------------------------------------------

# Pros
------
With the patches up for review [0] we have implemented firmware-based iSCSI boot from volume for iLO hardware. This functionality will allow users to take advantage of iLO BMC based boot from volume, as UEFI firmware 1.40 or higher in HPE Gen9 and Gen10 ProLiant hardware supports booting from an iSCSI volume. The change adds the feature to the iLO drivers' feature set and does not have any impact on the existing functionality of the iLO driver.

# Cons
------
None

# Risks
-------
None

# Reason for delay
-----------------
This feature required a new version of proliantutils (2.5.0), which was released last week

# Core reviewers
----------------
Julia Kreger, Shivanand Tendulker

[0] https://review.openstack.org/#/c/468288/

Thanks & Regards,
Debayan Ray (on behalf of Paresh Sao)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhipengh512 at gmail.com  Wed Jan 24 07:41:46 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Wed, 24 Jan 2018 15:41:46 +0800
Subject: [openstack-dev] [acceleration]Cyborg Team Weekly Meeting 2018.01.24
Message-ID: 

Hi all,

Meeting as usual at #openstack-cyborg starting UTC1500.
We will tidy up some final loose ends for the release, as well as discuss the Rocky PTG meeting agenda in Dublin.

https://etherpad.openstack.org/p/cyborg-ptg-rocky

-- 
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

From lyarwood at redhat.com  Wed Jan 24 08:58:26 2018
From: lyarwood at redhat.com (Lee Yarwood)
Date: Wed, 24 Jan 2018 08:58:26 +0000
Subject: [openstack-dev] [nova] Native QEMU LUKS decryption review overview ahead of FF
In-Reply-To:
References: <20180122142212.2fqjvquljpji6kph@lyarwood.usersys.redhat.com>
	<20180123134449.vn2jqwzdhibm72to@lyarwood.usersys.redhat.com>
Message-ID: <20180124085826.xl2omx6m4khgyrbg@lyarwood.usersys.redhat.com>

On 23-01-18 16:52:30, Corey Bryant wrote:
> On Tue, Jan 23, 2018 at 8:44 AM, Lee Yarwood wrote:
>> grenade-dsvm-neutron-multinode-live-migration is currently failing due
>> to our use of the Ocata UCA on stable/pike, leading to the following
>> issue with the libvirt 2.5.0 build it provides:
>>
>> libvirt 2.5.0-3ubuntu5.6~cloud0 appears to be compiled without gnutls
>> https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1744758
>
> Hey Lee,
>
> We have a new version of libvirt in ocata-proposed now that should fix your
> issue and is ready for testing. Thanks for your work on this and for
> opening the bug.

Thanks Corey, as reported in the bug this WORKSFORME. Thanks for the quick turnaround with this, it's really appreciated!

>> I've cherry-picked the following devstack change back to stable/pike and
>> pulled it into the test change above for Nova, hopefully working around
>> these failures:
>>
>> Update to using pike cloud-archive
>> https://review.openstack.org/#/c/536798/

FWIW I still think we should enable the Pike UCA for our stable/pike jobs. As noted in the stable review, testing the Ocata UCA with stable/pike strikes me as pointless, as no one will ever use that combination of UCA and stable/pike bits in a real-world deployment.

Cheers,

Lee
-- 
Lee Yarwood
A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76

From tommylikehu at gmail.com  Wed Jan 24 08:58:29 2018
From: tommylikehu at gmail.com (TommyLike Hu)
Date: Wed, 24 Jan 2018 08:58:29 +0000
Subject: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url
In-Reply-To: <0957CD8F4B55C0418161614FEC580D6B281983DD@YYZEML701-CHM.china.huawei.com>
References: <0957CD8F4B55C0418161614FEC580D6B281983DD@YYZEML701-CHM.china.huawei.com>
Message-ID:

Thanks Hongbin, these links are useful for me!

On Sat, Jan 20, 2018 at 3:20 AM, Hongbin Lu wrote:
> I remember there were several discussions about action APIs in the past.
> This is one discussion I can find:
> http://lists.openstack.org/pipermail/openstack-dev/2016-December/109136.html
> An obvious alternative is to expose each action with an independent API
> endpoint.
> For example:
>
> * POST /servers/{server_id}/start: Start a server
> * POST /servers/{server_id}/stop: Stop a server
> * POST /servers/{server_id}/reboot: Reboot a server
> * POST /servers/{server_id}/pause: Pause a server
>
> Several people pointed out the pros and cons of either approach and other
> alternatives [1] [2] [3]. Eventually, we (the OpenStack Zun team) adopted
> the alternative approach [4] above, and it works very well from my
> perspective. However, I understand that there is no consensus on this
> approach within the OpenStack community.
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109178.html
> [2] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109208.html
> [3] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109248.html
> [4] https://developer.openstack.org/api-ref/application-container/#manage-containers
>
> Best regards,
> Hongbin
>
> From: TommyLike Hu [mailto:tommylikehu at gmail.com]
> Sent: January-18-18 5:07 AM
> To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url
>
> Hey all,
>
> Recently we found an issue related to our OpenStack action APIs. We
> usually expose our OpenStack APIs by registering them with our API gateway
> (for instance Kong [1]), but this becomes very difficult for action APIs.
> We cannot register and control them separately, because they all share the
> same request URL, which is used as the identity in the gateway service --
> not to mention rate limiting and other advanced gateway features. Take a
> look at the basic resources in OpenStack:
>
> 1. Server: "/servers/{server_id}/action" -- 35+ APIs are included.
> 2. Volume: "/volumes/{volume_id}/action" -- 14 APIs are included.
> 3. Other resources
>
> We have tried to register different interfaces with the same upstream URL,
> such as:
>
> api gateway: /version/resource_one/action/action1 => upstream: /version/resource_one/action
> api gateway: /version/resource_one/action/action2 => upstream: /version/resource_one/action
>
> But this is not secure enough, because we can pass action2 in the request
> body while invoking /action/action1. Also, reading the full body for
> routing is not supported by most API gateways (maybe via plugins) and
> would have a performance impact when proxying. So my question is: do we
> have any solution or suggestion for this case? Could we support specifying
> the action name both in the request body and in the URL, such as:
>
> URL:  /volumes/{volume_id}/action
> BODY: {'extend': {}}
>
> and:
>
> URL:  /volumes/{volume_id}/action/extend
> BODY: {'extend': {}}
>
> Thanks
> Tommy
>
> [1]: https://github.com/Kong/kong
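To make the routing problem above concrete, here is a minimal sketch of the server-side validation the URL-named form would need. This is hypothetical illustration code, not from cinder or nova; the resolve_action helper and the ALLOWED_ACTIONS set are invented for the example:

    # Validate a volume action named in both the URL and the request body,
    # e.g. POST /volumes/{volume_id}/action/extend with body {"extend": {...}}
    ALLOWED_ACTIONS = {'extend', 'attach', 'detach'}

    def resolve_action(url_action, body):
        """Return the action to execute, or raise ValueError on a bad request.

        url_action: the trailing URL segment (e.g. 'extend'), or None for
                    the legacy /volumes/{volume_id}/action form.
        body:       the decoded JSON body, e.g. {'extend': {'new_size': 2}}.
        """
        if len(body) != 1:
            raise ValueError('action body must contain exactly one key')
        body_action = next(iter(body))
        if body_action not in ALLOWED_ACTIONS:
            raise ValueError('unknown action: %s' % body_action)
        # Without this check a caller could register /action/extend at the
        # gateway and still smuggle a different action in the body -- the
        # exact hole described above.
        if url_action is not None and url_action != body_action:
            raise ValueError('URL action %s does not match body action %s'
                             % (url_action, body_action))
        return body_action

    # Legacy form: the action is named only in the body.
    assert resolve_action(None, {'extend': {'new_size': 2}}) == 'extend'
    # Proposed form: the action in the URL is validated against the body.
    assert resolve_action('extend', {'extend': {'new_size': 2}}) == 'extend'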
From lyarwood at redhat.com  Wed Jan 24 09:09:20 2018
From: lyarwood at redhat.com (Lee Yarwood)
Date: Wed, 24 Jan 2018 09:09:20 +0000
Subject: [openstack-dev] [nova] Native QEMU LUKS decryption review overview ahead of FF
In-Reply-To: <20180123134449.vn2jqwzdhibm72to@lyarwood.usersys.redhat.com>
References: <20180122142212.2fqjvquljpji6kph@lyarwood.usersys.redhat.com>
	<20180123134449.vn2jqwzdhibm72to@lyarwood.usersys.redhat.com>
Message-ID: <20180124090920.cjn2jlpiyijg7pga@lyarwood.usersys.redhat.com>

On 23-01-18 13:44:49, Lee Yarwood wrote:
> A brief progress update in-line below.
>
> On 22-01-18 14:22:12, Lee Yarwood wrote:
>> Hello,
>>
>> With M3 and FF rapidly approaching this week, I wanted to post a brief
>> overview of the QEMU native LUKS series.
>>
>> The full series is available on the following topic; I'll go into more
>> detail on each of the changes below:
>>
>> https://review.openstack.org/#/q/topic:bp/libvirt-qemu-native-luks+status:open
>>
>> libvirt: Collocate encryptor and volume driver calls
>> https://review.openstack.org/#/c/460243/ (Missing final +2 and +W)
>>
>> This refactor of the libvirt driver connect and disconnect volume code
>> has the added benefit of also correcting a number of bugs around the
>> attaching and detaching of os-brick encryptors. IMHO this would be
>> useful in Queens even if the rest of the series doesn't land.
>>
>> libvirt: Introduce disk encryption config classes
>> https://review.openstack.org/#/c/464008/ (Missing final +2 and +W)
>>
>> This is the most straightforward change of the series and simply
>> introduces the required config classes to wire up native LUKS decryption
>> within the domain XML of an instance. Hopefully nothing controversial.
>
> Both of these have landed, my thanks to jaypipes for his reviews!
>
>> libvirt: QEMU native LUKS decryption for encrypted volumes
>> https://review.openstack.org/#/c/523958/ (Missing both +2s and +W)
>>
>> This change carries the bulk of the implementation, wiring up encrypted
>> volumes during their initial attachment. The commit message has a
>> detailed rundown of the various upgrade and LM corner cases we attempt
>> to handle here, such as LM from a P to Q compute, detaching a P-attached
>> encrypted volume after upgrading to Q, etc.
>
> Thanks to melwitt and mdbooth for your reviews! I've respun to address
> the various nits and typos pointed out in this change. Ready and waiting
> to respin again if any others crop up.

My thanks again to melwitt for another review on this final patch. I'm going to be offline for most of Thursday ahead of the FF deadline, so if any non-RH core reviewers are able to look at this today, I'll do my best to address any nits, concerns, facepalms etc. ASAP.

Cheers,

Lee
-- 
Lee Yarwood
A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76

From akekane at redhat.com  Wed Jan 24 10:08:54 2018
From: akekane at redhat.com (Abhishek Kekane)
Date: Wed, 24 Jan 2018 15:38:54 +0530
Subject: [openstack-dev] [glance] py27 gate situation
In-Reply-To:
References:
Message-ID:

Confirmed that there is no issue in loading the middleware; the problem lies somewhere else. Still trying to figure it out.

Thanks & Regards,

Abhishek

On Tue, Jan 23, 2018 at 4:00 PM, Abhishek Kekane wrote:
> I have tried to debug this in my environment but so far have not been able
> to find the reason.
> While starting the api service from the functional tests, it fails to load
> the middleware 'rootapp' from api-paste.ini.
>
> What I have done is add the lines below to tox.ini to enable debugging of
> the functional tests:
>
> [testenv:debug-functional]
> basepython = python2.7
> setenv =
>   TEST_PATH = ./glance/tests/functional
> commands = oslo_debug_helper {posargs}
>
> and added a pdb breakpoint at
> https://github.com/openstack/glance/blob/master/glance/tests/functional/__init__.py#L770
>
> While executing each functional test, it creates a temp directory at
> /tmp/tmp*, where it stores the config, paste.ini and other required files.
>
> So far no luck finding the exact cause.
>
> Thanks & Regards,
>
> Abhishek
>
> On Tue, Jan 23, 2018 at 5:37 AM, Brian Rosmaita
> <rosmaita.fossdev at gmail.com> wrote:
>> Looks like something changed in a distro dependency over the weekend
>> and the glance py27 gate is failing.
>>
>> I did a dist-upgrade in a new Ubuntu 16.04.3 VM, and was able to
>> reproduce the failures locally. I'll continue looking, but it's EOD
>> where I am, so I wanted to make sure this info is available to the
>> people whose day is about to begin. The failures are confined to the
>> py27 functional tests. Unit tests pass, as do all the py35 tests.
>>
>> The requirements team has merged a change making the cross-glance-py27
>> job non-voting:
>> https://review.openstack.org/#/c/536082/
>> Thus, this issue isn't holding up requirements changes, but it's still
>> pretty urgent for us to figure out because I don't like us running
>> around naked with respect to requirements changes that could affect
>> glance running under py27.
>>
>> Here's what I think we should do:
>>
>> (1) Sean has had a patch up for a while separating out the unit tests
>> from the functional tests. I think it's a good idea. If you are
>> aware of a reason why they should NOT be separated, please comment on
>> the patch:
>> https://review.openstack.org/#/c/474816/
>> I'd like to merge this soon so we can at least restore py27 unit tests
>> to the requirements gate. We can always revert if it turns out that
>> there is a really good reason for not separating out the functional
>> tests.
>>
>> (2) I've got a patch up that depends on Sean's patch and restores the
>> functional test gate jobs to the glance .zuul.yaml file (though it
>> makes the py27 functional tests non-voting):
>> https://review.openstack.org/#/c/536630/
>>
>> (3) Continue to work on https://bugs.launchpad.net/glance/+bug/1744824
>> to figure out why the py27 functional tests are failing. As far as I
>> can tell, it looks like a distro package issue.
>>
>> thanks,
>> brian

From pkovar at redhat.com  Wed Jan 24 13:05:11 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Wed, 24 Jan 2018 14:05:11 +0100
Subject: [openstack-dev] [docs] Documentation meeting today
Message-ID: <20180124140511.33ea7d7916fa3a67432e5428@redhat.com>

Hi all,

The docs meeting will continue today at 16:00 UTC in #openstack-doc, as scheduled.
For more details, see the meeting page:

https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting

Cheers,
pk

From daniel.mellado.es at ieee.org  Wed Jan 24 13:14:40 2018
From: daniel.mellado.es at ieee.org (Daniel Mellado)
Date: Wed, 24 Jan 2018 14:14:40 +0100
Subject: [openstack-dev] [devstack] Broken repo on devstack-plugin-container for Fedora
Message-ID:

Hi everyone,

Since today, when I try to install the devstack-plugin-container plugin on Fedora, it complains here [1] about not being able to sync the cache for the repo, with the following error [2].

This is affecting me on Fedora 26+ from different network locations, so I was wondering if someone from SUSE could have a look (it did work for Andreas on openSUSE... thanks in advance!)

[1] https://github.com/openstack/devstack-plugin-container/blob/master/devstack/lib/docker#L164-L170
[2] http://paste.openstack.org/show/652041/

From fungi at yuggoth.org  Wed Jan 24 14:25:26 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 24 Jan 2018 14:25:26 +0000
Subject: [openstack-dev] [horizon][packaging] django-openstack-auth retirement
In-Reply-To:
References: <20180122113012.xe42fi24v3ljm7rz@yuggoth.org>
Message-ID: <20180124142526.cczgg2kgibb7k4rj@yuggoth.org>

On 2018-01-23 23:19:59 +0900 (+0900), Akihiro Motoki wrote:
[...]
> Horizon usually does not publish its releases to PyPI, so I think what
> we can do is to document it.
>
> P.S.
> The only exceptions on PyPI for horizon are the 12.0.2 and 2012.2 releases.
> 12.0.2 was released last week, but I don't know why it is available on
> PyPI. In deliverables/pike/horizon.yaml in the openstack/releases
> repo, we don't have "include-pypi-link: yes".
[...]

Right, my "if we were already" was a reference to the work under discussion to eventually get all OpenStack services publishing wheels and sdists on PyPI.
There are still some logistical issues to be ironed out (e.g., the fact that we don't control the "keystone" entry there), so I doubt it'll happen before Queens releases, but we might have it going sometime in the Rocky cycle. I was mostly just lamenting that it's not something we can take advantage of for the current DOA/Horizon transition.
-- 
Jeremy Stanley

From mordred at inaugust.com  Wed Jan 24 14:47:30 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 24 Jan 2018 08:47:30 -0600
Subject: [openstack-dev] [horizon][packaging] django-openstack-auth retirement
In-Reply-To: <20180124142526.cczgg2kgibb7k4rj@yuggoth.org>
References: <20180122113012.xe42fi24v3ljm7rz@yuggoth.org>
	<20180124142526.cczgg2kgibb7k4rj@yuggoth.org>
Message-ID: <698eddb8-4136-b3e4-4bf7-d88aef7d2f89@inaugust.com>

On 01/24/2018 08:25 AM, Jeremy Stanley wrote:
> On 2018-01-23 23:19:59 +0900 (+0900), Akihiro Motoki wrote:
> [...]
>> Horizon usually does not publish its releases to PyPI, so I think what
>> we can do is to document it.
>>
>> P.S.
>> The only exceptions on PyPI for horizon are the 12.0.2 and 2012.2 releases.
>> 12.0.2 was released last week, but I don't know why it is available on
>> PyPI. In deliverables/pike/horizon.yaml in the openstack/releases
>> repo, we don't have "include-pypi-link: yes".
> [...]
>
> Right, my "if we were already" was a reference to the work under
> discussion to eventually get all OpenStack services publishing
> wheels and sdists on PyPI. There are still some logistical issues to
> be ironed out (e.g., the fact that we don't control the "keystone"
> entry there), so I doubt it'll happen before Queens releases, but we
> might have it going sometime in the Rocky cycle. I was mostly just
> lamenting that it's not something we can take advantage of for the
> current DOA/Horizon transition.

Horizon and neutron were updated to start publishing to PyPI already.

https://review.openstack.org/#/c/531822/

This is so that we can start working on unwinding the neutron- and horizon-specific versions of jobs for neutron and horizon plugins.

Monty

From rbowen at redhat.com  Wed Jan 24 14:55:55 2018
From: rbowen at redhat.com (Rich Bowen)
Date: Wed, 24 Jan 2018 09:55:55 -0500
Subject: [openstack-dev] Help still needed at FOSDEM!
Message-ID:

We have a table at FOSDEM, and we desperately need people to sign up to staff it.

https://etherpad.openstack.org/p/fosdem-2018

If you have an hour free at FOSDEM, please join us. Ideally, we need 2 people per slot, and at the moment we don't even have 1 for most of the slots.

Thanks!

-- 
Rich Bowen - rbowen at redhat.com
@RDOcommunity // @CentOSProject // @rbowen

From kumarmn at us.ibm.com  Wed Jan 24 16:23:49 2018
From: kumarmn at us.ibm.com (Manoj Kumar)
Date: Wed, 24 Jan 2018 10:23:49 -0600
Subject: [openstack-dev] [trove] Not running for Trove PTL
In-Reply-To: <6e8813b1-c05b-e729-75dd-7c9863fd0730@catalyst.net.nz>
References: <6e8813b1-c05b-e729-75dd-7c9863fd0730@catalyst.net.nz>
Message-ID:

I have had the good fortune to be the PTL for Trove for the last cycle. During this period, I was able to oversee a resurgence of community participation in Trove. As the project continues to evolve, it could use some new leadership. I wanted to clear the path for others to run, so I do not intend to nominate myself for the Rocky cycle.

Cheers,
- Manoj

From zh.f at outlook.com  Wed Jan 24 16:30:50 2018
From: zh.f at outlook.com (Zhang Fan)
Date: Wed, 24 Jan 2018 16:30:50 +0000
Subject: [openstack-dev] [trove] Not running for Trove PTL
In-Reply-To:
References: <6e8813b1-c05b-e729-75dd-7c9863fd0730@catalyst.net.nz>
Message-ID:

What a pity. Thanks so much for your remarkable work on Trove in the last cycle; it has been my pleasure to work with you. Wish you the best!

From Fan's plastic iPhone

On 25 Jan 2018, at 00:24, Manoj Kumar wrote:
> I have had the good fortune to be the PTL for Trove for the last cycle.
> During this period, I was able to oversee a resurgence of community
> participation in Trove. As the project continues to evolve, it could use
> some new leadership. I wanted to clear the path for others to run, so I
> do not intend to nominate myself for the Rocky cycle.
>
> Cheers,
> - Manoj
From corvus at inaugust.com  Wed Jan 24 16:33:58 2018
From: corvus at inaugust.com (James E. Blair)
Date: Wed, 24 Jan 2018 08:33:58 -0800
Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax
Message-ID: <87efmfz05l.fsf@meyer.lemoncheese.net>

Hi,

We recently introduced a new URL-based syntax for Depends-On: footers in commit messages:

  Depends-On: https://review.openstack.org/535851

The old syntax will continue to work for a while, but please begin using the new syntax on new changes.

Why are we changing this? Zuul has grown the ability to interact with multiple backend systems (Gerrit, GitHub, and plain Git so far), and we have extended the cross-repo-dependency feature to support multiple systems. But Gerrit is the only one that uses the change-id syntax. URLs, on the other hand, are universal.

That means you can write, as in https://review.openstack.org/535541, a commit message such as:

  Depends-On: https://github.com/ikalnytskyi/sphinxcontrib-openapi/pull/17

Or in a GitHub pull request like https://github.com/ansible/ansible/pull/20974, you can write:

  Depends-On: https://review.openstack.org/536159

But we're getting a bit ahead of ourselves here -- we're just getting started with Gerrit <-> GitHub dependencies and we haven't worked everything out yet. While you can Depends-On any GitHub URL, you can't add any project to required-projects yet, and we need to establish a process to actually report on GitHub projects. But cool things are coming.

We will continue to support the Gerrit-specific syntax for a while, probably for several months at least, so you don't need to update the commit messages of changes that have accumulated precious +2s. But do please start using the new syntax now, so that we can age the old syntax out.

There are a few differences in using the new syntax:

* Rather than copying the change-id from a commit message, you'll need
  to get the URL from Gerrit. That means the dependent change already
  needs to be uploaded. In some complex situations, this may mean that
  you need to amend an existing commit message to add in the URL later.

  If you're uploading both changes, Gerrit will output the URL when you
  run git-review, and you can copy it from there. If you are looking at
  an existing change in Gerrit, you can copy the URL from the permalink
  at the top left of the page. Where it says "Change 535855 - Needs
  ..." the change number itself is the permalink of the change.

* The new syntax points to a specific change on a specific branch. This
  means if you depend on a change to multiple branches, or changes to
  multiple projects, you need to list each URL. The old syntax looks for
  all changes with that ID, and depends on all of them. This may mean
  some changes need multiple Depends-On footers; however, it also means
  that we can express dependencies in a more fine-grained manner.

Please start using the new syntax, and let us know in #openstack-infra if you have any problems. As new features related to GitHub support become available, we'll announce them here.
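To make the multiple-footer case above concrete, a commit message for a hypothetical change depending on both a GitHub pull request and a Gerrit change would carry one footer per dependency. The two URLs below are the examples from this note; the subject line and Change-Id are fabricated for illustration:

    Add rendered API docs support

    This needs both the upstream sphinxcontrib-openapi fix and the
    corresponding OpenStack change.

    Depends-On: https://github.com/ikalnytskyi/sphinxcontrib-openapi/pull/17
    Depends-On: https://review.openstack.org/536159
    Change-Id: I0123456789abcdef0123456789abcdef01234567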
Thanks,

Jim

From breton at cynicmansion.ru  Wed Jan 24 16:41:22 2018
From: breton at cynicmansion.ru (Boris Bobrov)
Date: Wed, 24 Jan 2018 17:41:22 +0100
Subject: [openstack-dev] Help still needed at FOSDEM!
In-Reply-To:
References:
Message-ID: <1ddc3536-7176-a7ea-cad3-5debdd9cee71@cynicmansion.ru>

Hi,

What is expected from people at the booth?

On 24.01.2018 15:55, Rich Bowen wrote:
> We have a table at FOSDEM, and we desperately need people to sign up to
> staff it.
>
> https://etherpad.openstack.org/p/fosdem-2018
>
> If you have an hour free at FOSDEM, please join us. Ideally, we need 2
> people per slot, and at the moment we don't even have 1 for most of the
> slots.
>
> Thanks!

From pkovar at redhat.com  Wed Jan 24 16:59:38 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Wed, 24 Jan 2018 17:59:38 +0100
Subject: [openstack-dev] [docs] Documentation meeting minutes for 2018-01-24
In-Reply-To: <20180124140511.33ea7d7916fa3a67432e5428@redhat.com>
References: <20180124140511.33ea7d7916fa3a67432e5428@redhat.com>
Message-ID: <20180124175938.f9236e005d0d349ac7d1cc1a@redhat.com>

=======================
#openstack-doc: docteam
=======================

Meeting started by pkovar at 16:00:51 UTC. The full logs are available at
http://eavesdrop.openstack.org/meetings/docteam/2018/docteam.2018-01-24-16.00.log.html .

Meeting summary
---------------
* roll call (pkovar, 16:01:02)
* Rocky PTG (pkovar, 16:05:52)
  * LINK: https://www.openstack.org/ptg/ (pkovar, 16:05:57)
  * Planning etherpad for docs+i18n created (pkovar, 16:06:02)
  * LINK: https://etherpad.openstack.org/p/docs-i18n-ptg-rocky (pkovar, 16:06:09)
  * Sign up and tell us your preference: parcel time into small chunks, or
    have a full-day focus on one team agenda? (pkovar, 16:06:13)
  * as dhellmann pointed out, adding badges will require further work in
    openstackdocstheme (pkovar, 16:21:48)
  * ACTION: ensure https://review.openstack.org/#/c/530142/ is on the ptg
    agenda (pkovar, 16:22:50)
  * action taken, resolved (pkovar, 16:25:43)
* Bug Triage Team (pkovar, 16:35:06)
  * LINK: https://wiki.openstack.org/wiki/Documentation/SpecialityTeams
    (pkovar, 16:35:12)
  * Updated docs for documentation bug triaging (pkovar, 16:35:24)
  * LINK: https://docs.openstack.org/doc-contrib-guide/doc-bugs.html
    (pkovar, 16:35:28)
* Open discussion (pkovar, 16:41:43)
  * chason added a topic idea for the PTG: project bugs with a "doc" tag
    will notify openstack-manuals (pkovar, 16:57:22)

Meeting ended at 16:58:01 UTC.

Action items, by person
-----------------------
* openstack
  * ensure https://review.openstack.org/#/c/530142/ is on the ptg agenda

People present (lines said)
---------------------------
* pkovar (88)
* asettle (20)
* chason (16)
* dhellmann (9)
* openstack (3)
* stephenfin (1)

Generated by `MeetBot`_ 0.1.4

From sean.mcginnis at gmx.com  Wed Jan 24 17:41:57 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 24 Jan 2018 11:41:57 -0600
Subject: [openstack-dev] [PTL][release] Reminders for Queens-3 January 25
Message-ID: <20180124174156.GA12630@sm-xps>

Hey everyone,

The queens-3 milestone is tomorrow. This is a big week for deadlines, as this is also official Feature Freeze, the final release date for client libraries, the start of soft string freeze, and the beginning of the requirements freeze. Please make sure all release requests are submitted some time before the end of tomorrow.

As everyone prepares to do these release patches, I just wanted to point out a few things to keep in mind.

Client Branching
================

Since this is the freeze for all clients, please include the creation of the stable/queens branch with these requests. As a reminder, the way this works is to add a "branches" section to the deliverable yaml file.
So if you are doing a 1.0.0 release like so:

  releases:
    - projects:
        - hash: 90f3ed251084952b43b89a172895a005182e6970
          repo: openstack/example
      version: 1.0.0

You would then include the following to create the branch from that point:

  branches:
    - name: stable/queens
      location: 1.0.0

Release Notes Link
==================

If your repo includes release notes and you would like them to show up on the releases page [1], the deliverables yaml file will need to include the URL for those notes:

  release-notes: https://docs.openstack.org/releasenotes/example/unreleased.html

Note however that this is currently "unreleased" in the URL. You can do this now if you want to get current release notes out there, but it may be better to wait until RC, when unreleased.html can be changed to queens.html.

[1] https://releases.openstack.org/queens/index.html

Cycle Highlights
================

We will want these by RC, but you can start adding cycle highlights for your team already if you wish. If you already know of some highlights from queens that you will want published, those can be added at any time and will get included in the generated output.

As a reminder, details of the new release highlights mechanism can be found here:
http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html

New_Release Tool
================

And one final reminder in case it's useful to you - we have a tool in the releases repo that helps generate the deliverable yaml file:

http://git.openstack.org/cgit/openstack/releases/tree/tools/new_release.sh

You can run this by calling:

  tools/new_release.sh queens python-cinderclient feature --stable-branch

That should update the deliverable file with an incremented release number based on the type ("feature" in this case, so the Y in X.Y.Z) and include the stable branch creation pieces.

Any questions about any of this, please feel free to ping us in the #openstack-release channel.

Thanks!

-- 
Sean McGinnis (smcginnis)

From ruby.loo at intel.com  Wed Jan 24 18:22:32 2018
From: ruby.loo at intel.com (Loo, Ruby)
Date: Wed, 24 Jan 2018 18:22:32 +0000
Subject: [openstack-dev] [ironic] Remove in-tree policy and config?
In-Reply-To:
References:
Message-ID: <00E8F5DC-B478-4816-9D2C-FD061B523FC1@intel.com>

Thanks for bringing it up, John. I totally forgot about that. Not only are the samples in the docs, there is a link in those docs so that the sample can be downloaded as a file. Also, for a patch that modifies the configs, you can see the rendered file(s) via the generated docs. PROFIT! :)

+1.

--ruby

Now to deal with the "" in our config file... grumble, grumble...

From: Jim Rollenhagen
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, January 22, 2018 at 3:55 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [ironic] Remove in-tree policy and config?

Huge +1, I didn't realize this was in docs now.
We can finally stop doing it manually \o/

// jim

On Mon, Jan 22, 2018 at 7:45 AM, John Garbutt wrote:
> Hi,
>
> While I was looking at the traits work, I noticed we still have policy and
> config in tree for ironic and ironic inspector:
>
> http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/policy.json.sample
> http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/ironic.conf.sample
> http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/policy.json
>
> And in a similar way:
>
> http://git.openstack.org/cgit/openstack/ironic-inspector/tree/policy.yaml.sample
> http://git.openstack.org/cgit/openstack/ironic-inspector/tree/example.conf
>
> There is an argument that says we shouldn't force operators to build a full
> environment to generate these, but this has been somewhat superseded by us
> having good docs:
>
> https://docs.openstack.org/ironic/latest/configuration/sample-config.html
> https://docs.openstack.org/ironic/latest/configuration/sample-policy.html
> https://docs.openstack.org/ironic-inspector/latest/configuration/sample-config.html
> https://docs.openstack.org/ironic-inspector/latest/configuration/sample-policy.html
>
> It could look something like this (but with the tests working...):
> https://review.openstack.org/#/c/536349
>
> What do you all think?
>
> Thanks,
> johnthetubaguy

From ruby.loo at intel.com  Wed Jan 24 18:23:43 2018
From: ruby.loo at intel.com (Loo, Ruby)
Date: Wed, 24 Jan 2018 18:23:43 +0000
Subject: [openstack-dev] [ironic] FFE request for node traits
In-Reply-To: <325b156c-48c0-9b8d-6560-81867c2c4835@redhat.com>
References: <325b156c-48c0-9b8d-6560-81867c2c4835@redhat.com>
Message-ID: <08E10BE8-51C3-400B-B7CD-1B47AAC467D0@intel.com>

+1 :)

On 2018-01-23, 5:04 AM, "Dmitry Tantsur" wrote:

+1 on keeping moving forward with it. That's important for future nova work, as well as our deploy steps work.

On 01/22/2018 10:11 PM, Mark Goddard wrote:
> The node traits feature [1] is an essential priority for ironic in Queens, and
> is an important step in the continuing evolution of scheduling enabled by the
> placement API. Traits will allow us to move away from capability-based
> scheduling. Capabilities have several limitations for scheduling, including
> depending on filters in nova-scheduler rather than allowing placement to select
> matching hosts. Several upcoming features depend on traits [2].
>
> Landing node traits late in the cycle will lead to less time being available for
> testing, with a risk that the feature is released with defects. There are changes
> at most major levels in the code except the drivers, but these are for the most
> part fairly isolated from existing code. The current issues with the grenade CI
> job mean that upgrade code paths are not being exercised frequently, and could
> lead to additional test/bug fix load on the team later in the cycle. The node
> traits code patches are all in review [3], and are now generally getting
> positive reviews or minor negative feedback.
>
> rloo and TheJulia have kindly offered to review during the FFE window.
> > [1] > http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/node-traits.html > [2] > https://review.openstack.org/#/c/504952/7/specs/approved/config-template-traits.rst > [3] https://review.openstack.org/#/q/topic:bug/1722194+(status:open) > > Thanks, > Mark (mgoddard) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ruby.loo at intel.com Wed Jan 24 18:24:09 2018 From: ruby.loo at intel.com (Loo, Ruby) Date: Wed, 24 Jan 2018 18:24:09 +0000 Subject: [openstack-dev] [ironic] FFE request for node rescue feature In-Reply-To: <2cc426ca-4133-c272-034e-fd2151f98b6b@redhat.com> References: <2cc426ca-4133-c272-034e-fd2151f98b6b@redhat.com> Message-ID: <473BB559-38F4-4399-B76B-3A46A5A03006@intel.com> +1 (and thx Dmitry and Julia for reviewing!) --ruby On 2018-01-23, 5:15 AM, "Dmitry Tantsur" wrote: I'm +1 on this, because the feature has been proposed for a while (has changed the contributor group at least once) and is needed for feature parity with virtual machines in nova. On 01/23/2018 06:56 AM, Shivanand Tendulker wrote: > Hi > > The rescue feature [1] is an high priority for ironic in Queens. The spec for > the same was merged in Newton. This feature is necessary for users that lose > regular access to their machine (e.g. lost passwords). > > Landing node rescue feature late in the cycle will lead to less time being > available for testing, with a risk that the feature being released with defects. > The code changes are fairly isolated from existing code to ensure it does not > cause any regression. The Ironic side rescue code patches are all in review [2], > and are now are getting positive reviews or minor negative feedback. > > dtantsur and TheJulia have kindly agreed to review the same during the FFE window. 
> [1] https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/implement-rescue-mode.html
> [2] https://review.openstack.org/#/q/topic:bug/1526449+(status:open+AND+project:openstack/ironic)
>
> Thanks and Regards,
> Shiv (stendulker)

From ruby.loo at intel.com  Wed Jan 24 18:26:46 2018
From: ruby.loo at intel.com (Loo, Ruby)
Date: Wed, 24 Jan 2018 18:26:46 +0000
Subject: [openstack-dev] [ironic] FFE - classic drivers deprecation
In-Reply-To:
References:
Message-ID: <7370690A-0D98-4C96-BED2-2DB967A4B9E6@intel.com>

+1 :)

I'm also +1 on amending our FFE rules so that the PTL can get an FFE on one thing of their choosing, regardless of anyone disagreeing, as long as they have two cores that are willing to review. As a small thank-you for being PTL! :D (I'm serious even though I just thought of this.)

--ruby

On 2018-01-23, 5:23 AM, "Dmitry Tantsur" wrote:

Hi all,

I'm writing to request an FFE for the classic drivers deprecation work [1][2]. This is a part of the driver composition reform [3] - the effort started in Ocata to revamp bare metal drivers.

The following changes are in scope of this FFE:
1. Provide an automatic migration to hardware types as part of 'ironic-dbsync online_data_migrations'
2. Update the CI to use hardware types
3. Issue a deprecation warning when loading classic drivers, and deprecate the enabled_drivers option.

Finishing it in Queens will allow us to stick to our schedule (outlined in [1]) to remove classic drivers in Rocky. Keeping two methods of loading drivers is a maintenance burden. Even worse, two sets of mostly equivalent drivers confuse users, and the confusion will increase as we introduce features (like rescue) that are only available for nodes using the new-style drivers.

The downside of this work is that it introduces a non-trivial data migration close to the end of the cycle. Thus, it is designed [1][2] not to fail if the migration cannot fully succeed due to environmental reasons.

rloo and stendulker were so kind as to agree to review this work during the feature freeze window, if it gets an exception.

Dmitry

[1] http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html
[2] https://review.openstack.org/536298
[3] http://specs.openstack.org/openstack/ironic-specs/specs/7.0/driver-composition-reform.html

From ruby.loo at intel.com  Wed Jan 24 18:29:37 2018
From: ruby.loo at intel.com (Loo, Ruby)
Date: Wed, 24 Jan 2018 18:29:37 +0000
Subject: [openstack-dev] [ironic] FFE - Implementation for UEFI iSCSI boot for iLO drivers
In-Reply-To:
References:
Message-ID: <9D762B24-1760-4107-9D40-8D82C08EC189@intel.com>

+1. This seems minimal risk.
--ruby

From: Debayan Ray
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, January 24, 2018 at 2:36 AM
To: "openstack-dev at lists.openstack.org"
Subject: [openstack-dev] [ironic] FFE - Implementation for UEFI iSCSI boot for iLO drivers

Requesting FFE for firmware based iSCSI boot from volume support in iLO
-----------------------------------------------------------------------

# Pros
------
With the patches up for review [0] we have implemented firmware based iSCSI boot from volume for iLO hardware. This functionality will allow users to take advantage of iLO BMC based boot from volume, as UEFI firmware 1.40 or higher in HPE Gen9 and Gen10 ProLiant hardware supports booting from an iSCSI volume. The change adds the feature to the iLO drivers' feature set and does not have any impact on the existing functionality of the iLO drivers.

# Cons
------
None

# Risks
-------
None

# Reason of delay
-----------------
This feature required a new version of proliantutils (2.5.0), which was released last week.

# Core reviewers
----------------
Julia Kreger, Shivanand Tendulker

[0] https://review.openstack.org/#/c/468288/

Thanks & Regards,
Debayan Ray (on behalf of Paresh Sao)

From mordred at inaugust.com  Wed Jan 24 19:34:06 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 24 Jan 2018 13:34:06 -0600
Subject: [openstack-dev] [sdk][masakari][tricircle] Inclusion of SDK classes in openstacksdk tree
In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF56567198@DGGEML501-MBS.china.huawei.com>
References: <02a1cd17-46da-845d-4ea9-4eddf00dbded@inaugust.com>
	<5E7A3D1BF5FD014E86E5F971CF446EFF56567198@DGGEML501-MBS.china.huawei.com>
Message-ID: <1fe8b2ce-9cef-aeb5-8ac2-2e099e91f541@inaugust.com>

On 01/21/2018 07:08 PM, joehuang wrote:
> Hello, Monty,
>
> Tricircle did not develop any extra Neutron network resources; Tricircle
> provides plugins under Neutron and supports the same resources as Neutron
> does. To ease the management of multiple Neutron servers, one Tricircle
> Admin API is provided to manage the resource routings between local
> neutron(s) and central neutron. It is a standalone service, for cloud
> administrators only, so python-tricircleclient and a CLI were developed
> to support these administration functions.
>
> do you mean to put Tricircle Admin API sdk under openstacksdk tree?

Yes - if you want to, you are welcome to put them there.
In fact, I wrote a patch for masakariclient to register the classes with openstack.connection.Connection:

https://review.openstack.org/#/c/534883/

But I wanted to be clear that the code is **welcome** directly in tree, and that anyone working on an OpenStack service is welcome to put support code directly in the openstacksdk tree.

Monty

PS. Joe - you've also got some classes in the tricircle test suite extending the network service. I haven't followed all the things ... are the tricircle network extensions done as neutron plugins now? (It looks like they are) If so, why don't we figure out getting your network resources in-tree as well.

From major at mhtx.net  Wed Jan 24 20:03:20 2018
From: major at mhtx.net (Major Hayden)
Date: Wed, 24 Jan 2018 14:03:20 -0600
Subject: [openstack-dev] [openstack-ansible] Limiting pip wheel builds for OpenStack clients
Message-ID: <42ffc325-4162-5daa-b413-9c5d2cc60835@mhtx.net>

Hey there,

I was spelunking into the slow wheel build problems we've been seeing in CentOS, and I found that our wheel build process was spending 4-6 minutes building cassandra-driver. The wheel build process usually takes 8-12 minutes, so half the time is being spent there.

More digging revealed that cassandra-driver is a dependency of python-monascaclient, which is a dependency of heat. The requirements.txt for heat drags in all of the clients:

https://github.com/openstack/heat/blob/master/requirements.txt

We're already doing selective wheel builds and building only the wheels and venvs we need for the OpenStack services which are selected for deployment. Would it make sense to reduce the OpenStack client list for heat during the wheel/venv build? For example, if we're not deploying monasca, should we build/venv the python-monascaclient package (and its dependencies)?

I've opened a bug:

https://bugs.launchpad.net/openstack-ansible/+bug/1745215

-- 
Major Hayden

From pabelanger at redhat.com  Wed Jan 24 20:21:17 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Wed, 24 Jan 2018 15:21:17 -0500
Subject: [openstack-dev] [devstack] Broken repo on devstack-plugin-container for Fedora
In-Reply-To:
References:
Message-ID: <20180124202117.GA22369@localhost.localdomain>

On Wed, Jan 24, 2018 at 02:14:40PM +0100, Daniel Mellado wrote:
> Hi everyone,
>
> Since today, when I try to install the devstack-plugin-container plugin on
> Fedora, it complains here [1] about not being able to sync the cache for
> the repo, with the following error [2].
>
> This is affecting me on Fedora 26+ from different network locations, so I
> was wondering if someone from SUSE could have a look (it did work for
> Andreas on openSUSE... thanks in advance!)
> > [1] > https://github.com/openstack/devstack-plugin-container/blob/master/devstack/lib/docker#L164-L170 > > [2] http://paste.openstack.org/show/652041/ > We should consider mirroring this into our AFS mirror infrastrcuture to help remove the dependency on opensuse servers. Then each regional mirror has a copy and we don't always need to hit upstream. -Paul From shrewsbury.dave at gmail.com Wed Jan 24 20:25:09 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Wed, 24 Jan 2018 15:25:09 -0500 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: <87efmfz05l.fsf@meyer.lemoncheese.net> References: <87efmfz05l.fsf@meyer.lemoncheese.net> Message-ID: This is a (the?) killer feature. On Wed, Jan 24, 2018 at 11:33 AM, James E. Blair wrote: > Hi, > > We recently introduced a new URL-based syntax for Depends-On: footers > in commit messages: > > Depends-On: https://review.openstack.org/535851 > > The old syntax will continue to work for a while, but please begin using > the new syntax on new changes. > > Why are we changing this? Zuul has grown the ability to interact with > multiple backend systems (Gerrit, GitHub, and plain Git so far), and we > have extended the cross-repo-dependency feature to support multiple > systems. But Gerrit is the only one that uses the change-id syntax. > URLs, on the other hand, are universal. > > That means you can write, as in https://review.openstack.org/535541, a > commit message such as: > > Depends-On: https://github.com/ikalnytskyi/sphinxcontrib-openapi/pull/17 > > Or in a Github pull request like > https://github.com/ansible/ansible/pull/20974, you can write: > > Depends-On: https://review.openstack.org/536159 > > But we're getting a bit ahead of ourselves here -- we're just getting > started with Gerrit <-> GitHub dependencies and we haven't worked > everything out yet. While you can Depends-On any GitHub URL, you can't > add any project to required-projects yet, and we need to establish a > process to actually report on GitHub projects. But cool things are > coming. > > We will continue to support the Gerrit-specific syntax for a while, > probably for several months at least, so you don't need to update the > commit messages of changes that have accumulated precious +2s. But do > please start using the new syntax now, so that we can age the old syntax > out. > > There are a few differences in using the new syntax: > > * Rather than copying the change-id from a commit message, you'll need > to get the URL from Gerrit. That means the dependent change already > needs to be uploaded. In some complex situations, this may mean that > you need to amend an existing commit message to add in the URL later. > > If you're uploading both changes, Gerrit will output the URL when you > run git-review, and you can copy it from there. If you are looking at > an existing change in Gerrit, you can copy the URL from the permalink > at the top left of the page. Where it says "Change 535855 - Needs > ..." the change number itself is the permalink of the change. > Is the permalink the only valid format here for gerrit? Or does the fully expanded link also work. E.g., Depends-On: https://review.openstack.org/536540 versus Depends-On: https://review.openstack.org/#/c/536540/ [snip] -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... 
From sean.k.mooney at intel.com  Wed Jan 24 20:30:19 2018
From: sean.k.mooney at intel.com (Mooney, Sean K)
Date: Wed, 24 Jan 2018 20:30:19 +0000
Subject: [openstack-dev] [openstack-ansible] Limiting pip wheel builds for OpenStack clients
In-Reply-To: <42ffc325-4162-5daa-b413-9c5d2cc60835@mhtx.net>
References: <42ffc325-4162-5daa-b413-9c5d2cc60835@mhtx.net>
Message-ID: <4B1BB321037C0849AAE171801564DFA6889B7F63@IRSMSX107.ger.corp.intel.com>

> -----Original Message-----
> From: Major Hayden [mailto:major at mhtx.net]
> Sent: Wednesday, January 24, 2018 8:03 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [openstack-ansible] Limiting pip wheel builds
> for OpenStack clients
>
> Hey there,
>
> I was spelunking into the slow wheel build problems we've been seeing
> in CentOS and I found that our wheel build process was spending 4-6
> minutes building cassandra-driver. The wheel build process usually
> takes 8-12 minutes, so half the time is being spent there.
>
> More digging revealed that cassandra-driver is a dependency of python-
> monascaclient, which is a dependency of heat. The requirements.txt for
> heat drags in all of the clients:
>
> https://github.com/openstack/heat/blob/master/requirements.txt

[Mooney, Sean K] The python-monascaclient package is presumably an optional dependency of heat, as are the other clients. E.g. I would hope that if you are using heat with a cloud that does not have Monasca, it could still run without having python-monascaclient installed. So all of the clients should probably be moved from requirements.txt to test-requirements.txt, and only the minimal packages required for heat to work should be in requirements.txt.

> We're already doing selective wheel builds and building only the wheels
> and venvs we need for the OpenStack services which are selected for
> deployment. Would it make sense to reduce the OpenStack client list for
> heat during the wheel/venv build? For example, if we're not deploying
> monasca, should we build/venv the python-monascaclient package (and its
> dependencies)?
>
> I've opened a bug:
>
> https://bugs.launchpad.net/openstack-ansible/+bug/1745215
>
> --
> Major Hayden

From mordred at inaugust.com  Wed Jan 24 20:31:40 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 24 Jan 2018 14:31:40 -0600
Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax
In-Reply-To:
References: <87efmfz05l.fsf@meyer.lemoncheese.net>
Message-ID: <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com>

On 01/24/2018 02:25 PM, David Shrewsbury wrote:
> This is a (the?) killer feature.
>
> On Wed, Jan 24, 2018 at 11:33 AM, James E. Blair wrote:
>> Hi,
>>
>> We recently introduced a new URL-based syntax for Depends-On: footers
>> in commit messages:
>>
>>   Depends-On: https://review.openstack.org/535851
>>
>> The old syntax will continue to work for a while, but please begin using
>> the new syntax on new changes.
>>
>> Why are we changing this? Zuul has grown the ability to interact with
>> multiple backend systems (Gerrit, GitHub, and plain Git so far), and we
>> have extended the cross-repo-dependency feature to support multiple
>> systems. But Gerrit is the only one that uses the change-id syntax.
>> URLs, on the other hand, are universal.
>>
>> That means you can write, as in https://review.openstack.org/535541, a
>> commit message such as:
>>
>>   Depends-On: https://github.com/ikalnytskyi/sphinxcontrib-openapi/pull/17
>>
>> Or in a GitHub pull request like
>> https://github.com/ansible/ansible/pull/20974, you can write:
>>
>>   Depends-On: https://review.openstack.org/536159
>>
>> But we're getting a bit ahead of ourselves here -- we're just getting
>> started with Gerrit <-> GitHub dependencies and we haven't worked
>> everything out yet. While you can Depends-On any GitHub URL, you can't
>> add any project to required-projects yet, and we need to establish a
>> process to actually report on GitHub projects. But cool things are
>> coming.
>>
>> We will continue to support the Gerrit-specific syntax for a while,
>> probably for several months at least, so you don't need to update the
>> commit messages of changes that have accumulated precious +2s. But do
>> please start using the new syntax now, so that we can age the old syntax
>> out.
>>
>> There are a few differences in using the new syntax:
>>
>> * Rather than copying the change-id from a commit message, you'll need
>>   to get the URL from Gerrit. That means the dependent change already
>>   needs to be uploaded. In some complex situations, this may mean that
>>   you need to amend an existing commit message to add in the URL later.
>>
>>   If you're uploading both changes, Gerrit will output the URL when you
>>   run git-review, and you can copy it from there. If you are looking at
>>   an existing change in Gerrit, you can copy the URL from the permalink
>>   at the top left of the page. Where it says "Change 535855 - Needs
>>   ..." the change number itself is the permalink of the change.
>
> Is the permalink the only valid format here for gerrit? Or does the fully
> expanded link also work? E.g.,
>
>    Depends-On: https://review.openstack.org/536540
>
> versus
>
>    Depends-On: https://review.openstack.org/#/c/536540/

The fully expanded one works too. See:

https://review.openstack.org/#/c/520812/

for an example of a patch with expanded links.

From Greg.Waines at windriver.com  Wed Jan 24 21:04:29 2018
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Wed, 24 Jan 2018 21:04:29 +0000
Subject: [openstack-dev] [masakari] Questions on masakari CLI for hosts and segments
In-Reply-To:
References:
Message-ID: <7505791C-E2C9-488E-82E2-8DB5B6A05700@windriver.com>

From reading the Masakari API Specifications commit (https://review.openstack.org/#/c/512591/), I believe that the proper syntax for the host and segment creates is as follows:

masakari segment-create --name segment-1 --recovery-method auto --service-type COMPUTE

masakari host-create --name <hostname> --type COMPUTE --control-attributes SSH --segment-id segment-1

let me know if this is correct,
Greg.

From: Greg Waines
Reply-To: "openstack-dev at lists.openstack.org"
Date: Monday, January 22, 2018 at 10:59 AM
To: "openstack-dev at lists.openstack.org"
Subject: [openstack-dev] [masakari] Questions on masakari CLI for hosts and segments

masakari segment-create --name segment-1 --recovery-method auto --service-type xyz

For 'service-type',
* what are the semantics of this parameter?
* what are the allowed values?
* what is a typical or example value?

masakari host-create --name devstack-masakari --type xyz --control-attributes xyz --segment-id segment-1

For 'type',
* what are the semantics of this parameter?
* what are the allowed values?
* what is a typical or example value?
For ‘control-attributes, * what are the semantics of this parameter ? * what are the allowed values ? * what is a typical or example value ? And what are the semantics of Masakari Failover Segments ? My guess is that · hosts belong to one and only one masakari segment · when a host fails, the VMs formerly running on that host will ONLY be recovered to other hosts within the same segment Correct ? Anything else ? Greg. -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Jan 24 21:05:26 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 24 Jan 2018 16:05:26 -0500 Subject: [openstack-dev] [openstack-ansible] Limiting pip wheel builds for OpenStack clients In-Reply-To: <4B1BB321037C0849AAE171801564DFA6889B7F63@IRSMSX107.ger.corp.intel.com> References: <42ffc325-4162-5daa-b413-9c5d2cc60835@mhtx.net> <4B1BB321037C0849AAE171801564DFA6889B7F63@IRSMSX107.ger.corp.intel.com> Message-ID: <1516827827-sup-2097@lrrr.local> Excerpts from Mooney, Sean K's message of 2018-01-24 20:30:19 +0000: > > > -----Original Message----- > > From: Major Hayden [mailto:major at mhtx.net] > > Sent: Wednesday, January 24, 2018 8:03 PM > > To: OpenStack Development Mailing List (not for usage questions) > > > > Subject: [openstack-dev] [openstack-ansible] Limiting pip wheel builds > > for OpenStack clients > > > > Hey there, > > > > I was spelunking into the slow wheel build problems we've been seeing > > in CentOS and I found that our wheel build process was spending 4-6 > > minutes building cassandra-driver. The wheel build process usually > > takes 8-12 minutes, so half the time is being spent there. > > > > More digging revealed that cassandra-driver is a dependency of python- > > monascaclient, which is a dependency of heat. The requirements.txt for > > heat drags in all of the clients: > > > > https://github.com/openstack/heat/blob/master/requirements.txt > [Mooney, Sean K] the python-monascaclient package is presumably an optional > Dependency of heat as are the other client. > E.g. I would hope that if you are using a heat with a cloud that does not have > Monasca it could still run without have python-monascaclient installed so > All of the clients should proably be move form the requirements.txt to the > test-requiremetns.txt and only the minimal required packages for heat to work > should be in requirements.txt. This is what the "extras" section of setup.cfg is for. It should be possible to say something like: [extras] monasca = python-monascaclient>=1.0.0 (or whatever version) Then users of pip would install a heat that uses monasca with: pip install heat[monasca] and distro packagers would know why any extra packages are needed and could take appropriate action in their package specifications, too. > > > > We're already doing selective wheel builds and building only the wheels > > and venvs we need for the OpenStack services which are selected for > > deployment. Would it make sense to reduce the OpenStack client list for > > heat during the wheel/venv build? For example, if we're not deploying > > monasca, should we build/venv the python-monascaclient package (and its > > dependencies)? 
> > I've opened a bug:
> >
> > https://bugs.launchpad.net/openstack-ansible/+bug/1745215
> >
> > --
> > Major Hayden

From Greg.Waines at windriver.com  Wed Jan 24 21:13:57 2018
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Wed, 24 Jan 2018 21:13:57 +0000
Subject: [openstack-dev] [masakari] BUG in Masakari Installation and Procedure and/or Documentation
Message-ID: <265F454E-3330-4C9E-B2A2-1506F2843AA9@windriver.com>

I am looking for some input before I raise a BUG.

I reviewed the following commits, which documented the Masakari and
MasakariMonitors installation and procedures, i.e.
https://review.openstack.org/#/c/489570/
https://review.openstack.org/#/c/489095/

I created an AIO devstack with Masakari on current/master this morning,
and followed the above instructions on configuring and installing
Masakari and MasakariMonitors.

I created a VM and then ran 'sudo kill -9 <instance pid>', and I got the
following error from instance monitoring trying to send the notification
message to masakari-engine ("The request you have made requires
authentication.") ... see below.

Is this a known BUG?
Greg.

2018-01-24 20:29:16.902 12473 INFO masakarimonitors.instancemonitor.libvirt_handler.callback [-] Libvirt Event: type=VM, hostname=devstack-masakari-new, uuid=6884cf13-5797-487b-9cb1-053a2e18b60e, time=2018-01-24 20:29:16.902347, event_id=LIFECYCLE, detail=STOPPED_FAILED)
2018-01-24 20:29:16.903 12473 INFO masakarimonitors.ha.masakari [-] Send a notification. {'notification': {'hostname': 'devstack-masakari-new', 'type': 'VM', 'payload': {'instance_uuid': '6884cf13-5797-487b-9cb1-053a2e18b60e', 'vir_domain_event': 'STOPPED_FAILED', 'event': 'LIFECYCLE'}, 'generated_time': datetime.datetime(2018, 1, 24, 20, 29, 16, 902347)}}
2018-01-24 20:29:16.977 12473 WARNING masakarimonitors.ha.masakari [-] Retry sending a notification. (HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-9c734f56-aca9-40a9-b2dd-3f372de8c34e), The request you have made requires authentication.): HttpException: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-9c734f56-aca9-40a9-b2dd-3f372de8c34e), The request you have made requires authentication.
...
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari [-] Exception caught: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-26a5de94-aaad-4f8f-949e-cbfeb5e31b8b), The request you have made requires authentication.: HttpException: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-26a5de94-aaad-4f8f-949e-cbfeb5e31b8b), The request you have made requires authentication.
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari Traceback (most recent call last): 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/masakarimonitors/ha/masakari.py", line 91, in send_notification 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari payload=event['notification']['payload']) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/masakariclient/sdk/ha/v1/_proxy.py", line 65, in create_notification 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self._create(_notification.Notification, **attrs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/proxy2.py", line 194, in _create 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return res.create(self._session) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/resource2.py", line 588, in create 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari json=request.body, headers=request.headers) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 848, in post 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self.request(url, 'POST', **kwargs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 64, in map_exceptions_wrapper 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return func(*args, **kwargs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 352, in request 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return super(Session, self).request(*args, **kwargs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 573, in request 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari auth_headers = self.get_auth_headers(auth) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 900, in get_auth_headers 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return auth.get_headers(self, **kwargs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/plugin.py", line 95, in get_headers 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari token = self.get_token(session) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 88, in get_token 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self.get_access(session).auth_token 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 134, in get_access 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari self.auth_ref = self.get_auth_ref(session) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/generic/base.py", line 198, in get_auth_ref 2018-01-24 20:31:17.769 12473 ERROR 
masakarimonitors.ha.masakari     return self._plugin.get_auth_ref(session, **kwargs)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari   File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/v3/base.py", line 165, in get_auth_ref
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari     authenticated=False, log=False, **rkwargs)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari   File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 848, in post
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari     return self.request(url, 'POST', **kwargs)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari   File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 66, in map_exceptions_wrapper
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari     raise exceptions.from_exception(e)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari HttpException: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-26a5de94-aaad-4f8f-949e-cbfeb5e31b8b), The request you have made requires authentication.
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari

From saverio.proto at switch.ch  Wed Jan 24 21:18:39 2018
From: saverio.proto at switch.ch (Saverio Proto)
Date: Wed, 24 Jan 2018 22:18:39 +0100
Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID
In-Reply-To: <1516716895-sup-6461@lrrr.local>
References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> <1516293565-sup-9123@lrrr.local> <1516295114-sup-7111@lrrr.local> <1516630943-sup-4108@lrrr.local> <1516659378-sup-8232@lrrr.local> <7b4c5530-55e9-2590-1b67-74b5ff938ef9@switch.ch> <1516716895-sup-6461@lrrr.local>
Message-ID: <5c56967b-223f-eee1-9707-2bdfce8ac7c8@switch.ch>

> 3.34.0 is a queens series release, which makes it more likely that more
> other dependencies would need to be updated. Even backporting the
> changes to the Ocata branch and releasing it from there would require
> updating several other libraries.

That is what I was fearing. Consider that our upgrade schedule is now to
have Pike by the end of 2018, unless we try to skip a release.

> Are you using packages from Canonical, or are you building them
> yourself?

I am using the packages from Canonical, but I am familiar with patching
those packages and merging my changes back upstream to Canonical.
If the problem is just dependencies with the ".deb" packages, I can
handle that. But if the problem is really python code not working
together across multiple components, then I have little hope of fixing
this in Newton.

Thanks for the support. If I manage to make some progress on this I will
send an update on this thread on the mailing list.
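In case it helps anyone else stuck on Newton: the interim direction I am
experimenting with is a small subclass of the JSON formatter that re-adds
the request id from the oslo context. This is an untested sketch (the
attribute names are my assumption, and it depends on how the Newton
JSONFormatter builds its output):

    import json

    from oslo_context import context as oslo_context
    from oslo_log import formatters


    class RequestIdJSONFormatter(formatters.JSONFormatter):
        """Re-add the request id of the current context to JSON records."""

        def format(self, record):
            message = json.loads(
                super(RequestIdJSONFormatter, self).format(record))
            # the context is either attached to the record or stored
            # thread-locally by oslo.context
            ctx = getattr(record, 'context', None) or oslo_context.get_current()
            if ctx is not None:
                message['request_id'] = getattr(ctx, 'request_id', None)
            return json.dumps(message)

The idea would then be to point the formatter class in our logging config
file at this class instead of the stock JSONFormatter.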
Cheers,
Saverio

From jaypipes at gmail.com  Wed Jan 24 22:48:50 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 24 Jan 2018 17:48:50 -0500
Subject: [openstack-dev] [nova] PTL Election Season
In-Reply-To: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
Message-ID: 

On 01/22/2018 06:09 PM, Matt Riedemann wrote:
> On 1/15/2018 11:04 AM, Kendall Nelson wrote:
>> Election details: https://governance.openstack.org/election/
>>
>> Please read the stipulations and timelines for candidates and
>> electorate contained in this governance documentation.
>>
>> Be aware, in the PTL elections if the program only has one candidate,
>> that candidate is acclaimed and there will be no poll. There will only
>> be a poll if there is more than one candidate stepping forward for a
>> program's PTL position.
>>
>> There will be further announcements posted to the mailing list as
>> action is required from the electorate or candidates. This email is
>> for information purposes only.
>>
>> If you have any questions which you feel affect others please reply to
>> this email thread.
>
> To anyone that cares, I don't plan on running for Nova PTL again for the
> Rocky release. Queens was my fourth tour and it's definitely time for
> someone else to get the opportunity to lead here. I don't plan on going
> anywhere and I'll be here to help with any transition needed assuming
> someone else (or a couple of people hopefully) will run in the election.
> It's been a great experience and I thank everyone that has had to put up
> with me and my obsessive paperwork and process disorder in the meantime.

Thanks, Matt, for your service to the Nova (and broader OpenStack)
contributor community over the last couple years. I know you're not going
anywhere, but it's worth +1'ing the many comments about you being an
excellent and patient PTL.

Cheers mate,
-jay

From ekcs.openstack at gmail.com  Wed Jan 24 22:54:55 2018
From: ekcs.openstack at gmail.com (Eric K)
Date: Wed, 24 Jan 2018 14:54:55 -0800
Subject: [openstack-dev] [requirements][congress] getting requirements updates for congress-dashboard
Message-ID: 

Hi all,

I'm having some trouble getting congress-dashboard to receive requirements
updates from global-requirements.

The check-requirements job seems to be configured, but does NOT seem to be
running:
http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml#n4246

The project is also listed in the requirements repo:
http://git.openstack.org/cgit/openstack/requirements/tree/projects.txt#n24

Any hints on what may be wrong? Thanks very much!

Eric Kao

From mriedemos at gmail.com  Wed Jan 24 22:57:29 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 24 Jan 2018 16:57:29 -0600
Subject: [openstack-dev] [nova] Native QEMU LUKS decryption review overview ahead of FF
In-Reply-To: <20180122142212.2fqjvquljpji6kph@lyarwood.usersys.redhat.com>
References: <20180122142212.2fqjvquljpji6kph@lyarwood.usersys.redhat.com>
Message-ID: 

On 1/22/2018 8:22 AM, Lee Yarwood wrote:
> Hello,
>
> With M3 and FF rapidly approaching this week I wanted to post a brief
> overview of the QEMU native LUKS series.
>
> The full series is available on the following topic, I'll go into more
> detail on each of the changes below:
>
> https://review.openstack.org/#/q/topic:bp/libvirt-qemu-native-luks+status:open
>
> libvirt: Collocate encryptor and volume driver calls
> https://review.openstack.org/#/c/460243/ (Missing final +2 and +W)
>
> This refactor of the Libvirt driver connect and disconnect volume code
> has the added benefit of also correcting a number of bugs around the
> attaching and detaching of os-brick encryptors. IMHO this would be
> useful in Queens even if the rest of the series doesn't land.
>
> libvirt: Introduce disk encryption config classes
> https://review.openstack.org/#/c/464008/ (Missing final +2 and +W)
>
> This is the most straight forward change of the series and simply
> introduces the required config classes to wire up native LUKS decryption
> within the domain XML of an instance. Hopefully nothing controversial.
>
> libvirt: QEMU native LUKS decryption for encrypted volumes
> https://review.openstack.org/#/c/523958/ (Missing both +2s and +W)
>
> This change carries the bulk of the implementation, wiring up encrypted
> volumes during their initial attachment. The commit message has a
> detailed run down of the various upgrade and LM corner cases we attempt
> to handle here, such as LM from a P to Q compute, detaching a P attached
> encrypted volume after upgrading to Q etc.
>
> Upgrade and LM testing is enabled by the following changes:
>
> fixed_key: Use a single hardcoded key across devstack deployments
> https://review.openstack.org/#/c/536343/
>
> compute: Introduce an encrypted volume LM test
> https://review.openstack.org/#/c/536177/
>
> This is being tested by tempest-dsvm-multinode-live-migration and
> grenade-dsvm-neutron-multinode-live-migration in the following DNM Nova
> change, enabling volume backed LM tests:
>
> DNM: Test LM with encrypted volumes
> https://review.openstack.org/#/c/536350/
>
> Hopefully that covers everything but please feel free to ping if you
> would like more detail, background etc. Thanks in advance,
>
> Lee

The patch is already approved, and I asked melwitt to write a release
note, at which point it was noted that swap volume will not work with
native luks encrypted volumes. That's a regression. We need to at least
report a nova bug for this so we can work on some kind of fallback to the
non-native decryption until there is a libvirt/qemu fix upstream and we
can put version conditionals in place for when we can support swap volume
with a native luks-encrypted volume.

--
Thanks,
Matt

From sean.mcginnis at gmx.com  Wed Jan 24 23:01:28 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 24 Jan 2018 17:01:28 -0600
Subject: [openstack-dev] [Release-job-failures] Pre-release of openstack/octavia-dashboard failed
In-Reply-To: 
References: 
Message-ID: <20180124230127.GA31633@sm-xps>

On Wed, Jan 24, 2018 at 10:50:15PM +0000, zuul at openstack.org wrote:
> Build failed.
>
> - release-openstack-python http://logs.openstack.org/cb/cb4f0e814b6b250309246611b0f5d37aba3bbbdf/pre-release/release-openstack-python/8012d1f/ : POST_FAILURE in 14m 33s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED

Appears to be a (hopefully) transient networking issue:

"SSH Error: data could not be sent to remote host \"188.240.223.230\".
Make sure this host can be reached over ssh"

http://logs.openstack.org/cb/cb4f0e814b6b250309246611b0f5d37aba3bbbdf/pre-release/release-openstack-python/8012d1f/job-output.txt.gz#_2018-01-24_22_44_37_577509

I will request this job be re-enqueued.

From hongbin.lu at huawei.com  Wed Jan 24 23:02:15 2018
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Wed, 24 Jan 2018 23:02:15 +0000
Subject: [openstack-dev] [nova][neutron] Extend instance IP filter for floating IP
Message-ID: <0957CD8F4B55C0418161614FEC580D6B281A8378@YYZEML702-CHM.china.huawei.com>

Hi all,

Nova currently allows us to filter instances by fixed IP address(es).
This feature is known to be useful in an operational scenario where
cloud administrators detect abnormal traffic from an IP address and want
to trace down the instance that this IP address belongs to. This feature
works well except for one limitation: it only supports fixed IP
address(es). In real operational scenarios, cloud administrators might
find that the abused IP address is a floating IP and want to do the
filtering in the same way as for a fixed IP. Right now, unfortunately,
the experience diverges between these two classes of IP address. Cloud
administrators need to deploy the logic to (i) detect the class of IP
address (fixed or floating), (ii) use nova's IP filter if the address is
a fixed IP address, (iii) do manual filtering if the address is a
floating IP address.

I wonder if the nova team is willing to accept an enhancement that makes
the IP filter support both. Optimally, cloud administrators can simply
pass the abused IP address to nova and nova will handle the
heterogeneity.

In terms of implementation, I expect the change to be small. After this
patch [1], Nova will query Neutron to compile a list of ports' device_ids
(a device_id is equal to the uuid of the instance to which the port
binds) and use the device_ids to query the instances. If Neutron returns
an empty list, Nova can give a second try and query Neutron for floating
IPs. There is an RFE [2] and a PoC [3] proposing to add a device_id
attribute to the floating IP API resource. Nova can leverage this
attribute to compile a list of instance uuids and use it as a filter on
listing the instances (a rough sketch follows after the references
below). If this feature is implemented, will it benefit the general
community?

Finally, I also wonder how others are tackling a similar problem.
Appreciate your feedback.

[1] https://review.openstack.org/#/c/525505/
[2] https://bugs.launchpad.net/neutron/+bug/1723026
[3] https://review.openstack.org/#/c/534882/

Best regards,
Hongbin
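For illustration, the two-step lookup described above would look roughly
like the sketch below (using python-neutronclient; note that the
device_id field on floating IPs is the attribute proposed in [2] and [3],
so it does not exist today):

    # sketch of the proposed lookup; the floating IP 'device_id' field
    # is the attribute proposed in [2]/[3] and is not available yet
    def instance_uuids_for_ip(neutron, ip_address):
        # first try fixed IPs, as the existing filter already does
        ports = neutron.list_ports(
            fixed_ips='ip_address=%s' % ip_address)['ports']
        if ports:
            return [p['device_id'] for p in ports if p.get('device_id')]
        # second try: fall back to floating IPs
        fips = neutron.list_floatingips(
            floating_ip_address=ip_address)['floatingips']
        return [f['device_id'] for f in fips if f.get('device_id')]

Nova would then use the returned uuids as the filter when listing
instances.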
From ken1ohmichi at gmail.com  Wed Jan 24 23:28:52 2018
From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi)
Date: Wed, 24 Jan 2018 15:28:52 -0800
Subject: [openstack-dev] [nova] PTL Election Season
In-Reply-To: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
Message-ID: 

2018-01-22 15:09 GMT-08:00 Matt Riedemann :
> On 1/15/2018 11:04 AM, Kendall Nelson wrote:
>> Election details: https://governance.openstack.org/election/
>>
>> Please read the stipulations and timelines for candidates and electorate
>> contained in this governance documentation.
>>
>> Be aware, in the PTL elections if the program only has one candidate, that
>> candidate is acclaimed and there will be no poll. There will only be a poll
>> if there is more than one candidate stepping forward for a program's PTL
>> position.
>>
>> There will be further announcements posted to the mailing list as action
>> is required from the electorate or candidates. This email is for
>> information purposes only.
>>
>> If you have any questions which you feel affect others please reply to
>> this email thread.
>
> To anyone that cares, I don't plan on running for Nova PTL again for the
> Rocky release. Queens was my fourth tour and it's definitely time for
> someone else to get the opportunity to lead here. I don't plan on going
> anywhere and I'll be here to help with any transition needed assuming
> someone else (or a couple of people hopefully) will run in the election.
> It's been a great experience and I thank everyone that has had to put up
> with me and my obsessive paperwork and process disorder in the meantime.

I was surprised, because you led the team across multiple wide areas and
I guessed you could run as the PTL forever :-)
Anyway, thank you so much for your great work over 4 cycles.

Thanks

From rosmaita.fossdev at gmail.com  Thu Jan 25 01:24:28 2018
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Wed, 24 Jan 2018 20:24:28 -0500
Subject: [openstack-dev] [glance] functional gate situation
In-Reply-To: 
References: 
Message-ID: 

Update on the 24 hours since the last update:

tl;dr: glance cores, do not approve any substantial code changes until
after the functional tests are restored. See below for details.

On Tue, Jan 23, 2018 at 8:09 PM, Brian Rosmaita wrote:
> Update on the last 24 hours:
>
> (1) Sean's patch splitting the unit and functional tests in tox has merged.

Still good.

> (2) My patch to restore the functional test gate jobs ran into a
> problem, namely, that one of the py35 tests suddenly began failing in
> the gate, and I haven't been able to reproduce it locally. I started
> looking into it, but this problem doesn't make any sense at all
> (you'll see what I mean when you get a chance to look at it), so I put
> up a patch to skip the failing test:
> https://review.openstack.org/#/c/536939/
> It's passed the check and I ninja-approved it, so it's in the gate now.

It's still in the gate. Due to an unfortunate concatenation of
circumstances, today's gerrit restart occurred at the same time zuul had
passed this patch, so the success was not recorded and it had to be
rechecked. I believe it was moved up in the queue (thanks, infra team!),
but nonetheless it's been in the integrated queue for > 7 hours now, with
zuul projecting completion in 58 minutes.

> (3) I edited the patch restoring the functional gate jobs to not run
> the py27 tests at all (no sense wasting any time until we know they
> have a chance of working).
> At least we can run the py35 functional
> tests (except for the one being skipped):
> https://review.openstack.org/#/c/536630/
> (I rebased it on the skip-test patch, it's in the check now.)

This one still depends on #2, so no action yet.

> I'd prefer that nothing else be merged for glance until we get the
> functional gate restored, which will hopefully happen sometime this
> evening. I'll keep an eye on (2) and (3) for the next few hours.

I'm still hopeful, though less hopeful than I was. In any case, do not
approve any substantial changes until the functional tests have been
restored.

> (4) With Sean's patch merged, I put up a patch to the requirements
> repo reverting the change that made the cross-glance-py27 test
> non-voting:
> https://review.openstack.org/#/c/536946/
> That's been approved and is in the gate now.

This patch got caught in the gerrit restart, too, and is being
re-processed (and it looks like it's going to fail, although not because
of glance ... the cross-glance-py* jobs were both successful).

> So, we've got 2 outstanding bugs:
> py27 functional test failures: https://bugs.launchpad.net/glance/+bug/1744824
> py35 functional test failure: https://bugs.launchpad.net/glance/+bug/1745003
>
> ... and of course the regular stuff that was mentioned on the priority
> email for this week:
> http://lists.openstack.org/pipermail/openstack-dev/2018-January/126353.html

These continue to be areas of active interest. On the plus side, we did
release python-glanceclient 2.9.1 today.

> cheers,
> brian

From ghanshyammann at gmail.com  Thu Jan 25 03:05:35 2018
From: ghanshyammann at gmail.com (Ghanshyam Mann)
Date: Thu, 25 Jan 2018 08:35:35 +0530
Subject: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs
In-Reply-To: 
References: 
Message-ID: 

On Fri, Jan 19, 2018 at 4:23 PM, Graham Hayes wrote:
>
> On 19/01/18 03:28, Ghanshyam Mann wrote:
>> On Thu, Jan 11, 2018 at 10:06 PM, Colleen Murphy wrote:
>>> Hi everyone,
>>>
>>> We have governance review under debate[1] that we need the community's help on.
>>> The debate is over what recommendation the TC should make to the Interop team
>>> on where the tests it uses for the OpenStack trademark program should be
>>> located, specifically those for the new add-on program being introduced. Let me
>>> badly summarize:
>>>
>>> A couple of years ago we issued a resolution[2] officially recommending that
>>> the Interop team use solely tempest as its source of tests for capability
>>> verification. The Interop team has always had the view that the developers,
>>> being the people closest to the project they're creating, are the best people
>>> to write tests verifying correct functionality, and so the Interop team doesn't
>>> maintain its own test suite, instead selecting tests from those written in
>>> coordination between the QA team and the other project teams. These tests are
>>> used to validate clouds applying for the OpenStack Powered tag, and since all
>>> of the projects included in the OpenStack Powered program already had tests in
>>> tempest, this was a natural fit. When we consider adding new trademark programs
>>> comprising of other projects, the test source is less obvious. Two examples are
>>> designate, which has never had tests in the tempest repo, and heat, which
>>> recently had its tests removed from the tempest repo.
>>>
>>> So far the patch proposes three options:
>>>
>>> 1) All trademark-related tests should go in the tempest repo, in accordance
>>> with the original resolution. This would mean that even projects that have
This would mean that even projects that have >>> never had tests in tempest would now have to add at least some of their >>> black-box tests to tempest. >>> >>> The value of this option is that centralizes tests used for the Interop program >>> in a location where interop-minded folks from the QA team can control them. The >>> downside is that projects that so far have avoided having a dependency on >>> tempest will now lose some control over the black-box tests that they use for >>> functional and integration that would now also be used for trademark >>> certification. >>> There's also concern for the review bandwidth of the QA team - we can't expect >>> the QA team to be continually responsible for an ever-growing list of projects >>> and their trademark tests. >>> >>> 2) All trademark-related tests for *add-on projects* should be sourced from >>> plugins external to tempest. >>> >>> The value of this option is it allows project teams to retain control over >>> these tests. The potential problem with it is that individual project teams are >>> not necessarily reviewing test changes with an eye for interop concerns and so >>> could inadvertently change the behavior of the trademark-verification tools. >>> >>> 3) All trademark-related tests should go in a single separate tempest plugin. >>> >>> This has the value of giving the QA and Interop teams control over >>> interop-related tests while also making clear the distinction between tests >>> used for trademark verification and tests used for CI. Matt's argument against >>> this is that there actually is very little distinction between those two cases, >>> and that a given test could have many different applications. >> >> options#3 can solve centralize test location issue but there is >> another issue it leads. If we start moving all interop test to >> separate interop repo then, many of exiting tempest test (used by >> interop) also falls under this category. Which means those existing >> tempest tests need to stay in 2 location one in new interop plugin and >> second in tempest also as tempest is being used for lot other purpose >> also, gate, production Cloud testing & stability etc. Duplication >> tests in 2 location is not good option. > > We could just install the interop plugin into all the gates, and ensure > it is ran, which would mean the tests are only ever in one place. That cover gate things at some extend and with workaround but not outside gate where test are being used to test the cloud. > > >>> >>> Other ideas that have been thrown around are: >>> >>> * Maintaining a branch in the tempest repo that Interop tests are pulled from. >>> >>> * Tagging Interop-related tests with decorators to make it clear that they need >>> to be handled carefully. >> >> Nice and imp point. This is been take care very carefully in Tempest >> till now . While changing tests or removing test, we have a very clear >> and strict process [4] to not affect any interop tests and i think it >> is 100% success till now, i have not heard any complained that we have >> changed any test which has broken interop. Adding new decorator etc >> has different issues to we did not accepted but main problem is solved >> by defining process.. > > Out of interest, what is the issue with a new test tag? it seems like it > would be a good way to highlight to people what tests need extra care. As mentioned above, use case if tests are not only interop so tagging few test case for interop is not good way. 
I remember someone asked me to add a HW-dependent tag on some of the
tests which were failing in their env. Also, we used to have legacy tags,
like 'gate' etc., which we removed. Tags on tests are always hard to
maintain and extra overhead. It is the user's responsibility to keep
track of the list of tests they are interested in and want to keep an eye
on, which is what interop, the ceph plugin etc. are doing currently.

>
>>>
>>> At the heart of the issue is the perception that projects that keep their
>>> integration tests within the tempest tree are somehow blessed, maybe by the QA
>>> team or by the TC. It would be nice to try to clarify what technical
>>> and political
>>> reasons we have for why different projects have tests in different places -
>>> review bandwidth of the QA team, ownership/control by the project teams,
>>> technical interdependency between certain projects, or otherwise.
>>>
>>> Ultimately, as Jeremy said in the comments on the resolution patch, the
>>> recommendation should be one that works best for the QA and Interop teams. So
>>> far we've heard from Matt and Mark expressing moderate support for option 2.
>>> We'd like to hear more from those teams about how they see this working,
>>> especially with regard to concerns about the quality and stability standards
>>> that out-of-tree tests may be held to. We additionally need input from the
>>> whole community on how maintaining trademark-related tests in tempest will
>>> affect you if you don't already have your tests there. We'd especially like to
>>> address any perceptions of favoritism or exclusionism that stem from these
>>> issues.
>>>
>>> And to quickly clear up one detail before it makes it onto this thread, the
>>> Queens Community Goal about splitting tempest plugins out of the main project's
>>> tree[3] is entirely about addressing technical problems related to packaging for
>>> existing tempest plugins, it's not a decree about what should live
>>> within the tempest
>>> repository nor does it have anything to do with the Interop program.
>>>
>>> As I'm not deeply steeped in the history of either the Interop or QA teams I am
>>> sure I've misrepresented some details here, I'm sorry about that. But we'd like
>>> to get this resolution moving forward and we're currently stuck, so this thread
>>> is intended to gather enough community input to get unstuck and avoid letting
>>> this proposal become stale. Please respond to this thread or comment on the
>>> resolution proposal[1] if you have any thoughts.
>>>
>>> Colleen
>>>
>>> [1] https://review.openstack.org/#/c/521602
>>> [2] https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html
>>> [3] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
>>
>> ..
[4] https://docs.openstack.org/tempest/latest/test_removal.html

-gmann

From aj at suse.com  Thu Jan 25 03:34:05 2018
From: aj at suse.com (Andreas Jaeger)
Date: Thu, 25 Jan 2018 04:34:05 +0100
Subject: [openstack-dev] [devstack] Broken repo on devstack-plugin-container for Fedora
In-Reply-To: 
References: 
Message-ID: <2e41e12f-7671-493f-3709-8b41e3991e95@suse.com>

On 2018-01-24 14:14, Daniel Mellado wrote:
> Hi everyone,
>
> Since today, when I try to install the devstack-plugin-container plugin
> on Fedora, it complains in [1] about not being able to sync the cache
> for the repo, with the following error [2].
>
> This is affecting me on Fedora 26+ from different network locations, so I
> was wondering if someone from SUSE could have a look (it did work for
> Andreas on openSUSE... thanks in advance!)

Just a heads up: so, one problem: the signing key was expired. The key
was extended but not used; now the repo has been published again using
the extended key. So, the download works.

AFAIU there's still some problem where dnf is not happy with it; Daniel
is investigating,

Andreas

> [1] https://github.com/openstack/devstack-plugin-container/blob/master/devstack/lib/docker#L164-L170
>
> [2] http://paste.openstack.org/show/652041/

--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From aj at suse.com  Thu Jan 25 03:40:23 2018
From: aj at suse.com (Andreas Jaeger)
Date: Thu, 25 Jan 2018 04:40:23 +0100
Subject: [openstack-dev] [requirements][congress] getting requirements updates for congress-dashboard
In-Reply-To: 
References: 
Message-ID: <8f8ea74c-671b-7b21-55f7-5bf3ca02df87@suse.com>

On 2018-01-24 23:54, Eric K wrote:
> Hi all,
>
> I'm having some trouble getting congress-dashboard to receive requirements
> updates from global-requirements.
>
> The check-requirements job seems to be configured, but does NOT seem to be
> running:
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml#n4246
>
> The project is also listed in the requirements repo:
> http://git.openstack.org/cgit/openstack/requirements/tree/projects.txt#n24
>
> Any hints on what may be wrong? Thanks very much!
You can run the requirements update manually on your repository; the
README of the requirements repo explains why. That will allow you to see
what fails.

Also, you can check the output of the proposal job yourself. Go to:

http://zuul.openstack.org/builds.html?job_name=propose-update-requirements

and look at the log files; you should find:

http://logs.openstack.org/c1/c15d9830375196c9a6c0c111073e1148bf192b4b/post/propose-update-requirements/32b7875/job-output.txt.gz#_2018-01-24_01_50_58_527699

So, remove that obsolete entry and check that the rest of the file is
fine by manually running the update,

Andreas

--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From mthode at mthode.org  Thu Jan 25 04:32:27 2018
From: mthode at mthode.org (Matthew Thode)
Date: Wed, 24 Jan 2018 22:32:27 -0600
Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared
In-Reply-To: <20180124072947.u4dv674dv6bcczb6@gentoo.org>
References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org> <20180124072947.u4dv674dv6bcczb6@gentoo.org>
Message-ID: <20180125043227.v3mfb5u2ndeennvu@mthode.org>

On 18-01-24 01:29:47, Matthew Thode wrote:
> On 18-01-23 01:23:50, Matthew Thode wrote:
> > Requirements is freezing Friday at 23:59:59 UTC, so any last
> > global-requirements updates that need to get in need to get in now.
> >
> > I'm afraid that my condition has left me cold to your pleas of mercy.
>
> Just your daily reminder that the freeze will happen in about 3 days'
> time. Reviews seem to be winding down for requirements now (which is
> a good sign this release will be chilled to perfection).

There's still a couple of things that may cause bumps for iso8601 and
oslo.versionedobjects, but those are the main things. The msgpack change
is also rolling out (thanks dirk :D).

Even with all these changes though, in this universe, there's only one
absolute. Everything freezes!

https://review.openstack.org/535520 (oslo.serialization)

--
Matthew Thode

From mriedemos at gmail.com  Thu Jan 25 05:06:13 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 24 Jan 2018 23:06:13 -0600
Subject: [openstack-dev] [tc] [all] TC Report 18-04
In-Reply-To: 
References: 
Message-ID: 

On 1/23/2018 5:22 PM, Chris Dent wrote:
>> if i were to (potentially) oversimplify it, i would agree with this
>> statement:
>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-23.log.html#t2018-01-23T10:12:22
>>
>> i don't believe a PTL necessarily has to keep the whole state of the
>> project in their head (although they could). ultimately, it's up to the
>> PTL to decide how much they're willing to defer to others.
>
> I think that is probably how things should be, but I'm not sure it
> is how things always are.
>
> I expect there's a bit of nova exceptionalism built into this
> analysis and also a bit of bleed between being the PTL and doing
> anything with significant traction in nova when not the PTL: the
> big picture is a big deal and you gotta be around, a lot.
> But, as I've said many times, the report intentionally represents
> my own interpretations and biases, in hope that someone might
> respond and say a variety of things, including "WRONG!", driving
> forward our dialectic.
>
> So, thanks for responding. I owe you a cookie or something.

I work long hours because I work long hours, not because I'm a PTL. I've
always done it regardless of the project or role I'm in. I don't expect
the next nova PTL to do things the same way.

I accepted long ago that I can't keep everything that's going on in my
head. We used to have more full(er) time people working on the project
and it was easier to have subject matter experts (think sdague, danpb,
alaski, johnthetubaguy, comstud, jogo) but times change and people move
on. New people have stepped up too. I obsess over tracking things as a
tool for at least trying to know what's going on if I care to dig deep;
that's why I've always got lots of etherpads with lists, e.g. [1].

As for digging deep on stuff in a given release, it depends on what it
is, how I think I can help, and what I think its relative priority is
compared to the other stuff I can work on or help review in a
constructive way. That means I can't focus on all the big things, and I
don't try to. I hardly reviewed any of the major server-side placement
stuff this release, as an example.

As John pointed out in the TC discussion, the one thing that has bummed
me out the most over the years, and has probably gotten progressively
worse, is that I tend to feel a sense of personal responsibility for
what does, or doesn't, end up getting done in each release, and that can
weigh on me. Everyone wants everything when they want it, and they want
you to help, and also work on fixing the half-baked features we merged
two releases ago, plus docs, plus good CI coverage, plus upgrade
support, plus more features, etc. And when it's not all delivered, or we
make one step forward but find out we're two steps back on something
else, that's where it's the most challenging and I at least have to rely
on the help of others to work through that stuff. But, again, I think
I've experienced that same thing before being PTL, and in other projects
outside of OpenStack, so it might just be the nature of the industry
we're in, or my own personality, etc.

In the end it's all a good experience and rewarding, especially when
you're able to help someone out.

Finally, remember there was a talk about the pros/cons of being a PTL at
the Boston summit for anyone thinking about running next week [2].

[1] https://etherpad.openstack.org/p/nova-queens-blueprint-status
[2] https://www.openstack.org/videos/boston-2017/being-a-project-team-lead-ptl-the-good-the-bad-and-the-ugly

--
Thanks,
Matt

From ifat.afek at nokia.com  Thu Jan 25 07:09:40 2018
From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava))
Date: Thu, 25 Jan 2018 07:09:40 +0000
Subject: [openstack-dev] [requirements] [vitrage] global requirements update for python-vitrageclient
Message-ID: <6CB35FB2-7018-49A6-BF29-1440D33E3BDB@nokia.com>

Hi,

I tried to update the version of python-vitrageclient [1], but the
legacy-requirements-integration-dsvm test failed with an error that does
not seem related to my changes:

error: can't copy 'etc/glance-image-import.conf': doesn't exist or not a regular file

I noticed that two other changes [2][3] failed with the same error. Can
you please help?

Thanks,
Ifat.
[1] https://review.openstack.org/#/c/537307
[2] https://review.openstack.org/#/c/535460/
[3] https://review.openstack.org/#/c/536142/

From amoralej at redhat.com  Thu Jan 25 09:49:10 2018
From: amoralej at redhat.com (Alfredo Moralejo Alonso)
Date: Thu, 25 Jan 2018 10:49:10 +0100
Subject: [openstack-dev] [release][puppet] Tarballs missing for some puppet modules in pike
In-Reply-To: 
References: <20180119134330.GA30356@sm-xps> <20180119143457.GB30356@sm-xps> <20180119165043.GA11289@sm-xps>
Message-ID: 

Hi,

We had the same issue again in the release-post job for
https://review.openstack.org/#/c/536927/

Logs in
http://logs.openstack.org/32/323c387a2d1794e0679510657629470da8f7de92/release-post/tag-releases/b2091f1/job-output.txt.gz
show a similar issue: the script got stuck doing a "git fetch".

Best regards,

Alfredo

On Fri, Jan 19, 2018 at 6:29 PM, Alfredo Moralejo Alonso
<amoralej at redhat.com> wrote:
>
> On Fri, Jan 19, 2018 at 5:50 PM, Sean McGinnis wrote:
>
>> > Just an update - it does look like it was just the one job failure.
>> > There are multiple puppet-* releases done as part of the one job, and
>> > it appears they are processed in alphabetical order. So this last time
>> > it got as far as puppet-swift (at least further along than
>> > puppet-horizon) before it hit this timeout.
>> >
>> > I'm fairly confident once we get the job to run again it should make
>> > it through these last few releases.
>>
>> Jeremy was able to re-queue the job, and it appears everything completed
>> as expected. I've taken a quick look through the tarballs, and I think
>> everything is there. Please take a look and let me know if you see
>> anything unusual.
>
> Yeah, everything looks ok to me now.
>
> Thanks for your help,
>
> Alfredo

From scheuran at linux.vnet.ibm.com  Thu Jan 25 09:57:04 2018
From: scheuran at linux.vnet.ibm.com (Andreas Scheuring)
Date: Thu, 25 Jan 2018 10:57:04 +0100
Subject: [openstack-dev] [nova][thirdparty][CI] Nova IBM zKVM CI broken
Message-ID: <8E26306B-842B-4039-B092-056FAC05DE77@linux.vnet.ibm.com>

Hi, the Nova IBM zKVM CI is currently producing invalid builds. Please
ignore the -1 results for now. I'm working on fixing it. Will let you
know once it's working fine again.

Thanks!

---
Andreas Scheuring (andreas_s)

From kchamart at redhat.com  Thu Jan 25 10:02:56 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Thu, 25 Jan 2018 11:02:56 +0100
Subject: [openstack-dev] [nova] PTL Election Season
In-Reply-To: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
References: <14028a45-5d76-2ae4-ab06-7b4ed7746eac@gmail.com>
Message-ID: <20180125100256.skk4i7dqyon333yz@eukaryote>

On Mon, Jan 22, 2018 at 05:09:31PM -0600, Matt Riedemann wrote:

[...]

> To anyone that cares, I don't plan on running for Nova PTL again for the
> Rocky release. Queens was my fourth tour and it's definitely time for
> someone else to get the opportunity to lead here.
> I don't plan on going
> anywhere and I'll be here to help with any transition needed assuming
> someone else (or a couple of people hopefully) will run in the election.
> It's been a great experience and I thank everyone that has had to put up
> with me and my obsessive paperwork and process disorder in the meantime.

Hey Matt,

Thanks (understatement!) for all the brilliant work (and also for all the
thankless tasks). I deeply appreciate how effectively you handle
communication in public and the energy that you bring to the project. It
is a great joy interacting and working with you.

I continue to be amazed at how you can stay on top of almost everything
(at least give the illusion of it -- it reminds me of the "Shepard
Tone"[*]), _while_ getting things done.

Glad to hear you're not going anywhere. To be continued.

[*] https://en.wikipedia.org/wiki/Shepard_tone
    https://www.youtube.com/watch?v=LVWTQcZbLgY

--
/kashyap

From mbooth at redhat.com  Thu Jan 25 10:45:42 2018
From: mbooth at redhat.com (Matthew Booth)
Date: Thu, 25 Jan 2018 10:45:42 +0000
Subject: [openstack-dev] [nova] Native QEMU LUKS decryption review overview ahead of FF
In-Reply-To: 
References: <20180122142212.2fqjvquljpji6kph@lyarwood.usersys.redhat.com>
Message-ID: 

On 24 January 2018 at 22:57, Matt Riedemann wrote:

> On 1/22/2018 8:22 AM, Lee Yarwood wrote:
>
>> Hello,
>>
>> With M3 and FF rapidly approaching this week I wanted to post a brief
>> overview of the QEMU native LUKS series.
>>
>> The full series is available on the following topic, I'll go into more
>> detail on each of the changes below:
>>
>> https://review.openstack.org/#/q/topic:bp/libvirt-qemu-native-luks+status:open
>>
>> libvirt: Collocate encryptor and volume driver calls
>> https://review.openstack.org/#/c/460243/ (Missing final +2 and +W)
>>
>> This refactor of the Libvirt driver connect and disconnect volume code
>> has the added benefit of also correcting a number of bugs around the
>> attaching and detaching of os-brick encryptors. IMHO this would be
>> useful in Queens even if the rest of the series doesn't land.
>>
>> libvirt: Introduce disk encryption config classes
>> https://review.openstack.org/#/c/464008/ (Missing final +2 and +W)
>>
>> This is the most straight forward change of the series and simply
>> introduces the required config classes to wire up native LUKS decryption
>> within the domain XML of an instance. Hopefully nothing controversial.
>>
>> libvirt: QEMU native LUKS decryption for encrypted volumes
>> https://review.openstack.org/#/c/523958/ (Missing both +2s and +W)
>>
>> This change carries the bulk of the implementation, wiring up encrypted
>> volumes during their initial attachment. The commit message has a
>> detailed run down of the various upgrade and LM corner cases we attempt
>> to handle here, such as LM from a P to Q compute, detaching a P attached
>> encrypted volume after upgrading to Q etc.
>>
>> Upgrade and LM testing is enabled by the following changes:
>>
>> fixed_key: Use a single hardcoded key across devstack deployments
>> https://review.openstack.org/#/c/536343/
>>
>> compute: Introduce an encrypted volume LM test
>> https://review.openstack.org/#/c/536177/
>>
>> This is being tested by tempest-dsvm-multinode-live-migration and
>> grenade-dsvm-neutron-multinode-live-migration in the following DNM Nova
>> change, enabling volume backed LM tests:
>>
>> DNM: Test LM with encrypted volumes
>> https://review.openstack.org/#/c/536350/
>>
>> Hopefully that covers everything but please feel free to ping if you
>> would like more detail, background etc. Thanks in advance,
>>
>> Lee
>
> The patch is already approved, and I asked melwitt to write a release
> note, at which point it was noted that swap volume will not work with
> native luks encrypted volumes. That's a regression.

It's only a regression since swap_volume with encrypted volumes was fixed
in https://review.openstack.org/#/c/460243/, which landed on Monday as
part of this series. Prior to Monday, swap_volume with encrypted volumes
would result in the raw encrypted volume being presented to the guest
after the swap.

> We need to at least report a nova bug for this so we can work on some
> kind of fallback to the non-native decryption until there is a
> libvirt/qemu fix upstream and we can put version conditionals in place
> for when we can support swap volume with a native luks-encrypted volume.

In the context of the above, I don't think this is a priority as clearly
nobody is currently doing it. There's already a bug to track the problem
in libvirt, which is linked in a code comment. Admittedly that BZ is
unnecessarily private, which I noted in review, but we've reached out to
the author to ask them to open it up as there's nothing sensitive going
on in there.

In general, anything qemu can do natively makes Nova both simpler and
more robust, because we don't have to modify the host configuration. This
eliminates a whole class of error states and race conditions, because
when we kill qemu there's nothing left to clean up.

Matt

--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)

From ifat.afek at nokia.com  Thu Jan 25 11:15:16 2018
From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava))
Date: Thu, 25 Jan 2018 11:15:16 +0000
Subject: [openstack-dev] [requirements] [vitrage][glance] global requirements update for python-vitrageclient
Message-ID: <15AE2191-101C-4A5B-BEFD-2E7EE1DC6432@nokia.com>

Adding Glance team. Any idea what could be wrong?

Thanks,
Ifat.

On 25/01/2018, 9:09, "Afek, Ifat (Nokia - IL/Kfar Sava)" wrote:

    Hi,

    I tried to update the version of python-vitrageclient [1], but the
    legacy-requirements-integration-dsvm test failed with an error that
    does not seem related to my changes:

    error: can't copy 'etc/glance-image-import.conf': doesn't exist or not a regular file

    I noticed that two other changes [2][3] failed with the same error.
    Can you please help?

    Thanks,
    Ifat.
    [1] https://review.openstack.org/#/c/537307
    [2] https://review.openstack.org/#/c/535460/
    [3] https://review.openstack.org/#/c/536142/

From celebdor at gmail.com  Thu Jan 25 11:55:37 2018
From: celebdor at gmail.com (Antoni Segura Puimedon)
Date: Thu, 25 Jan 2018 12:55:37 +0100
Subject: [openstack-dev] [kuryr][libnetwork] Release kuryr-libnetwork 1.x for Queens
In-Reply-To: 
References: 
Message-ID: 

On Mon, Jan 22, 2018 at 3:46 PM, Daniel Mellado wrote:
> +1
>
> On 21/1/18 at 8:13, Irena Berezovsky wrote:
> +1
>
> On Fri, Jan 19, 2018 at 9:42 PM, Hongbin Lu wrote:
>> Hi Kuryr team,
>>
>> I think Kuryr-libnetwork is ready to move out of beta status. I propose
>> to make the first 1.x release of Kuryr-libnetwork for Queens and cut a
>> stable branch on it. What do you think about this proposal?

Agreed. Thanks a lot for bringing it up, Hongbin!

From cdent+os at anticdent.org  Thu Jan 25 12:54:01 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Thu, 25 Jan 2018 12:54:01 +0000 (GMT)
Subject: [openstack-dev] [all][api] API-SIG meeting cancelled
Message-ID: 

Today's (25th January) API-SIG meeting has been cancelled. The usual
chairs are either travelling or ill. Regular schedule will resume next
week.

--
Chris Dent (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent

From sean.mcginnis at gmx.com  Thu Jan 25 13:29:57 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Thu, 25 Jan 2018 07:29:57 -0600
Subject: [openstack-dev] [release][puppet] Tarballs missing for some puppet modules in pike
In-Reply-To: 
References: <20180119134330.GA30356@sm-xps> <20180119143457.GB30356@sm-xps> <20180119165043.GA11289@sm-xps>
Message-ID: <20180125132957.GA28977@sm-xps>

On Thu, Jan 25, 2018 at 10:49:10AM +0100, Alfredo Moralejo Alonso wrote:
> Hi,
>
> We had the same issue again in the release-post job for
> https://review.openstack.org/#/c/536927/
>
> Logs in
> http://logs.openstack.org/32/323c387a2d1794e0679510657629470da8f7de92/release-post/tag-releases/b2091f1/job-output.txt.gz
> show a similar issue: the script got stuck doing a "git fetch".
>
> Best regards,
>
> Alfredo

Thanks Alfredo. I have requested that job be run again.

Sean

From ifat.afek at nokia.com  Thu Jan 25 13:39:36 2018
From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava))
Date: Thu, 25 Jan 2018 13:39:36 +0000
Subject: [openstack-dev] [vitrage] Vitrage templates are now loaded using a new API
Message-ID: 

Hi,

A new API for Vitrage template add and template delete was added this
week. As part of this change, Vitrage templates are now stored in a
database and are no longer read from the file system.
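For example, the basic flow with the CLI looks roughly like this (see the
CLI documentation in [3] below for the exact syntax):

    # add a template from a file and check its status
    vitrage template add --path /etc/vitrage/templates/my_template.yaml
    vitrage template list

    # delete a template that is no longer needed
    vitrage template delete <template uuid>

The path above is only an example; since Vitrage now keeps its own copy
in the database, the source file can live anywhere.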
In case you are using templates, make sure to use the new API to add them
to Vitrage once you move to the latest version. More details can be found
in the spec [1] and in the Vitrage API [2] and CLI [3] documentation.

Thanks,
Ifat.

[1] https://specs.openstack.org/openstack/vitrage-specs/specs/queens/implemented/template-CRUD.html
[2] https://docs.openstack.org/vitrage/latest/contributor/vitrage-api.html
[3] https://docs.openstack.org/python-vitrageclient/latest/contributor/cli.html

From sean.mcginnis at gmx.com  Thu Jan 25 14:00:11 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Thu, 25 Jan 2018 08:00:11 -0600
Subject: [openstack-dev] [release] Release countdown for week R-4, January 27 - February 2
Message-ID: <20180125140010.GA31820@sm-xps>

Happy deadline week everyone. Here's what's coming up for next week.

Development Focus
-----------------

The R-4 week is our one deadline-free week between the lib freezes and
the Queens-3 milestone and RC. Work should be focused on fixing any
requirements update issues, critical bugs, and wrapping up feature work
to prepare for the Release Candidate deadline (for deliverables
following the with-milestones model) or final Queens releases (for
deliverables following the with-intermediary model) next Thursday, 8th
of February.

General Information
-------------------

For deliverables following the cycle-with-milestones model, we are now
past Feature Freeze. The focus should be on determining and fixing
release-critical bugs. At this stage only bugfixes should be approved
for merging in the master branches; feature work should only be
considered if explicitly granted a Feature Freeze exception by the team
PTL (after a public discussion on the mailing-list).

StringFreeze is now in effect, in order to let the I18N team do the
translation work in good conditions. The StringFreeze is currently soft
(allowing exceptions as long as they are discussed on the mailing-list
and deemed worth the effort). It will become a hard StringFreeze on 8th
of February along with the RC.

The requirements repository is also frozen, until all
cycle-with-milestones deliverables have produced an RC1 and have their
stable/queens branches. Note that deliverables that are not tagged for
release by the appropriate deadline will be reviewed to see if they are
still active enough to stay on the official project list.

Actions
-------

stable/queens branches should be created soon for all not-yet-branched
libraries. You should expect 2-3 changes to be proposed for each: a
.gitreview update, a reno update (skipped for projects not using reno),
and a tox.ini constraints URL update. Please review those as a priority
so that the branch can be functional ASAP.

For cycle-with-intermediary deliverables, release liaisons should
consider releasing their latest version, and creating stable/queens
branches from it ASAP.

For cycle-with-milestones deliverables, release liaisons should wait
until the R-3 week to create RC1 (to avoid having an RC2 created quickly
after).

Review release notes for any missing information, and start preparing
"prelude" release notes as summaries of the content of the release so
that those are merged before the first release candidate.
Along with the prelude work, it is also a good time to start planning what
highlights you want for your project team in the cycle highlights:
http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html

For release-independent deliverables, release liaisons should check that
their deliverable file includes all the existing releases, so that they
can be properly accounted for in the releases.openstack.org website.

If your team has not done so, remember to file Queens goal completion
information, as explained in:
https://governance.openstack.org/tc/goals/index.html#completing-goals

Upcoming Deadlines & Dates
--------------------------

Rocky PTL nominations: January 29 - February 1
Rocky PTL election: February 7 - 14
OpenStack Summit Vancouver CFP deadline: February 8
Rocky PTG in Dublin: Week of February 26, 2018

-- 
Sean McGinnis (smcginnis)

From osaf96 at gmail.com Thu Jan 25 14:41:17 2018
From: osaf96 at gmail.com (Osaf Ali)
Date: Thu, 25 Jan 2018 09:41:17 -0500
Subject: [openstack-dev] =?utf-8?q?=28no_subject=29?=
In-Reply-To: 
References: 
Message-ID: 

Osaf96 at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From anicolae at lenovo.com Thu Jan 25 15:01:26 2018
From: anicolae at lenovo.com (Anda Nicolae)
Date: Thu, 25 Jan 2018 15:01:26 +0000
Subject: [openstack-dev] RHOSP 10 registering nodes for the undercloud fails
Message-ID: 

Hello,

I am trying to deploy OpenStack 10 using OpenStack Platform Director 10. I am
using a bare-metal server with Red Hat 7.4, on which I have created 3 VMs: the
1st VM is the undercloud node, the 2nd VM is the overcloud controller node and
the 3rd VM is the overcloud compute node. The bare-metal server I am using is
also my KVM hypervisor for the overcloud.

The bare-metal server has 2 interfaces: an external interface used in
instackenv.json for registering the overcloud nodes and a provisioning
interface which will be used for provisioning the overcloud nodes.

My 1st question is: Is it mandatory that the KVM hypervisor for the overcloud
VMs and the undercloud be the same machine? As you can see, in my case the
undercloud VM and the KVM hypervisor are different machines. I saw a blog post
which used a topology similar to mine:
https://keithtenzer.com/2017/04/20/red-hat-openstack-platform-10-newton-installation-and-configuration-guide/

The error I am getting while running:
openstack baremetal import --json ~/.instackenv.json
is in ironic_conductor.log:
Failed to validate power driver interface for node . Error: SSH connection
cannot be established: Failed to establish SSH connection to host

I have created the instackenv.json file, which looks similar to this:

{
  "arch": "x86_64",
  "host-ip": "kvm_hypervisor_external_ip_address",
  "power_manager": "nova.virt.baremetal.virtual_power_driver.VirtualPowerManager",
  "ssh-user": "stack",
  "ssh-key": "$(cat ~/.ssh/id_rsa)",
  "nodes": [
    {
      "mac": [ "" ],
      "name": "overcloud-controller",
      "capabilities": "profile:control",
      "cpu": "4",
      "memory": "6000",
      "disk": "50",
      "arch": "x86_64",
      "pm_user": "stack",
      "pm_addr": "kvm_hypervisor_external_ip_address",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh"
    },
    {
      "mac": [ "" ],
      "name": "overcloud-compute",
      "capabilities": "profile:compute",
      "cpu": "4",
      "memory": "6000",
      "disk": "50",
      "arch": "x86_64",
      "pm_user": "stack",
      "pm_addr": "kvm_hypervisor_external_ip_address",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh"
    }
  ]
}
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
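[A side note on hand-written instackenv.json files like the one above:
Python's built-in JSON checker is a quick way to catch syntax mistakes (a
missing brace, a stray comma) before running the import:

    python -m json.tool instackenv.json

It prints the parse error and exits non-zero if the file is not valid JSON.]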
URL: 

From doug at doughellmann.com Thu Jan 25 15:15:25 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 25 Jan 2018 10:15:25 -0500
Subject: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID
In-Reply-To: <5c56967b-223f-eee1-9707-2bdfce8ac7c8@switch.ch>
References: <054e9a0c-1c99-51dd-9e66-8adb3c910802@switch.ch> <1515696336-sup-7054@lrrr.local> <7142b655-abca-ed58-bc4c-07fa51cb401c@switch.ch> <1515771070-sup-7997@lrrr.local> <07ee3262-8aec-a4c4-f981-bc448afab0ba@switch.ch> <96f2a7d8-ea7c-5530-7975-62b477982f03@switch.ch> <1516293565-sup-9123@lrrr.local> <1516295114-sup-7111@lrrr.local> <1516630943-sup-4108@lrrr.local> <1516659378-sup-8232@lrrr.local> <7b4c5530-55e9-2590-1b67-74b5ff938ef9@switch.ch> <1516716895-sup-6461@lrrr.local> <5c56967b-223f-eee1-9707-2bdfce8ac7c8@switch.ch>
Message-ID: <1516893061-sup-6021@lrrr.local>

Excerpts from Saverio Proto's message of 2018-01-24 22:18:39 +0100:
> > 3.34.0 is a queens series release, which makes it more likely that more
> > other dependencies would need to be updated. Even backporting the
> > changes to the Ocata branch and releasing it from there would require
> > updating several other libraries.
> >
> 
> That is what I was fearing. Consider that our upgrade schedule is now to
> have Pike by the end of 2018. Unless we try to skip a release.

You should seriously consider upgrading to at least Queens. Pike will be
out of community support by Sept 2018 (https://releases.openstack.org).

Some of the community deployment tools have support for "fast-forward"
upgrades (allowing you to install several versions in series and only
launch the cloud again when the final version is installed). You can
check with the team that manages your deployment tool to see if it
supports this capability.

> > Are you using packages from Canonical, or are you building them
> > yourself?
> 
> I am using the packages from Canonical, but I am familiar patching those
> packages and merge my changes upstream back to Canonical.
> If the problem is just dependencies with the ".deb" packages, I can
> handle that. But if the problem is really python code not working
> together across multiple components, then I have little hope to fix this
> in Newton.

I don't know Canonical's support policies, but it may be possible to
backport a patch to oslo.log to provide the missing information.

> Thanks for the support, if I manage to make some progress on this I will
> send an update on this thread on the mailing list.

Good luck, and please do let me know how it goes.

Doug

From slawek at kaplonski.pl Thu Jan 25 15:53:42 2018
From: slawek at kaplonski.pl (=?utf-8?B?U8WCYXdvbWlyIEthcMWCb8WEc2tp?=)
Date: Thu, 25 Jan 2018 16:53:42 +0100
Subject: [openstack-dev] Issue with eventlet monkey patching
Message-ID: <31C24762-4286-4957-92C5-9835649F9B67@kaplonski.pl>

Hi,

Recently we found errors in Neutron when starting our agents, which use the
eventlet.monkey_patch() method. The bug is described in [1].

I heard on IRC that it's not only related to Neutron, so here is what we
found about it.
It looks like this issue happens on Ubuntu with python2.7 2.7.12-1ubuntu0~16.04.3
and eventlet < 0.22.0 (in OpenStack requirements it is set to 0.20.0).
The issue does not occur with python2.7.12-1ubuntu0~16.04.2 and eventlet 0.20.0.

Something similar was already reported for monotonic in [2].
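(As a rough illustration -- the following one-liner is not taken from the bug
report, but on an affected python2.7/eventlet combination something this
simple fails or hangs instead of printing a library name like 'libc.so.6':

    python -c "import eventlet; eventlet.monkey_patch(); import ctypes.util; print(ctypes.util.find_library('c'))"

)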
From one of the comments there, we found that the problem can be caused because:
"ctypes.util.find_library is now using subprocess.Popen, instead of os.popen
(python/cpython at eb063011), and eventlet monkey-patches subprocess.Popen but
not os.popen."

It looks like the eventlet patch [3] fixes/works around this issue.
I pushed a similar patch to Neutron [4] and it looks like our issue is solved
for now.

I hope this info will be helpful for you :)

[1] https://bugs.launchpad.net/neutron/+bug/1745013
[2] https://www.bountysource.com/issues/43892421-new-monotonic-broken-on-docker
[3] https://github.com/eventlet/eventlet/commit/b756447bab51046dfc6f1e0e299cc997ab343701
[4] https://review.openstack.org/#/c/537863

— 
Best regards
Slawek Kaplonski
slawek at kaplonski.pl

From scheuran at linux.vnet.ibm.com Thu Jan 25 16:22:16 2018
From: scheuran at linux.vnet.ibm.com (Andreas Scheuring)
Date: Thu, 25 Jan 2018 17:22:16 +0100
Subject: [openstack-dev] [nova][thirdparty][CI] Nova IBM zKVM CI broken
In-Reply-To: <8E26306B-842B-4039-B092-056FAC05DE77@linux.vnet.ibm.com>
References: <8E26306B-842B-4039-B092-056FAC05DE77@linux.vnet.ibm.com>
Message-ID: <8668AA72-4E86-4908-9C3C-E2AD44FCA9EF@linux.vnet.ibm.com>

CI should be back online now, tests are running again… Let's see if the tests
succeed.

The issue was that, for some reason, a new pip package 'python-pcre' got
installed, but its build failed because the build dependency 'libpcre3-dev'
was not satisfied. Now the package 'libpcre3-dev' is part of the daily
nodepool image build.

---
Andreas Scheuring (andreas_s)

On 25. Jan 2018, at 10:57, Andreas Scheuring wrote:

Hi, the Nova IBM zKVM CI is currently producing invalid builds. Please ignore
the -1 results for now. I'm working on fixing it. Will let you know once it's
working fine again. Thanks!

---
Andreas Scheuring (andreas_s)

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From manjeet.s.bhatia at intel.com Thu Jan 25 18:11:24 2018
From: manjeet.s.bhatia at intel.com (Bhatia, Manjeet S)
Date: Thu, 25 Jan 2018 18:11:24 +0000
Subject: [openstack-dev] [neutron][l3][flavors] FFE request for patch no 523257 and 532993
Message-ID: 

Hello all!

I'd like to request an FFE for patch 523257 [1], which adds new resources and
events to handle operations for routers when the L3 flavors framework is used.
The neutron-lib part is already merged [lib]; thanks to Boden and Miguel for
quick reviews on that.

The second patch 532993 [2] adds the missing notifications for floatingip
update and delete events, without which L3 flavor driver backends are not
able to perform update and delete operations on floatingips correctly. These
two patches are needed for the L3 flavors driver in networking-odl [nodll3].

[1]. https://review.openstack.org/#/c/523257
[2]. https://review.openstack.org/#/c/532993
[lib] https://review.openstack.org/#/c/535512/
[nodll3] https://review.openstack.org/#/c/504182/

Thanks and regards!
Manjeet Singh Bhatia
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From manjeet.s.bhatia at intel.com Thu Jan 25 18:17:13 2018
From: manjeet.s.bhatia at intel.com (Bhatia, Manjeet S)
Date: Thu, 25 Jan 2018 18:17:13 +0000
Subject: [openstack-dev] [neutron][l3][flavors][floatingip] FFE request for patch no 523257 and 532993
Message-ID: 

Hello all!

I'd like to request an FFE for patch 523257 [1], which adds new resources and
events to handle operations for routers when the L3 flavors framework is used.
The neutron-lib part is already merged [lib]; thanks to Boden and Miguel for
quick reviews on that.

The second patch 532993 [2] adds the missing notifications for floatingip
update and delete events, without which L3 flavor driver backends are not
able to perform update and delete operations on floatingips correctly. These
two patches are needed for the L3 flavors driver in networking-odl [nodll3].

[1]. https://review.openstack.org/#/c/523257
[2]. https://review.openstack.org/#/c/532993
[lib] https://review.openstack.org/#/c/535512/
[nodll3] https://review.openstack.org/#/c/504182/

Sorry for the second email on this; I forgot to add the [openstack-dev]
subject tag to the last one.

Thanks and regards!
Manjeet Singh Bhatia
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emilien at redhat.com Thu Jan 25 18:59:44 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 25 Jan 2018 10:59:44 -0800
Subject: [openstack-dev] [tripleo] branching stable/queens for tripleoclient
Message-ID: 

Hi,

We're about to release Queens milestone 3.

https://review.openstack.org/537752

Which means we'll branch tripleoclient stable/queens this week.
Since we don't follow stable policy anymore, we can in theory accept any
backport, but I would ask our team to backport only bugfixes and things
related to upgrades at that point (unless an FFE was granted).

Also, we'll have to create queens CI jobs in RDO CI & upstream zuul. No big
deal but just FYI.

Any comment / feedback is welcome,
Thanks!
-- 
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From openstack at nemebean.com Thu Jan 25 20:55:35 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Thu, 25 Jan 2018 14:55:35 -0600
Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax
In-Reply-To: <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com>
References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com>
Message-ID: <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com>

On 01/24/2018 02:31 PM, Monty Taylor wrote:
> On 01/24/2018 02:25 PM, David Shrewsbury wrote:
>> This is a (the?) killer feature.
>>
>>
>> On Wed, Jan 24, 2018 at 11:33 AM, James E. Blair > > wrote:
>>
>>     Hi,
>>
>>     We recently introduced a new URL-based syntax for Depends-On: footers
>>     in commit messages:
>>
>>        Depends-On: https://review.openstack.org/535851
>>     
>>
>>     The old syntax will continue to work for a while, but please begin
>> using
>>     the new syntax on new changes.
>>
>>     Why are we changing this?  Zuul has grown the ability to interact
>> with
>>     multiple backend systems (Gerrit, GitHub, and plain Git so far),
>> and we
>>     have extended the cross-repo-dependency feature to support multiple
>>     systems.  But Gerrit is the only one that uses the change-id syntax.
>>     URLs, on the other hand, are universal.
>> >>     That means you can write, as in https://review.openstack.org/535541 >>     , a >>     commit message such as: >> >>        Depends-On: >>     https://github.com/ikalnytskyi/sphinxcontrib-openapi/pull/17 >>     >> >>     Or in a Github pull request like >>     https://github.com/ansible/ansible/pull/20974 >>     , you can write: >> >>        Depends-On: https://review.openstack.org/536159 >>     >> >>     But we're getting a bit ahead of ourselves here -- we're just getting >>     started with Gerrit <-> GitHub dependencies and we haven't worked >>     everything out yet.  While you can Depends-On any GitHub URL, you >> can't >>     add any project to required-projects yet, and we need to establish a >>     process to actually report on GitHub projects.  But cool things are >>     coming. >> >>     We will continue to support the Gerrit-specific syntax for a while, >>     probably for several months at least, so you don't need to update the >>     commit messages of changes that have accumulated precious +2s. >> But do >>     please start using the new syntax now, so that we can age the old >> syntax >>     out. >> >>     There are a few differences in using the new syntax: >> >>     * Rather than copying the change-id from a commit message, you'll >> need >>        to get the URL from Gerrit.  That means the dependent change >> already >>        needs to be uploaded.  In some complex situations, this may >> mean that >>        you need to amend an existing commit message to add in the URL >> later. >> >>        If you're uploading both changes, Gerrit will output the URL >> when you >>        run git-review, and you can copy it from there.  If you are >>     looking at >>        an existing change in Gerrit, you can copy the URL from the >> permalink >>        at the top left of the page.  Where it says "Change 535855 - Needs >>        ..." the change number itself is the permalink of the change. >> >> >> >> Is the permalink the only valid format here for gerrit? Or does the fully >> expanded link also work. E.g., >> >>     Depends-On: https://review.openstack.org/536540 >> >> versus >> >>     Depends-On: https://review.openstack.org/#/c/536540/ > > The fully expanded one works too. See: > >   https://review.openstack.org/#/c/520812/ > > for an example of a patch with expanded links. I'm curious what this means as far as best practices for inter-patch references. In the past my understanding was the the change id was preferred, both because if gerrit changed its URL format the change id links would be updated appropriately, and also because change ids can be looked up offline in git commit messages. Would that still be the case for everything except depends-on now? From mriedemos at gmail.com Thu Jan 25 21:12:41 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 25 Jan 2018 15:12:41 -0600 Subject: [openstack-dev] Swift-backed volume backups are still breaking the gate Message-ID: <82afc399-ce6a-bd9e-696d-29c16300475d@gmail.com> We thought things were fixed with [1] but it turns out that swiftclient logs requests and responses at DEBUG level, so we're still switching thread context during a backup write and failing the backup operation, causing copious amounts of pain in the gate and piling up the rechecks. I've got a workaround here [2] which will hopefully be good enough to stabilize things for awhile, but there is probably not much point in rechecking a lot of patches, at least ones that run through the integrated gate, until that is merged. 
[1] https://review.openstack.org/#/c/537437/
[2] https://review.openstack.org/#/c/538027/

-- 

Thanks,

Matt

From lbragstad at gmail.com Thu Jan 25 21:14:31 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Thu, 25 Jan 2018 15:14:31 -0600
Subject: [openstack-dev] [keystone] FFE for unified limits
Message-ID: <39da1489-1d10-7eef-36b7-36df904d52bc@gmail.com>

Hey all,

The work for unified limits [0] has been up for a while, reviewers are
happy with it being experimental, and it is slowly making its way
through the gate. I propose we consider a feature freeze exception given
the state of the gate and the frequency of rechecks/failures.

Thoughts, comments, or concerns?

[0]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/unified-limits
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From lbragstad at gmail.com Thu Jan 25 21:15:00 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Thu, 25 Jan 2018 15:15:00 -0600
Subject: [openstack-dev] [keystone] FFE for application credentials
Message-ID: <0595c36a-5ae8-e0ec-f3f8-1e56fa5777f6@gmail.com>

Hey all,

The work for application credentials [0] has been up for a while,
reviewers are happy with it, and it is slowly making its way through
the gate. I propose we consider a feature freeze exception given the
state of the gate and the frequency of rechecks/failures.

Thoughts, comments, or concerns?

[0]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/application-credentials
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From lbragstad at gmail.com Thu Jan 25 21:15:27 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Thu, 25 Jan 2018 15:15:27 -0600
Subject: [openstack-dev] [keystone] FFE for application credentials
Message-ID: <74d128b9-a985-02d0-f002-c86ce775cea1@gmail.com>

Hey all,

The work for system assignments and system scope [0] has been up for a
while, reviewers are happy with it, and it is slowly making its way
through the gate. I propose we consider a feature freeze exception given
the state of the gate and the frequency of rechecks/failures.

Thoughts, comments, or concerns?

[0]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/system-scope
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From lbragstad at gmail.com Thu Jan 25 21:17:11 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Thu, 25 Jan 2018 15:17:11 -0600
Subject: [openstack-dev] [keystone] FFE for application credentials
In-Reply-To: <74d128b9-a985-02d0-f002-c86ce775cea1@gmail.com>
References: <74d128b9-a985-02d0-f002-c86ce775cea1@gmail.com>
Message-ID: 

The subject of this message should have been "FFE for system scope"... not
application credentials. Apologies for the confusion.

On 01/25/2018 03:15 PM, Lance Bragstad wrote:
> Hey all,
>
> The work for system assignments and system scope [0] has been up for a
> while, reviewers are happy with it, and it is slowly making its way
> through the gate. I propose we consider a feature freeze exception given
> the state of the gate and the frequency of rechecks/failures.
> > Thoughts, comments, or concerns? > > [0] > https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/system-scope > > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From colleen at gazlene.net Thu Jan 25 21:17:43 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 25 Jan 2018 22:17:43 +0100 Subject: [openstack-dev] [keystone] FFE for unified limits In-Reply-To: <39da1489-1d10-7eef-36b7-36df904d52bc@gmail.com> References: <39da1489-1d10-7eef-36b7-36df904d52bc@gmail.com> Message-ID: +1 On Thu, Jan 25, 2018 at 10:14 PM, Lance Bragstad wrote: > Hey all, > > The work for unified limits [0] has been up for a while, reviewers are > happy with it being experimental, and it is slowly making it's way > through the gate. I propose we consider a feature freeze exception given > the state of the gate and the frequency of rechecks/failures. > > Thoughts, comments, or concerns? > > [0] > https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/unified-limits > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From colleen at gazlene.net Thu Jan 25 21:23:53 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 25 Jan 2018 22:23:53 +0100 Subject: [openstack-dev] [keystone] FFE for application credentials In-Reply-To: <0595c36a-5ae8-e0ec-f3f8-1e56fa5777f6@gmail.com> References: <0595c36a-5ae8-e0ec-f3f8-1e56fa5777f6@gmail.com> Message-ID: +1 On Thu, Jan 25, 2018 at 10:15 PM, Lance Bragstad wrote: > Hey all, > > The work for application credentials [0] has been up for a while, > reviewers are happy with it, and it is slowly making it's way through > the gate. I propose we consider a feature freeze exception given the > state of the gate and the frequency of rechecks/failures. > > Thoughts, comments, or concerns? > > [0] > https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/application-credentials > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From colleen at gazlene.net Thu Jan 25 21:24:07 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 25 Jan 2018 22:24:07 +0100 Subject: [openstack-dev] [keystone] FFE for application credentials In-Reply-To: <74d128b9-a985-02d0-f002-c86ce775cea1@gmail.com> References: <74d128b9-a985-02d0-f002-c86ce775cea1@gmail.com> Message-ID: +1 On Thu, Jan 25, 2018 at 10:15 PM, Lance Bragstad wrote: > Hey all, > > The work for system assignments and system scope [0] has been up for a > while, reviewers are happy with it, and it is slowly making it's way > through the gate. I propose we consider a feature freeze exception given > the state of the gate and the frequency of rechecks/failures. > > Thoughts, comments, or concerns? 
> > [0] > https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/system-scope > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mgagne at calavera.ca Thu Jan 25 21:44:35 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Thu, 25 Jan 2018 16:44:35 -0500 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> Message-ID: On Thu, Jan 25, 2018 at 3:55 PM, Ben Nemec wrote: > > > I'm curious what this means as far as best practices for inter-patch > references. In the past my understanding was the the change id was > preferred, both because if gerrit changed its URL format the change id links > would be updated appropriately, and also because change ids can be looked up > offline in git commit messages. Would that still be the case for everything > except depends-on now? > That's my concern too. Also AFAIK, Change-Id is branch agnostic. This means you can more easily cherry-pick between branches without having to change the URL to match the new branch for your dependencies. -- Mathieu From hrybacki at redhat.com Thu Jan 25 21:49:52 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Thu, 25 Jan 2018 16:49:52 -0500 Subject: [openstack-dev] [keystone] FFE for unified limits In-Reply-To: References: <39da1489-1d10-7eef-36b7-36df904d52bc@gmail.com> Message-ID: On Thu, Jan 25, 2018 at 4:17 PM, Colleen Murphy wrote: > +1 > +1 > On Thu, Jan 25, 2018 at 10:14 PM, Lance Bragstad wrote: >> Hey all, >> >> The work for unified limits [0] has been up for a while, reviewers are >> happy with it being experimental, and it is slowly making it's way >> through the gate. I propose we consider a feature freeze exception given >> the state of the gate and the frequency of rechecks/failures. >> >> Thoughts, comments, or concerns? 
>> >> [0] >> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/unified-limits >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From hrybacki at redhat.com Thu Jan 25 21:50:16 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Thu, 25 Jan 2018 16:50:16 -0500 Subject: [openstack-dev] [keystone] FFE for application credentials In-Reply-To: References: <74d128b9-a985-02d0-f002-c86ce775cea1@gmail.com> Message-ID: On Thu, Jan 25, 2018 at 4:24 PM, Colleen Murphy wrote: > +1 > +1 > On Thu, Jan 25, 2018 at 10:15 PM, Lance Bragstad wrote: >> Hey all, >> >> The work for system assignments and system scope [0] has been up for a >> while, reviewers are happy with it, and it is slowly making it's way >> through the gate. I propose we consider a feature freeze exception given >> the state of the gate and the frequency of rechecks/failures. >> >> Thoughts, comments, or concerns? >> >> [0] >> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/system-scope >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sombrafam at gmail.com Thu Jan 25 22:02:17 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Thu, 25 Jan 2018 20:02:17 -0200 Subject: [openstack-dev] [cinder] Deprecation of Kaminario and Pure automatic calculation for max_over_subscription_ratio Message-ID: Folks, In the Queens release, Cinder has implemented an internal mechanism to automatically calculate the max_over_subscription_ratio[1]. As Kaminario and Pure drivers already have a config option and are doing that internally, we kindly recommend that the driver maintainers deprecate those options in favor of the more generic option (max_over_subscription_ratio=auto) in order to maintain a better more consistent behavior among drivers. You can find me (erlon) at #openstack-cinder if you have any further questions. Sincerely, Erlon [1] https://review.openstack.org/#/c/534854/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Thu Jan 25 22:47:51 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 25 Jan 2018 17:47:51 -0500 Subject: [openstack-dev] Issue with eventlet monkey patching In-Reply-To: <31C24762-4286-4957-92C5-9835649F9B67@kaplonski.pl> References: <31C24762-4286-4957-92C5-9835649F9B67@kaplonski.pl> Message-ID: Thanks Slawek, this indeed looks like the problem we are having with Glance. 
Searchlight, too.

On Thu, Jan 25, 2018 at 10:53 AM, Sławomir Kapłoński wrote:
> Hi,
>
> Recently we found errors in Neutron when starting our agents, which use the
> eventlet.monkey_patch() method. The bug is described in [1].
>
> I heard on IRC that it's not only related to Neutron, so here is what we
> found about it.
> It looks like this issue happens on Ubuntu with python2.7 2.7.12-1ubuntu0~16.04.3
> and eventlet < 0.22.0 (in OpenStack requirements it is set to 0.20.0).
> The issue does not occur with python2.7.12-1ubuntu0~16.04.2 and eventlet 0.20.0.
>
> Something similar was already reported for monotonic in [2]. From one of the
> comments there, we found that the problem can be caused because:
> "ctypes.util.find_library is now using subprocess.Popen, instead of os.popen
> (python/cpython at eb063011), and eventlet monkey-patches subprocess.Popen
> but not os.popen."
>
> It looks like the eventlet patch [3] fixes/works around this issue.
> I pushed a similar patch to Neutron [4] and it looks like our issue is
> solved for now.
>
> I hope this info will be helpful for you :)
>
> [1] https://bugs.launchpad.net/neutron/+bug/1745013
> [2] https://www.bountysource.com/issues/43892421-new-monotonic-broken-on-docker
> [3] https://github.com/eventlet/eventlet/commit/b756447bab51046dfc6f1e0e299cc997ab343701
> [4] https://review.openstack.org/#/c/537863
>
> — 
> Best regards
> Slawek Kaplonski
> slawek at kaplonski.pl
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From emilien at redhat.com Thu Jan 25 22:49:12 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 25 Jan 2018 14:49:12 -0800
Subject: [openstack-dev] [tripleo] branching stable/queens for tripleoclient
In-Reply-To: 
References: 
Message-ID: 

On Thu, Jan 25, 2018 at 10:59 AM, Emilien Macchi wrote:

> Hi,
>
> We're about to release Queens milestone 3.
>
> https://review.openstack.org/537752
>
> Which means we'll branch tripleoclient stable/queens this week.
> Since we don't follow stable policy anymore, we can in theory accept any
> backport, but I would ask our team to backport only bugfixes and things
> related to upgrades at that point (unless an FFE was granted).
>
> Also, we'll have to create queens CI jobs in RDO CI & upstream zuul. No
> big deal but just FYI.
>

I think https://review.openstack.org/#/c/538053/ and
https://review.rdoproject.org/r/#/c/11598/ are the main things to do in the
short term. We'll also have periodic jobs but that can wait.

> Any comment / feedback is welcome,
> Thanks!
> --
> Emilien Macchi
>

-- 
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mriedemos at gmail.com Thu Jan 25 23:57:30 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Thu, 25 Jan 2018 17:57:30 -0600
Subject: [openstack-dev] [nova] q-3 tag and gate status
Message-ID: 

At this point in the day, I'm going to push the q-3 tag. We have quite a few
things approved but not yet merged (some for over 20 hours in the gate now).
The latest list of stuff that's approved is in the etherpad here:

https://etherpad.openstack.org/p/nova-queens-blueprint-status

I don't expect FFEs for the rest of it.
We've got a lot of code to recheck and grind through the gate in the next
several days, and we're only two weeks from RC1, so I don't want to spend
time reviewing feature code for FFEs and instead focus on bug triage /
fixes, docs, testing, etc., because we're bound to have some stuff crop up
late as a result of what's being merged this week.

I would like to somehow focus on prioritizing reviews early in Rocky on the
series of blueprints that have been carried for multiple releases now so
that we can flush those through before taking on more work, but that's a
discussion for the PTG.

-- 

Thanks,

Matt

From corvus at inaugust.com Fri Jan 26 00:08:57 2018
From: corvus at inaugust.com (James E. Blair)
Date: Thu, 25 Jan 2018 16:08:57 -0800
Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax
In-Reply-To: ("Mathieu =?iso-8859-1?Q?Gagn=E9=22's?= message of "Thu, 25 Jan 2018 16:44:35 -0500")
References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com>
Message-ID: <87zi51v5uu.fsf@meyer.lemoncheese.net>

Mathieu Gagné writes:

> On Thu, Jan 25, 2018 at 3:55 PM, Ben Nemec wrote:
>>
>>
>> I'm curious what this means as far as best practices for inter-patch
>> references. In the past my understanding was the the change id was
>> preferred, both because if gerrit changed its URL format the change id links
>> would be updated appropriately, and also because change ids can be looked up
>> offline in git commit messages. Would that still be the case for everything
>> except depends-on now?

Yes, that's a down-side of URLs.  I personally think it's fine to keep
using change-ids for anything other than Depends-On, though in many of
those cases the commit sha may work as well.

> That's my concern too. Also AFAIK, Change-Id is branch agnostic. This
> means you can more easily cherry-pick between branches without having
> to change the URL to match the new branch for your dependencies.

Yes, there is a positive and negative aspect to this issue.

On the one hand, for those times where it was convenient to say "depend
on this change in all its forms across all branches of all projects",
one must now add a URL for each.

On the other hand, with URLs, it is now possible to indicate that a
change specifically depends on another change targeted to one branch, or
targeted to several branches.  Simply list each URL (or don't) as
appropriate.  That wasn't possible before -- it was all or none.

-Jim

From emilien at redhat.com Fri Jan 26 00:22:56 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 25 Jan 2018 16:22:56 -0800
Subject: [openstack-dev] [tripleo] CI'ing ceph-ansible against TripleO scenarios
Message-ID: 

Is there any plans to run TripleO CI jobs in ceph-ansible?
I know the project is on github but thanks to zuulv3 we can now easily
configure ceph-ansible to run Ci jobs in OpenStack Infra.

It would be really great to investigate that in the near future so we avoid
eventual regressions.
Sebastien, Giulio, John, thoughts?
-- 
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From clay.gerrard at gmail.com Fri Jan 26 00:41:51 2018
From: clay.gerrard at gmail.com (Clay Gerrard)
Date: Thu, 25 Jan 2018 16:41:51 -0800
Subject: [openstack-dev] Swift-backed volume backups are still breaking the gate
In-Reply-To: <82afc399-ce6a-bd9e-696d-29c16300475d@gmail.com>
References: <82afc399-ce6a-bd9e-696d-29c16300475d@gmail.com>
Message-ID: 

Does it help that swift also had to fix this?
https://github.com/openstack/swift/blob/6d2503652b5f666275113cf9f3e185a2d9b3a121/swift/common/utils.py#L4415 The interesting/useful bit is where we replace our primary loghandlers createLock method to use one of these [Green|OS]-thread-safe PipeMutex lock things... -Clay On Thu, Jan 25, 2018 at 1:12 PM, Matt Riedemann wrote: > We thought things were fixed with [1] but it turns out that swiftclient > logs requests and responses at DEBUG level, so we're still switching thread > context during a backup write and failing the backup operation, causing > copious amounts of pain in the gate and piling up the rechecks. > > I've got a workaround here [2] which will hopefully be good enough to > stabilize things for awhile, but there is probably not much point in > rechecking a lot of patches, at least ones that run through the integrated > gate, until that is merged. > > [1] https://review.openstack.org/#/c/537437/ > [2] https://review.openstack.org/#/c/538027/ > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Jan 26 00:44:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 25 Jan 2018 19:44:34 -0500 Subject: [openstack-dev] [release][ptl][all] please approve the reno updates for queens Message-ID: <1516927374-sup-9825@lrrr.local> As part of creating branches for projects, the job proposes updates to add a "queens" page to the release notes build. The script prepares a best-effort version of the update, but local variances in the repositories means that doesn't always work. Please take the patches over and fix them, then land them as quickly as is reasonable to ensure that the release notes for your project are published correctly. https://review.openstack.org/#/q/topic:reno-queens Thanks! Doug From pabelanger at redhat.com Fri Jan 26 00:49:31 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 25 Jan 2018 19:49:31 -0500 Subject: [openstack-dev] [tripleo] CI'ing ceph-ansible against TripleO scenarios In-Reply-To: References: Message-ID: <20180126004931.GA8048@localhost.localdomain> On Thu, Jan 25, 2018 at 04:22:56PM -0800, Emilien Macchi wrote: > Is there any plans to run TripleO CI jobs in ceph-ansible? > I know the project is on github but thanks to zuulv3 we can now easily > configure ceph-ansible to run Ci jobs in OpenStack Infra. > > It would be really great to investigate that in the near future so we avoid > eventual regressions. > Sebastien, Giulio, John, thoughts? > -- > Emilien Macchi Just a note, we haven't actually agree to enable CI for github projects just yet. While it is something zuul can do now, I believe we still need to decide when / how to enable it. We are doing some initial testing with ansible/ansible however. -Paul From mriedemos at gmail.com Fri Jan 26 03:01:13 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 25 Jan 2018 21:01:13 -0600 Subject: [openstack-dev] Swift-backed volume backups are still breaking the gate In-Reply-To: References: <82afc399-ce6a-bd9e-696d-29c16300475d@gmail.com> Message-ID: <4565fbef-9ac3-e895-6998-34dce3f408ed@gmail.com> On 1/25/2018 6:41 PM, Clay Gerrard wrote: > Does it help that swift also had to fix this? 
> > https://github.com/openstack/swift/blob/6d2503652b5f666275113cf9f3e185a2d9b3a121/swift/common/utils.py#L4415 > > The interesting/useful bit is where we replace our primary loghandlers > createLock method to use one of these [Green|OS]-thread-safe PipeMutex > lock things... Is ThreadSafeSysLogHandler something that could live in oslo.log so we don't have to whack this mole everywhere at random times? -- Thanks, Matt From mgagne at calavera.ca Fri Jan 26 03:33:44 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Thu, 25 Jan 2018 22:33:44 -0500 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: <87zi51v5uu.fsf@meyer.lemoncheese.net> References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> Message-ID: On Thu, Jan 25, 2018 at 7:08 PM, James E. Blair wrote: > Mathieu Gagné writes: > >> On Thu, Jan 25, 2018 at 3:55 PM, Ben Nemec wrote: >>> >>> >>> I'm curious what this means as far as best practices for inter-patch >>> references. In the past my understanding was the the change id was >>> preferred, both because if gerrit changed its URL format the change id links >>> would be updated appropriately, and also because change ids can be looked up >>> offline in git commit messages. Would that still be the case for everything >>> except depends-on now? > > Yes, that's a down-side of URLs. I personally think it's fine to keep > using change-ids for anything other than Depends-On, though in many of > those cases the commit sha may work as well. > >> That's my concern too. Also AFAIK, Change-Id is branch agnostic. This >> means you can more easily cherry-pick between branches without having >> to change the URL to match the new branch for your dependencies. > > Yes, there is a positive and negative aspect to this issue. > > On the one hand, for those times where it was convenient to say "depend > on this change in all its forms across all branches of all projects", > one must now add a URL for each. > > On the other hand, with URLs, it is now possible to indicate that a > change specifically depends on another change targeted to one branch, or > targeted to several branches. Simply list each URL (or don't) as > appropriate. That wasn't possible before -- it wall all or none. > > -Jim > > The old syntax will continue to work for a while I still believe Change-Id should be supported and not removed as suggested. The use of URL assumes you have access to Gerrit to fetch more information about the change. This might not always be true or possible, especially when Gerrit is kept private and only the git repository is replicated publicly and you which to cherry-pick something (and its dependencies) from it. -- Mathieu From clay.gerrard at gmail.com Fri Jan 26 03:36:19 2018 From: clay.gerrard at gmail.com (Clay Gerrard) Date: Thu, 25 Jan 2018 19:36:19 -0800 Subject: [openstack-dev] Swift-backed volume backups are still breaking the gate In-Reply-To: <4565fbef-9ac3-e895-6998-34dce3f408ed@gmail.com> References: <82afc399-ce6a-bd9e-696d-29c16300475d@gmail.com> <4565fbef-9ac3-e895-6998-34dce3f408ed@gmail.com> Message-ID: On Thu, Jan 25, 2018 at 7:01 PM, Matt Riedemann wrote: > Is ThreadSafeSysLogHandler something that could live in oslo.log so we > don't have to whack this mole everywhere at random times? That might make sense, unless we can get eventlet's monkey patching of the logging module to do something similar... 
FWIW, Swift doesn't use oslo.log and has it's own crufty logging issues: https://bugs.launchpad.net/swift/+bug/1380815 -Clay -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Fri Jan 26 04:59:42 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 25 Jan 2018 22:59:42 -0600 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> Message-ID: For my part, I tried it [1] and it doesn't seem to have worked. (The functional test failure is what the dep is supposed to have fixed.) Did I do something wrong? [1] https://review.openstack.org/#/c/533821/12 On 01/25/2018 09:33 PM, Mathieu Gagné wrote: > On Thu, Jan 25, 2018 at 7:08 PM, James E. Blair wrote: >> Mathieu Gagné writes: >> >>> On Thu, Jan 25, 2018 at 3:55 PM, Ben Nemec wrote: >>>> >>>> >>>> I'm curious what this means as far as best practices for inter-patch >>>> references. In the past my understanding was the the change id was >>>> preferred, both because if gerrit changed its URL format the change id links >>>> would be updated appropriately, and also because change ids can be looked up >>>> offline in git commit messages. Would that still be the case for everything >>>> except depends-on now? >> >> Yes, that's a down-side of URLs. I personally think it's fine to keep >> using change-ids for anything other than Depends-On, though in many of >> those cases the commit sha may work as well. >> >>> That's my concern too. Also AFAIK, Change-Id is branch agnostic. This >>> means you can more easily cherry-pick between branches without having >>> to change the URL to match the new branch for your dependencies. >> >> Yes, there is a positive and negative aspect to this issue. >> >> On the one hand, for those times where it was convenient to say "depend >> on this change in all its forms across all branches of all projects", >> one must now add a URL for each. >> >> On the other hand, with URLs, it is now possible to indicate that a >> change specifically depends on another change targeted to one branch, or >> targeted to several branches. Simply list each URL (or don't) as >> appropriate. That wasn't possible before -- it wall all or none. >> >> -Jim >> > >> The old syntax will continue to work for a while > > I still believe Change-Id should be supported and not removed as > suggested. The use of URL assumes you have access to Gerrit to fetch > more information about the change. > This might not always be true or possible, especially when Gerrit is > kept private and only the git repository is replicated publicly and > you which to cherry-pick something (and its dependencies) from it. 
> > -- > Mathieu > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From prometheanfire at gentoo.org Fri Jan 26 06:12:38 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 26 Jan 2018 00:12:38 -0600 Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared In-Reply-To: <20180125043227.v3mfb5u2ndeennvu@mthode.org> References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org> <20180124072947.u4dv674dv6bcczb6@gentoo.org> <20180125043227.v3mfb5u2ndeennvu@mthode.org> Message-ID: <20180126061238.dayud3ayid5fibzd@gentoo.org> On 18-01-24 22:32:27, Matthew Thode wrote: > On 18-01-24 01:29:47, Matthew Thode wrote: > > On 18-01-23 01:23:50, Matthew Thode wrote: > > > Requirements is freezing Friday at 23:59:59 UTC so any last > > > global-requrements updates that need to get in need to get in now. > > > > > > I'm afraid that my condition has left me cold to your pleas of mercy. > > > > > > > Just your daily reminder that the freeze will happen in about 3 days > > time. Reviews seem to be winding down for requirements now (which is > > a good sign this release will be chilled to perfection). > > > > There's still a couple of things that may cause bumps for iso8601 and > oslo.versionedobjects but those are the main things. The msgpack change > is also rolling out (thanks dirk :D). Even with all these changes > though, in this universe, there's only one absolute. Everything freezes! > > https://review.openstack.org/535520 (oslo.serialization) > Last day, gate is sad and behind, but not my fault you waited til the last minute :P (see my first comment). The Iceman Cometh! -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mbultel at redhat.com Fri Jan 26 10:11:51 2018 From: mbultel at redhat.com (mathieu bultel) Date: Fri, 26 Jan 2018 11:11:51 +0100 Subject: [openstack-dev] [tripleo] Upgrade community weekly meeting In-Reply-To: References: Message-ID: Hi All, After few weeks and meetings on Friday, we decide to change a little bit the pattern of this meeting. We decided to keep the schedule up and open for anything/anyone, but each times, we will check before the meeting if the agenda [1] is empty. If so, then we won't make it, or we would use this schedule for another topic. This has been decided like that for few reasons. The main reason is that we repeat and discuss the same status/issues that we previously discuss on our normal scrum and most of the attendees is folks from inside the DFG. So its a kind of a repetition for us and there is no big value in this except when people external of the DFG bring things, like Weshay used to or rdo-cloud folks. Thank you all. Mathieu [1] https://etherpad.openstack.org/p/tripleo-upgrade-squad-meeting On 11/09/2017 01:23 PM, mathieu bultel wrote: > Hi TripleO, > > As discuss on the last TripleO meeting, the Upgrade squad will stand a > weekly meeting each Friday to discuss and share about the Upgrade > related topics. > > Every body who are interested is welcome to join us and fell free to > raise any questions/issues/concerns. > > Tomorrow will be our first session and it will be more CI oriented. 
> > We will use this etherpad to track the discussions and the status: > > https://etherpad.openstack.org/p/tripleo-upgrade-squad-meeting > > For the next meetings, I will setup agenda for each of those with the > following topics: Bugs, CI and Features (Specs), but everybody can add > his own items in the agenda. > > Mathieu > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mark at stackhpc.com Fri Jan 26 10:33:46 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 26 Jan 2018 10:33:46 +0000 Subject: [openstack-dev] [requirements] [vitrage][glance][ironic] global requirements update for python-vitrageclient Message-ID: Also seeing this for the u-c [1] and g-r [2] bumps for python-ironicclient 2.2.0. These are required in order to use the ironic node traits feature in nova. [1] https://review.openstack.org/#/c/538093 [2] https://review.openstack.org/#/c/538066/3 On 25 January 2018 at 11:15, Afek, Ifat (Nokia - IL/Kfar Sava) < ifat.afek at nokia.com> wrote: > Adding Glance team. > Any idea what could be wrong? > > Thanks, > Ifat. > > > On 25/01/2018, 9:09, "Afek, Ifat (Nokia - IL/Kfar Sava)" < > ifat.afek at nokia.com> wrote: > > Hi, > > I tried to update the version of python-vitrageclient [1], but the > legacy-requirements-integration-dsvm test failed with an error that does > not seem related to my changes: > > error: can't copy 'etc/glance-image-import.conf': doesn't exist or > not a regular file > > I noticed that two other changes [2][3] failed with the same error. > > Can you please help? > > Thanks, > Ifat. > > > [1] https://review.openstack.org/#/c/537307 > [2] https://review.openstack.org/#/c/535460/ > [3] https://review.openstack.org/#/c/536142/ > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Jan 26 10:47:38 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 26 Jan 2018 10:47:38 +0000 Subject: [openstack-dev] [requirements] [vitrage][glance][ironic] global requirements update for python-vitrageclient In-Reply-To: References: Message-ID: Looks like this should be resolved by https://review.openstack.org/#/c/537453/. Mark On 26 January 2018 at 10:33, Mark Goddard wrote: > Also seeing this for the u-c [1] and g-r [2] bumps for python-ironicclient > 2.2.0. These are required in order to use the ironic node traits feature in > nova. > > [1] https://review.openstack.org/#/c/538093 > [2] https://review.openstack.org/#/c/538066/3 > > On 25 January 2018 at 11:15, Afek, Ifat (Nokia - IL/Kfar Sava) < > ifat.afek at nokia.com> wrote: > >> Adding Glance team. >> Any idea what could be wrong? >> >> Thanks, >> Ifat. 
>> >> >> On 25/01/2018, 9:09, "Afek, Ifat (Nokia - IL/Kfar Sava)" < >> ifat.afek at nokia.com> wrote: >> >> Hi, >> >> I tried to update the version of python-vitrageclient [1], but the >> legacy-requirements-integration-dsvm test failed with an error that does >> not seem related to my changes: >> >> error: can't copy 'etc/glance-image-import.conf': doesn't exist or >> not a regular file >> >> I noticed that two other changes [2][3] failed with the same error. >> >> Can you please help? >> >> Thanks, >> Ifat. >> >> >> [1] https://review.openstack.org/#/c/537307 >> [2] https://review.openstack.org/#/c/535460/ >> [3] https://review.openstack.org/#/c/536142/ >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfidente at redhat.com Fri Jan 26 12:29:31 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Fri, 26 Jan 2018 13:29:31 +0100 Subject: [openstack-dev] [tripleo] CI'ing ceph-ansible against TripleO scenarios In-Reply-To: <20180126004931.GA8048@localhost.localdomain> References: <20180126004931.GA8048@localhost.localdomain> Message-ID: <74a2641a-a2af-771b-3e17-8ccadfd06e2e@redhat.com> On 01/26/2018 01:49 AM, Paul Belanger wrote: > On Thu, Jan 25, 2018 at 04:22:56PM -0800, Emilien Macchi wrote: >> Is there any plans to run TripleO CI jobs in ceph-ansible? >> I know the project is on github but thanks to zuulv3 we can now easily >> configure ceph-ansible to run Ci jobs in OpenStack Infra. >> >> It would be really great to investigate that in the near future so we avoid >> eventual regressions. >> Sebastien, Giulio, John, thoughts? >> -- >> Emilien Macchi > > Just a note, we haven't actually agree to enable CI for github projects just > yet. While it is something zuul can do now, I believe we still need to decide > when / how to enable it. > > We are doing some initial testing with ansible/ansible however. but we like being on the front line! :D we discussed this same topic with Sebastien and John a few weeks back and agreed on having some gate job for ceph-ansible CI'ing against TripleO! how do we start? I think the candidate branch on ceph-ansible to gate is "beta-3.1" but there will be more ... I am just not sure we're stable enough to gate master yet ... but we might do it non-voting, it's up for debate on TripleO side we'd be looking at running scenarios 001 and 004 ... maybe initially 004 only is good enough as it covers (at least for ceph) most of what is in 001 as well can we continue on IRC? :D and thanks Emilien and Paul for starting the thread and helping -- Giulio Fidente GPG KEY: 08D733BA From mbultel at redhat.com Fri Jan 26 13:20:13 2018 From: mbultel at redhat.com (mathieu bultel) Date: Fri, 26 Jan 2018 14:20:13 +0100 Subject: [openstack-dev] [tripleo] Upgrade community weekly meeting In-Reply-To: References: Message-ID: Hi All, After few weeks and meetings on Friday, we decide to change a little bit the pattern of this meeting. We decided to keep the schedule up and open for anything/anyone, but each times, we will check before the meeting if the agenda [1] is empty. If so, then we won't make it, or we would use this schedule for another topic. This has been decided like that for few reasons. 
The main reason is that we repeat and discuss the same status/issues that we
previously discussed in our normal scrum, and most of the attendees are folks
from inside the DFG. So it is largely a repetition for us, and there is not
much added value except when people external to the DFG bring things, as
Weshay used to, or the rdo-cloud folks.

Thank you all.

Mathieu

[1] https://etherpad.openstack.org/p/tripleo-upgrade-squad-meeting

On 11/09/2017 01:23 PM, mathieu bultel wrote:
> Hi TripleO,
>
> As discussed in the last TripleO meeting, the Upgrade squad will stand a
> weekly meeting each Friday to discuss and share about the Upgrade
> related topics.
>
> Everybody who is interested is welcome to join us, and feel free to
> raise any questions/issues/concerns.
>
> Tomorrow will be our first session and it will be more CI oriented.
>
> We will use this etherpad to track the discussions and the status:
>
> https://etherpad.openstack.org/p/tripleo-upgrade-squad-meeting
>
> For the next meetings, I will set up an agenda for each of those with the
> following topics: Bugs, CI and Features (Specs), but everybody can add
> his own items in the agenda.
>
> Mathieu
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From balazs.gibizer at ericsson.com Fri Jan 26 14:05:04 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Fri, 26 Jan 2018 15:05:04 +0100
Subject: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute
Message-ID: <1516975504.9811.5@smtp.office365.com>

Hi,

I'm getting more and more confused about how the zuul job hierarchy works,
or is supposed to work.

First there was a bug in nova where some functional tests were not
triggered, although the job (re-)definition in the nova part of
project-config should not prevent them from running [1]. There we figured
out that the irrelevant-files parameter of a job is not something that can
be overridden during re-definition or through the parent-child
relationship. The base job openstack-tox-functional has an irrelevant-files
attribute that lists '^doc/.*$' as a path to be ignored [2]. On the other
hand, the nova part of project-config tries to make this ignore rule less
broad by listing only '^doc/source/.*$' [3]. This does not work as we
expected, and the job did not run on changes that only affected the
./doc/notification_samples path. We are fixing it by defining our own
functional job in the nova tree [4]; a sketch of what that looks like is
shown below.

[1] https://bugs.launchpad.net/nova/+bug/1742962
[2] https://github.com/openstack-infra/openstack-zuul-jobs/blob/1823e3ea20e6dfaf37786a6ff79c56cb786bf12c/zuul.d/jobs.yaml#L380
[3] https://github.com/openstack-infra/project-config/blob/1145ab1293f5fa4d34c026856403c22b091e673c/zuul.d/projects.yaml#L10509
[4] https://review.openstack.org/#/c/533210/
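For illustration, the nova in-tree job is roughly the following (a sketch
only, with an abbreviated irrelevant-files list; see [4] for the real
patch):

    - job:
        name: nova-tox-functional
        parent: openstack-tox-functional
        description: |
          Run nova functional tests with a nova-specific
          irrelevant-files list, instead of the broad ^doc/.*$
          rule coming from the base job.
        irrelevant-files:
          - ^doc/source/.*$
          - ^releasenotes/.*$

This way a change that only touches doc/notification_samples triggers the
job again.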
Then I started looking into other jobs to see if we made similar mistakes.
I found two other examples in the nova related jobs where redefining the
irrelevant-files of a job caused problems. In these examples nova tried to
ignore more paths during the override than what was originally ignored in
the job definition, but that did not work [5][6].

[5] https://bugs.launchpad.net/nova/+bug/1745405 (temptest-full)
[6] https://bugs.launchpad.net/nova/+bug/1745431 (neutron-grenade)

So far the problem seemed to be consistent (i.e. overriding does not work).
But then I looked into neutron-grenade-multinode. That job is defined in
the neutron tree (like neutron-grenade), but nova also refers to it in the
nova section of project-config with different irrelevant-files than its
original definition. So I assumed that this would lead to a problem similar
to the neutron-grenade case, but it doesn't.

The neutron-grenade-multinode original definition [7] does not try to
ignore the 'nova/tests' path, but the nova side of the definition in
project-config does try to ignore that path [8]. Interestingly, a patch in
nova that only changes files under the nova/tests/ path does not trigger
the job [9]. So in this case overriding the irrelevant-files of a job
works. (It seems that overriding the neutron-tempest-linuxbridge
irrelevant-files works too.)

[7] https://github.com/openstack/neutron/blob/7e3d6a18fb928bcd303a44c1736d0d6ca9c7f0ab/.zuul.yaml#L140-L159
[8] https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10516-L10530
[9] https://review.openstack.org/#/c/537936/

I don't see the difference between the neutron-grenade and
neutron-grenade-multinode job definitions from this perspective, but it
seems that the irrelevant-files attribute behaves inconsistently in these
two jobs. Could you please help me understand how irrelevant-files in
overridden jobs is supposed to work?

cheers,
gibi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mriedemos at gmail.com Fri Jan 26 14:46:21 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 26 Jan 2018 08:46:21 -0600
Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared
In-Reply-To: <20180126061238.dayud3ayid5fibzd@gentoo.org>
References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org>
 <20180124072947.u4dv674dv6bcczb6@gentoo.org>
 <20180125043227.v3mfb5u2ndeennvu@mthode.org>
 <20180126061238.dayud3ayid5fibzd@gentoo.org>
Message-ID: 

On 1/26/2018 12:12 AM, Matthew Thode wrote:
> Last day, gate is sad and behind, but not my fault you waited til the
> last minute :P (see my first comment).

The Iceman Cometh!

Well, the requirements jobs seem to be busted as well:

http://logs.openstack.org/93/538093/1/check/legacy-requirements-integration-dsvm/157d877/job-output.txt.gz#_2018-01-26_11_17_31_182762

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22error%3A%20can't%20copy%20'etc%2Fglance-image-import.conf'%3A%20doesn't%20exist%20or%20not%20a%20regular%20file%5C%22%20AND%20tags%3A%5C%22console%5C%22&from=7d

--
Thanks,
Matt

From jim at jimrollenhagen.com Fri Jan 26 14:52:29 2018
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Fri, 26 Jan 2018 09:52:29 -0500
Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared
In-Reply-To: 
References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org>
 <20180124072947.u4dv674dv6bcczb6@gentoo.org>
 <20180125043227.v3mfb5u2ndeennvu@mthode.org>
 <20180126061238.dayud3ayid5fibzd@gentoo.org>
Message-ID: 

On Fri, Jan 26, 2018 at 9:46 AM, Matt Riedemann wrote:
>
> Well, the requirements jobs seem to be busted as well:
>
> http://logs.openstack.org/93/538093/1/check/legacy-requireme
> nts-integration-dsvm/157d877/job-output.txt.gz#_2018-01-26_11_17_31_182762

The fix is in the gate, FWIW: https://review.openstack.org/#/c/537453/

// jim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From prometheanfire at gentoo.org Fri Jan 26 15:22:34 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 26 Jan 2018 09:22:34 -0600 Subject: [openstack-dev] [requirements] [vitrage][glance][ironic] global requirements update for python-vitrageclient In-Reply-To: References: Message-ID: <20180126152234.vnl6pnir72agv5ui@gentoo.org> On 18-01-26 10:47:38, Mark Goddard wrote: > Looks like this should be resolved by > https://review.openstack.org/#/c/537453/. > Mark > > On 26 January 2018 at 10:33, Mark Goddard wrote: > > > Also seeing this for the u-c [1] and g-r [2] bumps for python-ironicclient > > 2.2.0. These are required in order to use the ironic node traits feature in > > nova. > > > > [1] https://review.openstack.org/#/c/538093 > > [2] https://review.openstack.org/#/c/538066/3 > > > > On 25 January 2018 at 11:15, Afek, Ifat (Nokia - IL/Kfar Sava) < > > ifat.afek at nokia.com> wrote: > > > >> Adding Glance team. > >> Any idea what could be wrong? > >> > >> Thanks, > >> Ifat. > >> > >> > >> On 25/01/2018, 9:09, "Afek, Ifat (Nokia - IL/Kfar Sava)" < > >> ifat.afek at nokia.com> wrote: > >> > >> Hi, > >> > >> I tried to update the version of python-vitrageclient [1], but the > >> legacy-requirements-integration-dsvm test failed with an error that does > >> not seem related to my changes: > >> > >> error: can't copy 'etc/glance-image-import.conf': doesn't exist or > >> not a regular file > >> > >> I noticed that two other changes [2][3] failed with the same error. > >> > >> Can you please help? > >> > >> Thanks, > >> Ifat. > >> > >> > >> [1] https://review.openstack.org/#/c/537307 > >> [2] https://review.openstack.org/#/c/535460/ > >> [3] https://review.openstack.org/#/c/536142/ yep, requirements is hard blocked on that atm -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mriedemos at gmail.com Fri Jan 26 15:38:18 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 26 Jan 2018 09:38:18 -0600 Subject: [openstack-dev] [tc] [all] TC Report 18-04 In-Reply-To: References: Message-ID: <76a0d973-c4c8-1227-496d-f7b531eb732f@gmail.com> On 1/23/2018 12:40 PM, Chris Dent wrote: > ## OpenStack-wide Goals > > There are four proposed [OpenStack-wide > goals](https://governance.openstack.org/tc/goals/index.html): > > * [Add Cold upgrades >   capabilities](https://review.openstack.org/#/c/533544/) > * [Add Rocky goal to remove >   mox](https://review.openstack.org/#/c/532361/) > * [ > Add Rocky goal to enable mutable > configuration](https://review.openstack.org/#/c/534605/) > * [ > Add Rocky goal to ensure pagination > links](https://review.openstack.org/#/c/532627/) > > These need to be validated by the community, but they are not [getting > as much > feedback](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-23.log.html#t2018-01-23T09:16:51) > > as hoped. There are different theories as to why, from "people are > busy", to "people don't feel empowered to comment", to "people don't > care". Whatever it is, without input the onus falls on the TC to make > choices, increasing the risk that the goals will be perceived as a > diktat. As always, we need to work harder to have high fidelity > feedback loops. This is especially true in our "mature" phase. What's the due date on approving the community wide goals for Rocky? 
Given the PTG is around the corner (by a month I guess), why not just have the discussion in person at the PTG (for those attending in person anyway)? -- Thanks, Matt From doug at doughellmann.com Fri Jan 26 15:46:27 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 26 Jan 2018 10:46:27 -0500 Subject: [openstack-dev] [release][ptl][all] extending queens-3 milestone deadlines Message-ID: <1516981421-sup-2586@lrrr.local> This morning during the release team meeting [1] the team agreed that given the issues with CI this week it makes sense to extend the deadlines for the milestone. The queens-3 deadline for projects following the cycle-with-milestones release model is extended to Monday 29 January. The client-library release deadline is extended to Tuesday 30 January. Milestone-projects that have already tagged their 0b3 release for Queens should carry on with preparing their release candidates as planned. Doug [1] http://eavesdrop.openstack.org/meetings/releaseteam/2018/releaseteam.2018-01-26-15.03.html From doug at doughellmann.com Fri Jan 26 15:57:16 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 26 Jan 2018 10:57:16 -0500 Subject: [openstack-dev] [tc] [all] TC Report 18-04 In-Reply-To: <76a0d973-c4c8-1227-496d-f7b531eb732f@gmail.com> References: <76a0d973-c4c8-1227-496d-f7b531eb732f@gmail.com> Message-ID: <1516982221-sup-7012@lrrr.local> Excerpts from Matt Riedemann's message of 2018-01-26 09:38:18 -0600: > On 1/23/2018 12:40 PM, Chris Dent wrote: > > ## OpenStack-wide Goals > > > > There are four proposed [OpenStack-wide > > goals](https://governance.openstack.org/tc/goals/index.html): > > > > * [Add Cold upgrades > >   capabilities](https://review.openstack.org/#/c/533544/) > > * [Add Rocky goal to remove > >   mox](https://review.openstack.org/#/c/532361/) > > * [ > > Add Rocky goal to enable mutable > > configuration](https://review.openstack.org/#/c/534605/) > > * [ > > Add Rocky goal to ensure pagination > > links](https://review.openstack.org/#/c/532627/) > > > > These need to be validated by the community, but they are not [getting > > as much > > feedback](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-23.log.html#t2018-01-23T09:16:51) > > > > as hoped. There are different theories as to why, from "people are > > busy", to "people don't feel empowered to comment", to "people don't > > care". Whatever it is, without input the onus falls on the TC to make > > choices, increasing the risk that the goals will be perceived as a > > diktat. As always, we need to work harder to have high fidelity > > feedback loops. This is especially true in our "mature" phase. > > What's the due date on approving the community wide goals for Rocky? > Given the PTG is around the corner (by a month I guess), why not just > have the discussion in person at the PTG (for those attending in person > anyway)? > Ideally we would use the time at the PTG to discuss implementation details. Doug From mriedemos at gmail.com Fri Jan 26 16:08:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 26 Jan 2018 10:08:46 -0600 Subject: [openstack-dev] [tc] [all] TC Report 18-04 In-Reply-To: <1516982221-sup-7012@lrrr.local> References: <76a0d973-c4c8-1227-496d-f7b531eb732f@gmail.com> <1516982221-sup-7012@lrrr.local> Message-ID: <044b72e3-5b62-6871-e783-3a7e45b139da@gmail.com> On 1/26/2018 9:57 AM, Doug Hellmann wrote: > Ideally we would use the time at the PTG to discuss implementation > details. 
For something like the mox one, there really are no implementation details, are there? -- Thanks, Matt From doug at doughellmann.com Fri Jan 26 16:25:35 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 26 Jan 2018 11:25:35 -0500 Subject: [openstack-dev] [tc] [all] TC Report 18-04 In-Reply-To: <044b72e3-5b62-6871-e783-3a7e45b139da@gmail.com> References: <76a0d973-c4c8-1227-496d-f7b531eb732f@gmail.com> <1516982221-sup-7012@lrrr.local> <044b72e3-5b62-6871-e783-3a7e45b139da@gmail.com> Message-ID: <1516983917-sup-1488@lrrr.local> Excerpts from Matt Riedemann's message of 2018-01-26 10:08:46 -0600: > On 1/26/2018 9:57 AM, Doug Hellmann wrote: > > Ideally we would use the time at the PTG to discuss implementation > > details. > > For something like the mox one, there really are no implementation > details, are there? > That one is unusual in that regard, yes. From thierry at openstack.org Fri Jan 26 16:26:33 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 26 Jan 2018 17:26:33 +0100 Subject: [openstack-dev] [tc] Technical Committee Status update, January 26th Message-ID: <11a04045-2cbd-006c-04f9-a9da33018d6f@openstack.org> Hi! This is the weekly summary of Technical Committee initiatives. You can find the full list of all open topics (updated twice a week) at: https://wiki.openstack.org/wiki/Technical_Committee_Tracker If you are working on something (or plan to work on something) governance-related that is not reflected on the tracker yet, please feel free to add to it ! == Recently-approved changes == * Update Python PTI for tests to be specific and explicit [1] * Allow public polling to choose release names [2] * New repositories: networking-generic-switch-tempest-plugin, tempest-stress, charm-neutron-api-genericswitch, charm-ironic, charm-panko * Goal updates: trove, solum, monasca, ironic, heat [1] https://review.openstack.org/#/c/519751/ [2] https://review.openstack.org/#/c/534226/ This week we finally merged updates to the language used in the Python PTI to be much more specific about how projects should be running tests. The objective is to have a more consistent experience between projects when running tests. Please see the Python PTI (publication job pending) at: https://governance.openstack.org/tc/reference/pti/python.html We also approved release naming process changes to turn the vote into a public poll (and not crash CIVS again with tens of thousands of emails to send to Foundation members). Expect the naming process for the S release to start soon. The updated process will be found here once the publication job runs: https://governance.openstack.org/tc/reference/release-naming.html == Rocky goals == We are in the final steps of selecting a set of community goals for Rocky. We need wide community input on which goals are doable and make the most sense! Please see the list of proposed goals and associated champions: * Storyboard Migration [3] (diablo_rojo) * Remove mox [4] (chandankumar) * Ensure pagination links [5] (mordred) * Add Cold upgrades capabilities [6] (masayuki) * Enable mutable configuration [7] (gcb) [3] https://review.openstack.org/513875 [4] https://review.openstack.org/532361 [5] https://review.openstack.org/532627 [6] https://review.openstack.org/#/c/533544/ [7] https://review.openstack.org/534605 NB: mriedem suggested on the ML that we wait until the PTG in Dublin to make the final call. It gives more time to carefully consider the goals, but delays the start of the work and makes planning pre-PTG a bit more difficult. 
== Voting in progress == Doug proposed to use StoryBoard for tracking Rocky goal completion (rather than a truckload of governance changes). This change now reached majority votes, and will be approved on Tuesday unless new objections are posted: https://review.openstack.org/#/c/534443/ A new OpenStack project team was proposed to add a function-as-a-service component to OpenStack (called Qinling). The proposal has majority support at this point, but the review raised interesting side discussions and questions. Since the team would be added for the Rocky cycle, there is no hurry so let's continue those discussions for a bit more time: https://review.openstack.org/#/c/533827/ == Under discussion == The discussion started by Graham Hayes to clarify how the testing of interoperability programs should be organized in the age of add-on trademark programs is still going on, now on an active mailing-list thread. Please chime in to inform the TC choice: https://review.openstack.org/521602 http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html == TC member actions for the coming week(s) == One month out, we need to start looking into the discussions we need to have at the PTG (and good post-lunch presentation topics). We have plenty of room for additional discussion topics during the Monday-Tuesday part of the event, so it is easy to dedicate a room for half a day to a full day to make good progress on critical issues. We might also finalize the Rocky goal selection, unless we take on mriedem's suggestion to wait until PTG to make the final cut. == Office hours == To be more inclusive of all timezones and more mindful of people for which English is not the primary language, the Technical Committee dropped its dependency on weekly meetings. So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays For the coming week, I expect discussions to be focused around Rocky goal selection and PTG prep. Cheers, -- Thierry Carrez (ttx) From corvus at inaugust.com Fri Jan 26 16:57:02 2018 From: corvus at inaugust.com (James E. Blair) Date: Fri, 26 Jan 2018 08:57:02 -0800 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: ("Mathieu =?iso-8859-1?Q?Gagn=E9=22's?= message of "Thu, 25 Jan 2018 22:33:44 -0500") References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> Message-ID: <878tckv9r5.fsf@meyer.lemoncheese.net> Mathieu Gagné writes: > On Thu, Jan 25, 2018 at 7:08 PM, James E. Blair wrote: >> Mathieu Gagné writes: >> >>> On Thu, Jan 25, 2018 at 3:55 PM, Ben Nemec wrote: >>>> >>>> >>>> I'm curious what this means as far as best practices for inter-patch >>>> references. In the past my understanding was the the change id was >>>> preferred, both because if gerrit changed its URL format the change id links >>>> would be updated appropriately, and also because change ids can be looked up >>>> offline in git commit messages. Would that still be the case for everything >>>> except depends-on now? >> >> Yes, that's a down-side of URLs. I personally think it's fine to keep >> using change-ids for anything other than Depends-On, though in many of >> those cases the commit sha may work as well. >> >>> That's my concern too. Also AFAIK, Change-Id is branch agnostic. 
This >>> means you can more easily cherry-pick between branches without having >>> to change the URL to match the new branch for your dependencies. >> >> Yes, there is a positive and negative aspect to this issue. >> >> On the one hand, for those times where it was convenient to say "depend >> on this change in all its forms across all branches of all projects", >> one must now add a URL for each. >> >> On the other hand, with URLs, it is now possible to indicate that a >> change specifically depends on another change targeted to one branch, or >> targeted to several branches. Simply list each URL (or don't) as >> appropriate. That wasn't possible before -- it wall all or none. >> >> -Jim >> > >> The old syntax will continue to work for a while > > I still believe Change-Id should be supported and not removed as > suggested. The use of URL assumes you have access to Gerrit to fetch > more information about the change. > This might not always be true or possible, especially when Gerrit is > kept private and only the git repository is replicated publicly and > you which to cherry-pick something (and its dependencies) from it. Perhaps a method of automatically noting the dependencies in git notes could help with that case? Or maybe use a different way of communicating that information -- even with change-ids, there's still a lot of missing information in that scenario (for instance, which changes still haven't merged). -Jim From miguel at mlavalle.com Fri Jan 26 17:13:14 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 26 Jan 2018 11:13:14 -0600 Subject: [openstack-dev] [neutron][l3][flavors][floatingip] FFE request for patch no 523257 and 532993 In-Reply-To: References: Message-ID: Manjeets, This FFE has been approved. Tracking it here: https://launchpad.net/neutron/+milestone/queens-rc1 Cheers On Thu, Jan 25, 2018 at 12:17 PM, Bhatia, Manjeet S < manjeet.s.bhatia at intel.com> wrote: > Hello all ! > > > > I’d like to request a FFE for patch 523257 [1] that adds new resources and > events to handle operations > > For routers if L3 flavors framework is used. The neutron-lib part is > already merged [lib] thanks to Boden and > > Miguel for quick reviews on that. The second patch 53993 [2] adds the > missing notifications for floatingip update > > and delete events without which l3 flavor drivers Backends isn’t able to > perform the update and delete operations > > on floatingip’s correctly. These two patches are needed for L3 flavors > driver in networking-odl [nodll3]. > > > > > > [1]. https://review.openstack.org/#/c/523257 > > [2]. https://review.openstack.org/#/c/532993 > > > > [lib] https://review.openstack.org/#/c/535512/ > > [nodll3] https://review.openstack.org/#/c/504182/ > > > > > > Sorry for 2nd email on this I forgot to add [openstack-dev] subject in > last one. > > > > > > Thanks and regards ! > > Manjeet Singh Bhatia > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Fri Jan 26 17:30:20 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 26 Jan 2018 17:30:20 +0000 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: <878tckv9r5.fsf@meyer.lemoncheese.net> References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> <878tckv9r5.fsf@meyer.lemoncheese.net> Message-ID: <20180126173020.24xbtzbtuutf7awu@yuggoth.org> On 2018-01-26 08:57:02 -0800 (-0800), James E. Blair wrote: [...] > Perhaps a method of automatically noting the dependencies in git > notes could help with that case? Or maybe use a different way of > communicating that information -- even with change-ids, there's > still a lot of missing information in that scenario (for instance, > which changes still haven't merged). For what it's worth, Git notes for change commits already include their corresponding Gerrit change URLs so can be found offline that way regardless (as long as you configure your git client to retrieve them whenever you retrieve other Git objects, e.g. during a remote update). Even with a Change-Id you don't have any means of identifying the containing repository except through brute force means or online searching, so aside from being tracked in notes rather than commit messages there's not much else different in the new model. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Jan 26 17:45:32 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 26 Jan 2018 17:45:32 +0000 Subject: [openstack-dev] [horizon][packaging] django-openstack-auth retirement In-Reply-To: <698eddb8-4136-b3e4-4bf7-d88aef7d2f89@inaugust.com> References: <20180122113012.xe42fi24v3ljm7rz@yuggoth.org> <20180124142526.cczgg2kgibb7k4rj@yuggoth.org> <698eddb8-4136-b3e4-4bf7-d88aef7d2f89@inaugust.com> Message-ID: <20180126174531.qmc6pdw3jpz67aus@yuggoth.org> On 2018-01-24 08:47:30 -0600 (-0600), Monty Taylor wrote: [...] > Horizon and neutron were updated to start publishing to PyPI > already. > > https://review.openstack.org/#/c/531822/ > > This is so that we can start working on unwinding the neutron and > horizon specific versions of jobs for neutron and horizon plugins. Nice! I somehow missed that merging a couple of weeks back. In that case, I suppose we could in theory do one final transitional package upload of DOA depending on the conflicting Horizon release if others think that's a good idea. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Fri Jan 26 17:57:29 2018 From: corvus at inaugust.com (James E. Blair) Date: Fri, 26 Jan 2018 09:57:29 -0800 Subject: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute In-Reply-To: <1516975504.9811.5@smtp.office365.com> (=?iso-8859-1?Q?=22Bal?= =?iso-8859-1?Q?=E1zs?= Gibizer"'s message of "Fri, 26 Jan 2018 15:05:04 +0100") References: <1516975504.9811.5@smtp.office365.com> Message-ID: <874ln8v6ye.fsf@meyer.lemoncheese.net> Balázs Gibizer writes: > Hi, > > I'm getting more and more confused how the zuul job hierarchy works or > is supposed to work. Hi! 
First, you (or others) may or may not have seen this already -- some of it didn't exist when we first rolled out v3, and some of it has changed -- but here are the relevant bits of the documentation that should help explain what's going on. It helps to understand freezing: https://docs.openstack.org/infra/zuul/user/config.html#job and matching: https://docs.openstack.org/infra/zuul/user/config.html#matchers > First there was a bug in nova that some functional tests are not > triggered although the job (re-)definition in the nova part of the > project-config should not prevent it to run [1]. > > There we figured out that irrelevant-files parameter of the jobs are > not something that can be overriden during re-definition or through > parent-child relationship. The base job openstack-tox-functional has > an irrelevant-files attribute that lists '^doc/.*$' as a path to be > ignored [2]. In the other hand the nova part of the project-config > tries to make this ignore less broad by adding only '^doc/source/.*$' > . This does not work as we expected and the job did not run on changes > that only affected ./doc/notification_samples path. We are fixing it > by defining our own functional job in nova tree [4]. > > [1] https://bugs.launchpad.net/nova/+bug/1742962 > [2] > https://github.com/openstack-infra/openstack-zuul-jobs/blob/1823e3ea20e6dfaf37786a6ff79c56cb786bf12c/zuul.d/jobs.yaml#L380 > [3] > https://github.com/openstack-infra/project-config/blob/1145ab1293f5fa4d34c026856403c22b091e673c/zuul.d/projects.yaml#L10509 > [4] https://review.openstack.org/#/c/533210/ This is correct. The issue here is that the irrelevant-files definition on openstack-tox-functional is too broad. We need to be *extremely* careful applying matchers to jobs like that. Generally I think that irrelevant-files should be reserved for the project-pipeline invocations only. That's how they were effectively used in Zuul v2, after all. Essentially, when someone puts an irrelevant-files section on a job like that, they are saying "this job will never apply to these files, ever." That's clearly not correct in this case. So our solutions are to acknowledge that it's over-broad, and reduce or eliminate the list in [2] and expand it elsewhere (as in [3]). Or we can say "we were generally correct, but nova is extra special so it needs its own job". If that's the choice, then I think [4] is a fine solution. > Then I started looking into other jobs to see if we made similar > mistakes. I found two other examples in the nova related jobs where > redefining the irrelevant-files of a job caused problems. In these > examples nova tried to ignore more paths during the override than what > was originally ignored in the job definition but that did not work > [5][6]. > > [5] https://bugs.launchpad.net/nova/+bug/1745405 (temptest-full) As noted in that bug, the tempest-full job is invoked on nova via this stanza: https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10674-L10688 As expected, that did not match. There is a second invocation of tempest-full on nova here: http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/zuul-legacy-project-templates.yaml#n126 That has no irrelevant-files matches, and so matches everything. If you drop the use of that template, it will work as expected. 
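To make that concrete, here is a sketch of what the explicit invocation
could look like (abbreviated and untested -- the real stanza is at the link
above):

    - project:
        name: openstack/nova
        check:
          jobs:
            - tempest-full:
                irrelevant-files:
                  - ^doc/.*$
                  - ^nova/tests/.*$

With the template gone, this project-pipeline variant would be the only one
Zuul consults for tempest-full on nova, so its irrelevant-files would apply
the way you expect.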
Or, if you can say with some certainty that nova's irrelevant-files set is not over-broad, you could move the irrelevant-files from nova's invocation into the template, or even the job, and drop nova's individual invocation. > [6] https://bugs.launchpad.net/nova/+bug/1745431 (neutron-grenade) The same template invokes this job as well. > So far the problem seemed to be consistent (i.e. override does not > work). But then I looked into neutron-grenade-multinode. That job is > defined in neutron tree (like neutron-grenade) but nova also refers to > it in nova section of the project-config with different > irrelevant-files than their original definition. So I assumed that > this will lead to similar problem than in case of neutron-grenade, but > it doesn't. > > The neutron-grenade-multinode original definition [1] does not try to > ignore the 'nova/tests' path but the nova side of the definition in > the project config does try to ignore that path [8]. Interestingly a > patch in nova that only changes under the path: nova/tests/ does not > trigger the job [9]. So in this case overriding the irrelevant-files > of a job works. (It seems that overriding neutron-tempest-linuxbridge > irrelevant-files works too). > > [7] > https://github.com/openstack/neutron/blob/7e3d6a18fb928bcd303a44c1736d0d6ca9c7f0ab/.zuul.yaml#L140-L159 > [8] > https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10516-L10530 > [9] https://review.openstack.org/#/c/537936/ > > I don't see what is the difference between neutron-grenade and > neutron-grenade-multinode jobs definitions from this perspective but > it seems that the irrelevent-files attribute behaves inconsistently > in these two jobs. Could you please help me undestand how > irrelevant-files in overriden jobs supposed to work? These jobs only have the one invocation -- on the nova project -- and are not added via a template. Hopefully that explains the difference. Basically, the irrelevant-files on at least one project-pipeline invocation of a job have to match, as well as at least one definition of the job. So if both things have irrelevant-files, then it's effectively a union of the two. I used a tool to help verify some of the information in this message, especially the bugs [5] and [6]. You can ask Zuul to output debug information about its job selection if you're dealing with confusing situations like this. I went ahead and pushed a new patchset to your test change to demonstrate how: https://review.openstack.org/537936 When it finishes running all the tests (in a few hours), it should include in its report debug information about the decision-making process for the jobs it ran. It outputs similar information into the debug logs; so that we don't have to wait for it to see what it looks like here is that copy: http://paste.openstack.org/show/653729/ The relevant lines for [5] are: 2018-01-26 13:07:53,560 DEBUG zuul.layout: Pipeline variant matched 2018-01-26 13:07:53,560 DEBUG zuul.layout: Pipeline variant did not match Note the project-file-branch-line-number references are especially helpful. -Jim From miguel at mlavalle.com Fri Jan 26 19:39:45 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 26 Jan 2018 13:39:45 -0600 Subject: [openstack-dev] [Neutron] q-3 tag and FFE being tracked Message-ID: Hi Neutron Team, This is our Queens 3 milestone patch: https://review.openstack.org/#/c/537651/. 
Please note that we still have to create a tag for VPNaaS, which recently
rejoined the Neutron Stadium.

We have also created a list that we are targeting for RC1:

https://launchpad.net/neutron/+milestone/queens-rc1

We are going to block the master branch for everything that is not in that
list. If we have left out anything that is critical to land in Queens,
please reach out to me.

Cheers

Miguel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doug at doughellmann.com Fri Jan 26 20:13:24 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 26 Jan 2018 15:13:24 -0500
Subject: [openstack-dev] [Release-job-failures][release][neutron] Release of openstack/networking-bigswitch failed
In-Reply-To: 
References: 
Message-ID: <1516997481-sup-5482@lrrr.local>

Excerpts from zuul's message of 2018-01-26 19:52:26 +0000:
> Build failed.
>
> - release-openstack-python finger://ze03.openstack.org/59b750c198424d8481e2b18421c3c32c : POST_FAILURE in 8m 16s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
>

I recommend that teams managing their own independent releases hold off
until we give the all clear and start tagging official releases again,
because we're still hitting issues with the log server that break some of
the release pipeline.

Doug

From Greg.Waines at windriver.com Fri Jan 26 20:59:10 2018
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Fri, 26 Jan 2018 20:59:10 +0000
Subject: [openstack-dev] [masakari] BUG in Masakari Installation and Procedure and/or Documentation
In-Reply-To: <265F454E-3330-4C9E-B2A2-1506F2843AA9@windriver.com>
References: <265F454E-3330-4C9E-B2A2-1506F2843AA9@windriver.com>
Message-ID: <35935713-49F4-46C4-A675-A8D3B483A980@windriver.com>

Update on this.

It turned out that I had incorrectly set the ‘project_name’ and ‘username’
in /etc/masakarimonitors/masakarimonitors.conf
Setting both of these attributes to ‘admin’ made the instancemonitor’s
notification to masakari-engine succeed.

e.g.
stack at devstack-masakari-louie:~/devstack$ masakari notification-list
+--------------------------------------+----------------------------+---------+--------------------------------------+------+
| notification_uuid | generated_time | status | source_host_uuid | type |
+--------------------------------------+----------------------------+---------+--------------------------------------+------+
| b8c6c561-7a93-40a2-8d73-3783024865b4 | 2018-01-26T19:41:29.000000 | running | 51bc8b8b-324f-499a-9166-38c22b3842cd | VM |
+--------------------------------------+----------------------------+---------+--------------------------------------+------+
stack at devstack-masakari-louie:~/devstack$

However, I now get the following error when masakari-engine attempts the VM
recovery:

Jan 26 19:41:28 devstack-masakari-louie masakari-engine[11795]: 2018-01-26 19:41:28.968 TRACE masakari.engine.drivers.taskflow.driver EndpointNotFound: publicURL endpoint for compute service named Compute Service not found

Why is masakari-engine looking for a publicURL endpoint for
service_type=’compute’ and service_name=’Compute Service’ ?
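As a side note, the failing lookup can be reproduced outside of masakari
with plain keystoneauth. This is only a rough sketch -- the credentials and
auth_url below are made up for the example:

    from keystoneauth1 import loading, session

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://10.10.10.14/identity',
        username='admin', password='secret', project_name='admin',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # The catalog entry is named 'nova', so asking for
    # service_name='Compute Service' should fail with EndpointNotFound,
    # which looks like the same lookup masakari-engine is doing.
    sess.get_endpoint(service_type='compute',
                      service_name='Compute Service',
                      interface='public')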
See below that the Service Name = ‘nova’ ... NOT ‘Compute Service’

stack at devstack-masakari-louie:~/devstack$ openstack endpoint list
+----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+
| 0111643ef1584decb523524a3db5ce18 | RegionOne | nova_legacy | compute_legacy | True | public | http://10.10.10.14/compute/v2/$(project_id)s |
| 01790448c22f49e69774adf290fba728 | RegionOne | gnocchi | metric | True | internal | http://10.10.10.14/metric |
| 0b31693c6650499a981d580721be9e48 | RegionOne | vitrage | rca | True | internal | http://10.10.10.14:8999 |
| 40f66ed61b4e4310829aa69e11c75554 | RegionOne | neutron | network | True | public | http://10.10.10.14:9696/ |
| 47479cf64af944b996b1fbca42efd945 | RegionOne | nova | compute | True | public | http://10.10.10.14/compute/v2.1 |
| 49dccfc61e8246a2a2c0b8d12b3db91a | RegionOne | vitrage | rca | True | admin | http://10.10.10.14:8999 |
| 5261ba0327de4c2d92842147636ee770 | RegionOne | masakari | ha | True | internal | http://10.10.10.14:15868/v1/$(tenant_id)s |
| 5df28622c6f449ebad12d9b62110cd08 | RegionOne | gnocchi | metric | True | admin | http://10.10.10.14/metric |
| 64f8f401431042a0ab1d053ca4f4df02 | RegionOne | glance | image | True | public | http://10.10.10.14/image |
| 69ad6b9d0b0b4d0a8da6fa36af8289cb | RegionOne | masakari | ha | True | public | http://10.10.10.14:15868/v1/$(tenant_id)s |
| 7dd9d5396e9c49d4a41e2865b841f6a0 | RegionOne | masakari | ha | True | admin | http://10.10.10.14:15868/v1/$(tenant_id)s |
| 811fa7f4b3c14612b4aca354dc8ea77e | RegionOne | vitrage | rca | True | public | http://10.10.10.14:8999 |
| 8535da724c424363bffe1d033ee033e5 | RegionOne | cinder | volume | True | public | http://10.10.10.14/volume/v1/$(project_id)s |
| 853f1783f1014075a03c16f7c3a2568a | RegionOne | keystone | identity | True | admin | http://10.10.10.14/identity |
| 9450f5611ca747f2a049f22ff0996dba | RegionOne | cinderv3 | volumev3 | True | public | http://10.10.10.14/volume/v3/$(project_id)s |
| 9a73696d88a9438cb0ab75a754a08e9d | RegionOne | gnocchi | metric | True | public | http://10.10.10.14/metric |
| b1ff2b4d683c4a58a3b27232699d0058 | RegionOne | cinderv2 | volumev2 | True | public | http://10.10.10.14/volume/v2/$(project_id)s |
| d4e66240faff48f2b5e1d0fcfb73a74b | RegionOne | placement | placement | True | public | http://10.10.10.14/placement |
| fda917fd368a4a479c9c186df1beb8e9 | RegionOne | keystone | identity | True | public | http://10.10.10.14/identity |
+----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+
stack at devstack-masakari-louie:~/devstack$

let me know your thoughts,
I don’t mind raising the required BUG in Launchpad if required,
Greg.
p.s. my masakari configurations ... wrt hosts and segments ... are as
follows:

stack at devstack-masakari-louie:~/devstack$ masakari segment-list
+--------------------------------------+-----------+-------------+--------------+-----------------+
| uuid | name | description | service_type | recovery_method |
+--------------------------------------+-----------+-------------+--------------+-----------------+
| 9c6e22bd-7fab-40cb-a8e0-3702137f3227 | segment-1 | - | COMPUTE | auto |
+--------------------------------------+-----------+-------------+--------------+-----------------+
stack at devstack-masakari-louie:~/devstack$ masakari host-list --segment-id 9c6e22bd-7fab-40cb-a8e0-3702137f3227
+--------------------------------------+-------------------------+---------+--------------------+----------+----------------+--------------------------------------+
| uuid | name | type | control_attributes | reserved | on_maintenance | failover_segment_id |
+--------------------------------------+-------------------------+---------+--------------------+----------+----------------+--------------------------------------+
| 51bc8b8b-324f-499a-9166-38c22b3842cd | devstack-masakari-louie | COMPUTE | SSH | False | False | 9c6e22bd-7fab-40cb-a8e0-3702137f3227 |
+--------------------------------------+-------------------------+---------+--------------------+----------+----------------+--------------------------------------+
stack at devstack-masakari-louie:~/devstack$

From: Greg Waines
Reply-To: "openstack-dev at lists.openstack.org"
Date: Wednesday, January 24, 2018 at 4:13 PM
To: "openstack-dev at lists.openstack.org"
Subject: [openstack-dev] [masakari] BUG in Masakari Installation and Procedure and/or Documentation

I am looking for some input before I raise a BUG.

I reviewed the following commits which documented the Masakari and
MasakariMonitors Installation and Procedures. i.e.
https://review.openstack.org/#/c/489570/
https://review.openstack.org/#/c/489095/

I created an AIO devstack with Masakari on current/master ... this morning.
I followed the above instructions on configuring and installing Masakari
and MasakariMonitors.

I created a VM and then ‘sudo kill -9 ’ and I got the following error from
instance monitoring trying to send the notification message to
masakari-engine.
( The request you have made requires authentication. ) ... see below,

Is this a known BUG ?
Greg.

2018-01-24 20:29:16.902 12473 INFO masakarimonitors.instancemonitor.libvirt_handler.callback [-] Libvirt Event: type=VM, hostname=devstack-masakari-new, uuid=6884cf13-5797-487b-9cb1-053a2e18b60e, time=2018-01-24 20:29:16.902347, event_id=LIFECYCLE, detail=STOPPED_FAILED)
2018-01-24 20:29:16.903 12473 INFO masakarimonitors.ha.masakari [-] Send a notification. {'notification': {'hostname': 'devstack-masakari-new', 'type': 'VM', 'payload': {'instance_uuid': '6884cf13-5797-487b-9cb1-053a2e18b60e', 'vir_domain_event': 'STOPPED_FAILED', 'event': 'LIFECYCLE'}, 'generated_time': datetime.datetime(2018, 1, 24, 20, 29, 16, 902347)}}
2018-01-24 20:29:16.977 12473 WARNING masakarimonitors.ha.masakari [-] Retry sending a notification. (HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-9c734f56-aca9-40a9-b2dd-3f372de8c34e), The request you have made requires authentication.): HttpException: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-9c734f56-aca9-40a9-b2dd-3f372de8c34e), The request you have made requires authentication.
...
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari [-] Exception caught: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-26a5de94-aaad-4f8f-949e-cbfeb5e31b8b), The request you have made requires authentication.: HttpException: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-26a5de94-aaad-4f8f-949e-cbfeb5e31b8b), The request you have made requires authentication.
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari Traceback (most recent call last):
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/masakarimonitors/ha/masakari.py", line 91, in send_notification
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari payload=event['notification']['payload'])
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/masakariclient/sdk/ha/v1/_proxy.py", line 65, in create_notification
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self._create(_notification.Notification, **attrs)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/proxy2.py", line 194, in _create
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return res.create(self._session)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/resource2.py", line 588, in create
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari json=request.body, headers=request.headers)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 848, in post
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self.request(url, 'POST', **kwargs)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 64, in map_exceptions_wrapper
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return func(*args, **kwargs)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 352, in request
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return super(Session, self).request(*args, **kwargs)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 573, in request
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari auth_headers = self.get_auth_headers(auth)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 900, in get_auth_headers
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return auth.get_headers(self, **kwargs)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/plugin.py", line 95, in get_headers
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari token = self.get_token(session)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 88, in get_token
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self.get_access(session).auth_token
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 134, in get_access
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari self.auth_ref = self.get_auth_ref(session)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/generic/base.py", line 198, in get_auth_ref
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self._plugin.get_auth_ref(session, **kwargs)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/v3/base.py", line 165, in get_auth_ref
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari authenticated=False, log=False, **rkwargs)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 848, in post
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self.request(url, 'POST', **kwargs)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 66, in map_exceptions_wrapper
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari raise exceptions.from_exception(e)
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari HttpException: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-26a5de94-aaad-4f8f-949e-cbfeb5e31b8b), The request you have made requires authentication.
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doug at doughellmann.com Fri Jan 26 21:03:49 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 26 Jan 2018 16:03:49 -0500
Subject: [openstack-dev] [cliff][osc][barbican][oslo][sdk][all] avoiding option name conflicts with cliff and OSC plugins
In-Reply-To: <1516287510-sup-5069@lrrr.local>
References: <1516287510-sup-5069@lrrr.local>
Message-ID: <1517000598-sup-4398@lrrr.local>

Excerpts from Doug Hellmann's message of 2018-01-18 10:15:16 -0500:
> We've been working this week to resolve an issue between cliff and
> barbicanclient due to a collision in the option namespace [0].
> Barbicanclient was using the -s option, and then cliff's lister
> command base class added that option as an alias for sort-columns.
>
> The first attempt at resolving the conflict was to set the conflict
> handler in argparse to 'resolve' [1]. Several reviewers pointed out
> that this would have the unwanted side-effect of making some OSC
> commands support the -s as an alias for --sort-columns while the
> barbicanclient commands would use it for a different purpose.
>
> For now we have removed the -s alias from within cliff. However,
> we want to avoid this problem coming up in the future so we want a
> better solution.
>
> The OSC project has a policy that its command plugins do not use
> short options (single letter). There are relatively few of them,
> and it's easy to introduce collisions. Therefore, they are seen
> as reserved for more common "global" options such as provided by
> the base classes in OSC and cliff.
>
> I propose that for Rocky we update cliff to change the way options
> are registered so that conflicting options added by command subclasses
> are ignored.
This would effectively let cliff "own" the short option > namespace, and require command classes to use long option names. > > The implementation details need to be worked out a bit, but I think > we can do that by subclassing ArgumentParser and adding a new > conflict handler method similar to the existing _handle_conflict_error() > and _handle_conflict_resolve(). > > This is going to introduce backwards-incompatible changes in the > commands derived from cliff, so before we take any action I wanted > to solicit input in the plan. > > Please let me know what you think, > Doug > > [0] https://bugs.launchpad.net/python-barbicanclient/+bug/1743578 > [1] https://docs.python.org/3.5/library/argparse.html#conflict-handler I have a patch up to implement this in https://review.openstack.org/538335 Doug From kumarmn at us.ibm.com Fri Jan 26 21:22:10 2018 From: kumarmn at us.ibm.com (Manoj Kumar) Date: Fri, 26 Jan 2018 15:22:10 -0600 Subject: [openstack-dev] [trove] Retiring the trove-integration repository, final call In-Reply-To: References: <6e8813b1-c05b-e729-75dd-7c9863fd0730@catalyst.net.nz> Message-ID: Initial indication was provided in July last year, that the trove-integration repository was going away. All the elements have been merged into trove, and are being maintained there. I do not believe anyone spoke up then. If anyone is depending on the separate repository, do speak up. Cheers, - Manoj -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Jan 26 21:41:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 26 Jan 2018 16:41:23 -0500 Subject: [openstack-dev] [blazar][release] we need a new blazar client release Message-ID: <1517002522-sup-1657@lrrr.local> The PyPI service is now validating package metadata more strictly, and the classifier values for python-blazarclient do not pass the validation checks. This means the 1.0.0 package we built cannot be uploaded to PyPI [1]. The fix will take several steps. 1. dmsimard has proposed [2] to master to fix the classifiers. 2. However, since the repository has already been branched for queens we will also need to backport that fix to stable/queens. David has proposed that backport in [3]. 3. There are 2 other patches in stable/queens that need to be approved as well [4]. 4. After they are all merged we can release 1.0.1 from the stable/queens branch using the SHA for the merge commit created when [3] lands. So, blazar team, please approve all of those patches and then propose a new 1.0.1 release quickly. 
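For anyone who hits the same validation error in another project: the
problem is multiple comma-separated values on one classifier line in
setup.cfg. Each classifier needs to be a single exact string from the
official list, something like this illustrative sketch (not the literal
blazarclient diff):

    [metadata]
    classifier =
        Environment :: OpenStack
        Intended Audience :: Developers
        License :: OSI Approved :: Apache Software License
        Operating System :: POSIX :: Linux
        Programming Language :: Python
        Programming Language :: Python :: 2.7

rather than several values joined with commas on a single line.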
Doug [1] http://logs.openstack.org/1d/1d46185bf1e0c18f69038adedd37cf6f6eaf06ab/release/release-openstack-python/13aa058/ara/result/26cee65c-b3cd-4267-9a03-1fe45be043d4/ [2] https://review.openstack.org/538340 Remove commas in setup.cfg package classifiers [3] https://review.openstack.org/538343 [4] https://review.openstack.org/#/q/status:open+project:openstack/python-blazarclient+branch:stable/queens From doug at doughellmann.com Fri Jan 26 21:44:30 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 26 Jan 2018 16:44:30 -0500 Subject: [openstack-dev] [blazar][release] we need a new blazar client release In-Reply-To: <1517002522-sup-1657@lrrr.local> References: <1517002522-sup-1657@lrrr.local> Message-ID: <1517003026-sup-882@lrrr.local> Excerpts from Doug Hellmann's message of 2018-01-26 16:41:23 -0500: > The PyPI service is now validating package metadata more strictly, and > the classifier values for python-blazarclient do not pass the validation > checks. This means the 1.0.0 package we built cannot be uploaded to > PyPI [1]. > > The fix will take several steps. > > 1. dmsimard has proposed [2] to master to fix the classifiers. > > 2. However, since the repository has > already been branched for queens we will also need to backport > that fix to stable/queens. David has proposed that backport in > [3]. > > 3. There are 2 other patches in stable/queens that need to be > approved as well [4]. > > 4. After they are all merged we can release 1.0.1 from the stable/queens > branch using the SHA for the merge commit created when [3] lands. > > So, blazar team, please approve all of those patches and then propose a > new 1.0.1 release quickly. > > Doug > > [1] http://logs.openstack.org/1d/1d46185bf1e0c18f69038adedd37cf6f6eaf06ab/release/release-openstack-python/13aa058/ara/result/26cee65c-b3cd-4267-9a03-1fe45be043d4/ > [2] https://review.openstack.org/538340 Remove commas in setup.cfg package classifiers > [3] https://review.openstack.org/538343 > [4] https://review.openstack.org/#/q/status:open+project:openstack/python-blazarclient+branch:stable/queens In order to speed things along, I'm going to go ahead and use my release manager ACLs to approve those stable branch changes. So please approve the one on master so your next release there won't have the same issue. Doug From miguel at mlavalle.com Fri Jan 26 21:47:30 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 26 Jan 2018 15:47:30 -0600 Subject: [openstack-dev] [Neutron] q-3 tag and FFE being tracked In-Reply-To: References: Message-ID: This is the patch for VPNaaS: https://review.openstack.org/#/c/538295/ On Fri, Jan 26, 2018 at 1:39 PM, Miguel Lavalle wrote: > Hi Neutron Team, > > This is our Queens 3 milestone patch: > > https://review.openstack.org/#/c/537651/. > > Please note that we still have to create a tag for VPNaaS, which recently > rejoined the Neutron Stadium > > We have also created a list that we are targeting for RC1: > > https://launchpad.net/neutron/+milestone/queens-rc1 > > We are going to block master branch to everything that is not in that > list. If we have left out anything that is critical to land in Queens, > please reach out to me > > > Cheers > > Miguel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosmaita.fossdev at gmail.com Fri Jan 26 21:55:42 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 26 Jan 2018 16:55:42 -0500 Subject: [openstack-dev] [requirements] [vitrage][glance][ironic] global requirements update for python-vitrageclient In-Reply-To: <20180126152234.vnl6pnir72agv5ui@gentoo.org> References: <20180126152234.vnl6pnir72agv5ui@gentoo.org> Message-ID: Sorry for the late reply. https://review.openstack.org/#/c/537453/ merged a few hours ago and the "can't copy" error should no longer occur. On Fri, Jan 26, 2018 at 10:22 AM, Matthew Thode wrote: > On 18-01-26 10:47:38, Mark Goddard wrote: >> Looks like this should be resolved by >> https://review.openstack.org/#/c/537453/. >> Mark >> >> On 26 January 2018 at 10:33, Mark Goddard wrote: >> >> > Also seeing this for the u-c [1] and g-r [2] bumps for python-ironicclient >> > 2.2.0. These are required in order to use the ironic node traits feature in >> > nova. >> > >> > [1] https://review.openstack.org/#/c/538093 >> > [2] https://review.openstack.org/#/c/538066/3 >> > >> > On 25 January 2018 at 11:15, Afek, Ifat (Nokia - IL/Kfar Sava) < >> > ifat.afek at nokia.com> wrote: >> > >> >> Adding Glance team. >> >> Any idea what could be wrong? >> >> >> >> Thanks, >> >> Ifat. >> >> >> >> >> >> On 25/01/2018, 9:09, "Afek, Ifat (Nokia - IL/Kfar Sava)" < >> >> ifat.afek at nokia.com> wrote: >> >> >> >> Hi, >> >> >> >> I tried to update the version of python-vitrageclient [1], but the >> >> legacy-requirements-integration-dsvm test failed with an error that does >> >> not seem related to my changes: >> >> >> >> error: can't copy 'etc/glance-image-import.conf': doesn't exist or >> >> not a regular file >> >> >> >> I noticed that two other changes [2][3] failed with the same error. >> >> >> >> Can you please help? >> >> >> >> Thanks, >> >> Ifat. >> >> >> >> >> >> [1] https://review.openstack.org/#/c/537307 >> >> [2] https://review.openstack.org/#/c/535460/ >> >> [3] https://review.openstack.org/#/c/536142/ > > yep, requirements is hard blocked on that atm > > -- > Matthew Thode (prometheanfire) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From johnsomor at gmail.com Fri Jan 26 22:10:52 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 26 Jan 2018 14:10:52 -0800 Subject: [openstack-dev] [cliff][osc][barbican][oslo][sdk][all] avoiding option name conflicts with cliff and OSC plugins In-Reply-To: <1517000598-sup-4398@lrrr.local> References: <1516287510-sup-5069@lrrr.local> <1517000598-sup-4398@lrrr.local> Message-ID: Should be no issues with python-octaviaclient, we do not use the short options. Michael On Fri, Jan 26, 2018 at 1:03 PM, Doug Hellmann wrote: > Excerpts from Doug Hellmann's message of 2018-01-18 10:15:16 -0500: >> We've been working this week to resolve an issue between cliff and >> barbicanclient due to a collision in the option namespace [0]. >> Barbicanclient was using the -s option, and then cliff's lister >> command base class added that option as an alias for sort-columns. >> >> The first attempt at resolving the conflict was to set the conflict >> handler in argparse to 'resolve' [1]. 
Several reviewers pointed out >> that this would have the unwanted side-effect of making some OSC >> commands support the -s as an alias for --sort-columns while the >> barbicanclient commands would use it for a different purpose. >> >> For now we have removed the -s alias from within cliff. However, >> we want to avoid this problem coming up in the future so we want a >> better solution. >> >> The OSC project has a policy that its command plugins do not use >> short options (single letter). There are relatively few of them, >> and it's easy to introduce collisions. Therefore, they are seen >> as reserved for more common "global" options such as provided by >> the base classes in OSC and cliff. >> >> I propose that for Rocky we update cliff to change the way options >> are registered so that conflicting options added by command subclasses >> are ignored. This would effectively let cliff "own" the short option >> namespace, and require command classes to use long option names. >> >> The implementation details need to be worked out a bit, but I think >> we can do that by subclassing ArgumentParser and adding a new >> conflict handler method similar to the existing _handle_conflict_error() >> and _handle_conflict_resolve(). >> >> This is going to introduce backwards-incompatible changes in the >> commands derived from cliff, so before we take any action I wanted >> to solicit input in the plan. >> >> Please let me know what you think, >> Doug >> >> [0] https://bugs.launchpad.net/python-barbicanclient/+bug/1743578 >> [1] https://docs.python.org/3.5/library/argparse.html#conflict-handler > > I have a patch up to implement this in https://review.openstack.org/538335 > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Fri Jan 26 23:07:44 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 26 Jan 2018 17:07:44 -0600 Subject: [openstack-dev] =?utf-8?q?Vancouver_Summit_CFP_is_open_-_what?= =?utf-8?b?4oCZcyBuZXc=?= In-Reply-To: <4572415A-B17D-44A2-967D-61376515BD24@openstack.org> References: <4572415A-B17D-44A2-967D-61376515BD24@openstack.org> Message-ID: On 1/12/2018 2:59 PM, Lauren Sell wrote: > Hi everyone, > > Today, we opened the Call for Presentations > for > the Vancouver Summit , > which will take place May 21-24. The deadline to submit your proposal is > February 8th. > > What’s New? > We’re focused on open infrastructure integration. The Summit has evolved > over the years to cover more than just OpenStack, but we’re making an > even bigger effort to attract speakers across the open infrastructure > ecosystem. In addition to OpenStack-related sessions, we’ll be featuring > the newest project at the Foundation -- Kata Containers -- as well as > recruiting many others from projects like Ansible, Ceph, Kubernetes, > ONAP and many more. > > We’ve also organized Tracks around specific problem domains. We > encourage you to submit proposals covering OpenStack and the “open > infrastructure” tools you’re using, as well as the integration work > needed to address these problem domains. We also encourage you to invite > peers from other open source communities to come speak and collaborate. 
> > The Tracks are: > > * > CI/CD > * > Container Infrastructure > * > Edge Computing > * > HPC / GPU / AI > * > Open Source Community > * > Private & Hybrid Cloud > * > Public Cloud > * > Telecom & NFV > > > Where previously we had Track Chairs, we now have Programming Committees > for > each Track, made up of both Members and a Chair (or co-chairs). We’re > also recruiting members and chairs from many different open source > communities working in open infrastructure, in addition to the many > familiar faces in the OpenStack community who will lead the effort. If > you’re interested in nominating yourself or someone else to be a member > of the Summit Programming Committee for a specific Track, please fill > out the nomination form > . > Nominations will close on January 26, 2018. > > Again, the deadline to submit proposals > is > February 8, 2018. Please note topic submissions for the OpenStack Forum > (planning/working sessions with OpenStack devs and operators) will open > at a later date. > > We can’t wait to see you in Vancouver! We’re working hard to make it the > best Summit yet, and look forward to bringing together different open > infrastructure communities to solve these hard problems together! > > Want to provide feedback on this process? Please focus discussion on the > openstack-community mailing list, or contact me or the OpenStack > Foundation Summit Team directly at summit at openstack.org > . > > Thank you, > Lauren > Will there be the usual project updates like in the last few summits, or do we need to specifically submit those as talks now? -- Thanks, Matt From mriedemos at gmail.com Fri Jan 26 23:21:00 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 26 Jan 2018 17:21:00 -0600 Subject: [openstack-dev] [tc] Technical Committee Status update, January 26th In-Reply-To: <11a04045-2cbd-006c-04f9-a9da33018d6f@openstack.org> References: <11a04045-2cbd-006c-04f9-a9da33018d6f@openstack.org> Message-ID: <7eb16933-b614-9cd7-f33d-217420310767@gmail.com> On 1/26/2018 10:26 AM, Thierry Carrez wrote: > NB: mriedem suggested on the ML that we wait until the PTG in Dublin to > make the final call. It gives more time to carefully consider the goals, > but delays the start of the work and makes planning pre-PTG a bit more > difficult. Sorry. Did anyone talk about goals for Rocky in Sydney? I remember talking about goals in Boston, I think for Queens. That worked out better since we had a lot more lead time. -- Thanks, Matt From ramamani.yeleswarapu at intel.com Fri Jan 26 23:49:42 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Fri, 26 Jan 2018 23:49:42 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. Sorry about the delay. This Week's Priorities (as of the weekly ironic meeting) ======================================================== 1. ironicclient version negotiation (deadline: Thu, Jan 25th) 1.1. expose negotiated latest: https://review.openstack.org/531029 MERGED 1.2. accept list of versions: https://review.openstack.org/#/c/531271/ MERGED 2. Classic drivers deprecation 2.1. upgrade: https://review.openstack.org/534373 2x+2 3. Traits 3.1. RPC objects https://review.openstack.org/#/c/532268/ MERGED 3.2. RPC API https://review.openstack.org/#/c/535296 MERGED 3.3. API https://review.openstack.org/#/c/532269/ MERGED 3.4. 
Client https://review.openstack.org/#/c/532622/ MERGED 3.5. API ref https://review.openstack.org/#/c/536384 MERGED 4. Rescue: 4.1. network interface update: https://review.openstack.org/#/c/509342 MERGED 4.2. rescuewait timeout: https://review.openstack.org/#/c/353156/ MERGED 4.3. Agent rescue implementation: https://review.openstack.org/#/c/400437/ APPROVED 4.4. Add API methods for [un]rescue: https://review.openstack.org/#/c/350831/ APPROVED 4.5. Client Rescue Provision States https://review.openstack.org/#/c/408341 4.6. Client rescue_interface on node https://review.openstack.org/#/c/517302 Vendor priorities ----------------- cisco-ucs: Patches in works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5 irmc: None oneview: Remove python-oneviewclient from oneview hardware type - https://review.openstack.org/#/c/524729/ Subproject priorities --------------------- bifrost: (TheJulia): Fedora support fixes - https://review.openstack.org/#/c/471750/ ironic-inspector (or its client): (dtantsur) keystoneauth adapters https://review.openstack.org/#/c/515787/ MERGED networking-baremetal: neutron baremetal agent https://review.openstack.org/#/c/456235/ sushy and the redfish driver: (dtantsur) implement redfish sessions: https://review.openstack.org/#/c/471942/ MERGED Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - Stats (diff between 08 Jan 2018 and 15 Jan 2018) - Ironic: 216 bugs (-3) + 260 wishlist items. 1 new (-1), 156 in progress (-2), 0 critical, 33 high (-1) and 27 incomplete (-1) - Inspector: 14 bugs (-1) + 28 wishlist items. 0 new, 10 in progress, 0 critical, 2 high (-1) and 6 incomplete (+1) - Nova bugs with Ironic tag: 13. 1 new, 0 critical, 0 high - via http://dashboard-ironic.7e14.starter-us-west-2.openshiftapps.com/ - the dashboard was abruptly deleted and needs a new home :( - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ - (TheJulia) Currently WF-1, as revision is required for deprecation. 
- If provisioning network is changed, Ironic conductor does not behave correctly https://bugs.launchpad.net/ironic/+bug/1679260: Ironic conductor works correctly on changes of networks: https://review.openstack.org/#/c/462931/ - (rloo) needs some direction - may be fixed as part of https://review.openstack.org/#/c/460564/ - IPA may not find partition created by conductor https://bugs.launchpad.net/ironic-lib/+bug/1739421 - Fix proposed: https://review.openstack.org/#/c/529325/
CI refactoring and missing test coverage ---------------------------------------- - not considered a priority, it's a 'do it always' thing - Standalone CI tests (vsaienk0) - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/ - localboot with partitioned image patches: - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ - when previous are merged TODO (vsaienko) - Upload tinycore partitioned image to tarballs.openstack.org - Switch ironic to use tinyipa partitioned image by default - Missing test coverage (all) - portgroups and attach/detach tempest tests: https://review.openstack.org/382476 - adoption: https://review.openstack.org/#/c/344975/ - should probably be changed to use standalone tests - root device hints: TODO - node take over - resource classes integration tests: https://review.openstack.org/#/c/443628/ - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957)
Essential Priorities ==================== Ironic client API version negotiation (TheJulia, dtantsur) ---------------------------------------------------------- - RFE https://bugs.launchpad.net/python-ironicclient/+bug/1671145 - Nova bug https://bugs.launchpad.net/nova/+bug/1739440 - gerrit topic: https://review.openstack.org/#/q/topic:bug/1671145 - status as of 22 Jan 2018: - Nova request was accepted as a bug for now: https://bugs.launchpad.net/nova/+bug/1739440 - we will upgrade it to a blueprint if it starts looking like a feature; no spec is probably needed - TODO: - easier access to versions in ironicclient - see https://etherpad.openstack.org/p/ironic-api-version-negotiation - discussion of various ways to implement it happened on the midcycle - dtantsur wants to have an API-SIG guideline on consuming versions in SDKs - ready for review https://review.openstack.org/532814 - patches for ironicclient by TheJulia: - expose negotiated latest: https://review.openstack.org/531029 - accept list of versions: https://review.openstack.org/#/c/531271/ - establish foundation for using version negotiation in nova
External project authentication rework (pas-ha, TheJulia) --------------------------------------------------------- - gerrit topic: https://review.openstack.org/#/q/topic:bug/1699547 - status as of 22 Jan 2018: - 0 inspector patches left - https://review.openstack.org/#/c/515786/ MERGED - https://review.openstack.org/#/c/515787 MERGED
Classic drivers deprecation (dtantsur) -------------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html - status as of 23 Jan 2018: - dev documentation for hardware types: TODO - switch documentation to hardware types: - need help from vendors updating their pages - migration of classic drivers to hardware types: - updating spec based on actual code: https://review.openstack.org/#/c/536298/ - see the commit message for explanation - support options for migrations: https://review.openstack.org/535772 - upgrade (for IPMI, SNMP and fake):
https://review.openstack.org/#/c/534373/ - other drivers TODO - migration of CI to hardware types - switch all jobs from *_ipmitool to ipmi: https://review.openstack.org/#/c/536875/ - switch inspector CI: https://review.openstack.org/#/c/537415/ - clean up job playbooks: https://review.openstack.org/#/c/535896/ - actual deprecation: https://review.openstack.org/#/c/536928/
Traits support planning (mgoddard, johnthetubaguy, dtantsur) ------------------------------------------------------------ - http://specs.openstack.org/openstack/ironic-specs/specs/approved/node-traits.html - Nova patches: https://review.openstack.org/#/q/topic:bp/ironic-driver-traits+(status:open+OR+status:merged) - status as of 8 Jan 2018: - deploy templates spec: https://review.openstack.org/504952 needs reviews - depends on deploy-steps spec: https://review.openstack.org/#/c/412523 - patches for traits API - DB model & DB API - https://review.openstack.org/#/c/528238 (MERGED) - https://review.openstack.org/#/c/530723 (MERGED) - Add version to DB object - https://review.openstack.org/#/c/535482 (MERGED) - RPC objects - https://review.openstack.org/#/c/532268 MERGED - RPC API & conductor - https://review.openstack.org/#/c/535296 MERGED - API - https://review.openstack.org/#/c/532269 MERGED - API ref - https://review.openstack.org/#/c/536384 - Client - https://review.openstack.org/#/c/532622/ APPROVED - johnthetubaguy is picking the ironic side of traits up now, mgoddard is taking a look at the nova virt driver side - If we don't land this code (at least the API) this week, highly unlikely the nova part will land before next week's FF.
Reference architecture guide (dtantsur, sambetts) ------------------------------------------------- - status as of 22 Jan 2018: - dtantsur needs volunteers to help move this forward - list of cases from https://etherpad.openstack.org/p/ironic-queens-ptg-open-discussion - Admin-only provisioner - small and/or rare: TODO - large and/or frequent: TODO - Bare metal cloud for end users - smaller single-site: TODO - larger single-site: TODO - larger multi-site: TODO
High Priorities =============== Neutron event processing (vdrok, vsaienk0, sambetts) ---------------------------------------------------- - status as of 27 Sep 2017: - spec at https://review.openstack.org/343684, ready for reviews, replies from authors - WIP code at https://review.openstack.org/440778
Routed network support (sambetts, vsaienk0, bfournie, hjensas) -------------------------------------------------------------- - status as of 19 Jan 2018: - Need reviews ... https://review.openstack.org/#/q/topic:bug/1658964+(status:open+OR+status:merged) - With neutron fixed (patch below), the dsvm job seems stable. - Fix for neutron issue https://review.openstack.org/#/c/534449/ merged. - hjensas has taken over as main contributor from sambetts - There are challenges with integration to Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment, adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates.
I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates: - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg - https://review.openstack.org/#/c/529135/ (Spec) - Patches: - https://review.openstack.org/456235 Add baremetal neutron agent - https://review.openstack.org/#/c/533707/ start_flag = True, only first time, or conf change - https://review.openstack.org/524709 Make the agent distributed using hashring and notifications (WIP) - https://review.openstack.org/521838 Switch from MechanismDriver to SimpleAgentMechanismDriverBase - https://review.openstack.org/#/c/532349/7 Add support to bind type vlan networks - CI Patches: - https://review.openstack.org/#/c/531275/ Devstack - use neutron segments (routed provider networks) - https://review.openstack.org/#/c/531637/ Wait for ironic-neutron-agent to report state - https://review.openstack.org/#/c/530117/ Devstack - Add ironic-neutron-agent - https://review.openstack.org/#/c/530409/ Add dsvm job
Rescue mode (rloo, stendulker, aparnav) --------------------------------------- - Status as of 15 Jan 2018 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open - ironic side: - All patches are up-to-date, being actively reviewed and updated - Tempest tests based on standalone ironic are WIP. - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: approved for Queens; waiting for ironic part to be done first. Queens feature freeze is week of Jan 22. - To get the nova patch merged, we need: - release new python-ironicclient - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirements (this patch needs to be posted manually) - code patch: https://review.openstack.org/#/c/416487/ - If we don't land this code (at least the API) this week, highly unlikely the nova part will land before next week's FF. - CI is needed for nova part to land
Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 9 Jan 2018: - patch https://review.openstack.org/524433 ready for reviews
Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo) ------------------------------------------------------- - etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking - cleaning up/centralizing job descriptions (eg 'irrelevant-files'): DONE - Next TODO is to convert jobs on master, to proper ansible. NOT a high priority though. - (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/
Graphical console interface (pas-ha, vdrok, rpioso) --------------------------------------------------- - status as of 8 Jan 2018: - spec on review: https://review.openstack.org/#/c/306074/ - there is a nova part here, which has to be approved too - dtantsur is worried by the absence of progress here - (TheJulia) I think for rocky, it might be worth making it a prime focus, or making it a background goal.
BIOS config framework (dtantsur, yolanda, rpioso) ------------------------------------------------- - status as of 8 Jan 2018: - spec under active review: https://review.openstack.org/#/c/496481/
Ansible deploy interface (pas-ha) --------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ansible-deploy-driver.html - status as of 22 Jan 2018: - code merged - TODO - CI job - https://review.openstack.org/529640 MERGED - https://review.openstack.org/#/c/529383/ MERGED - done? - docs: https://review.openstack.org/#/c/525501/
OpenStack Priorities ==================== Python 3.5 compatibility (Nisha, Ankit) --------------------------------------- - Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases - this includes all projects, not only ironic - please tag all reviews with topic "goal-python35" - TODO submit the python3 job for IPA - for ironic and ironic-inspector the job is enabled by disabling swift, as swift is still lacking py3.5 support. - anupn to update the python3 job to build tinyipa with python3 - (anupn): Talked with swift folks and there is a bug upstream opened https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not a priority for them - Right now the patch passes all gate jobs except the agent_* drivers. - we need to make the ironic job voting eventually, but we need to check that nova, glance and neutron already have voting python 3 jobs, otherwise they may break us. - nova seems to have python 3 jobs voting, here are our patches: - ironic https://review.openstack.org/#/c/531398/ - ironic-inspector https://review.openstack.org/#/c/531400/ MERGED
Deploying with Apache and WSGI in CI (pas-ha, vsaienk0) ------------------------------------------------------- - ironic is mostly finished - (pas-ha) needs to be rewritten for uWSGI, patches on review: - https://review.openstack.org/#/c/507011/ +A - https://review.openstack.org/#/c/507067 Needs revision - inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218 - may be delayed until after Queens, as the HA work seems to take a different direction
Split away the tempest plugin (jlvillal) ---------------------------------------- - https://etherpad.openstack.org/p/ironic-tempest-plugin-migration - Current (8-Jan-2018) (jlvillal): All projects now using tempest plugin code from openstack/ironic-tempest-plugin - Need to remove plugin code from master branch of openstack/ironic and openstack/ironic-inspector - Plugin code will NOT be removed from the stable branches of openstack/ironic and openstack/ironic-inspector - (jlvillal) 3rd Party CI has had over 3 weeks to prepare for removal. We should now move forward - README, setup.cfg and docs cleanup: https://review.openstack.org/#/c/529538/ MERGED - ironic-tempest-plugin 1.0.0 released
Subprojects =========== Inspector (dtantsur) -------------------- - trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :D https://review.openstack.org/#/c/525685/6/devstack/plugin.sh@202 - follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up)
Bifrost (TheJulia) ------------------ - Also it seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. the `openstack` command does not work (a minimal example of such a file is sketched below). - TheJulia will try to look at this this week.
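 - For anyone unfamiliar with the file in question, a minimal clouds.yaml of the kind os-client-config parses looks roughly like this (every value below is a placeholder, not bifrost's actual configuration):

    clouds:
      mycloud:
        auth:
          auth_url: http://192.0.2.10/identity
          username: admin
          password: secret
          project_name: admin
        region_name: RegionOne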
Drivers: -------- DRAC (rpioso, dtantsur) ~~~~~~~~~~~~~~~~~~~~~~~ - Dell Ironic CI is being rebuilt, it's back and running now (10/17/2017) OneView (ricardoas, nicodemos, gmonteiro) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Re-submitting reverted patches for migration from python-oneviewclient to python-hpOneView + python-ilorest-library - Check weekly priorities for most important patch to review Cisco UCS (sambetts) ~~~~~~~~~~~~~~~~~~~~ - Currently rebuilding third party CI from the ground up after it bit the dust - Patches for updating the UCS python SDKs are in the works and should be posted soon ......... Until next week, --Rama [0] https://etherpad.openstack.org/p/IronicWhiteBoard -------------- next part -------------- An HTML attachment was scrubbed... URL:
From jimmy at openstack.org Sat Jan 27 00:02:58 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 26 Jan 2018 19:02:58 -0500 Subject: [openstack-dev] =?utf-8?q?Vancouver_Summit_CFP_is_open_-_what?= =?utf-8?b?4oCZcyBuZXc=?= In-Reply-To: References: <4572415A-B17D-44A2-967D-61376515BD24@openstack.org> Message-ID: We will reach out to PTLs directly to solicit Project Updates. Thanks!! Jimmy McArthur 512.965.4846 > On Jan 26, 2018, at 6:07 PM, Matt Riedemann wrote: > >> On 1/12/2018 2:59 PM, Lauren Sell wrote: >> Hi everyone, >> Today, we opened the Call for Presentations for the Vancouver Summit , which will take place May 21-24. The deadline to submit your proposal is February 8th. >> What’s New? >> We’re focused on open infrastructure integration. The Summit has evolved over the years to cover more than just OpenStack, but we’re making an even bigger effort to attract speakers across the open infrastructure ecosystem. In addition to OpenStack-related sessions, we’ll be featuring the newest project at the Foundation -- Kata Containers -- as well as recruiting many others from projects like Ansible, Ceph, Kubernetes, ONAP and many more. >> We’ve also organized Tracks around specific problem domains. We encourage you to submit proposals covering OpenStack and the “open infrastructure” tools you’re using, as well as the integration work needed to address these problem domains. We also encourage you to invite peers from other open source communities to come speak and collaborate. >> The Tracks are: >> * >> CI/CD >> * >> Container Infrastructure >> * >> Edge Computing >> * >> HPC / GPU / AI >> * >> Open Source Community >> * >> Private & Hybrid Cloud >> * >> Public Cloud >> * >> Telecom & NFV >> Where previously we had Track Chairs, we now have Programming Committees for each Track, made up of both Members and a Chair (or co-chairs). We’re also recruiting members and chairs from many different open source communities working in open infrastructure, in addition to the many familiar faces in the OpenStack community who will lead the effort. If you’re interested in nominating yourself or someone else to be a member of the Summit Programming Committee for a specific Track, please fill out the nomination form . Nominations will close on January 26, 2018. >> Again, the deadline to submit proposals is February 8, 2018. Please note topic submissions for the OpenStack Forum (planning/working sessions with OpenStack devs and operators) will open at a later date. >> We can’t wait to see you in Vancouver! We’re working hard to make it the best Summit yet, and look forward to bringing together different open infrastructure communities to solve these hard problems together! >> Want to provide feedback on this process?
Please focus discussion on the openstack-community mailing list, or contact me or the OpenStack Foundation Summit Team directly at summit at openstack.org . >> Thank you, >> Lauren > > Will there be the usual project updates like in the last few summits, or do we need to specifically submit those as talks now? > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ekcs.openstack at gmail.com Sat Jan 27 01:43:19 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Fri, 26 Jan 2018 17:43:19 -0800 Subject: [openstack-dev] [congress][requirements][RFE] adding tenacity to congress requirements Message-ID: Looking to add tenacity to congress requirements because it's needed by a forthcoming bug fix. No change to requirements repo. Does this need an exception? Thanks a lot! https://review.openstack.org/#/c/538369/ Eric Kao From muroi.masahito at lab.ntt.co.jp Sat Jan 27 01:49:23 2018 From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI) Date: Sat, 27 Jan 2018 10:49:23 +0900 Subject: [openstack-dev] [blazar][release] we need a new blazar client release In-Reply-To: <1517003026-sup-882@lrrr.local> References: <1517002522-sup-1657@lrrr.local> <1517003026-sup-882@lrrr.local> Message-ID: Hi Doug, Thanks for the info and fixes. I pushed a patch for blazar client 1.0.1 release[1]. 1. https://review.openstack.org/538368 best regards, Masahito On 2018/01/27 6:44, Doug Hellmann wrote: > Excerpts from Doug Hellmann's message of 2018-01-26 16:41:23 -0500: >> The PyPI service is now validating package metadata more strictly, and >> the classifier values for python-blazarclient do not pass the validation >> checks. This means the 1.0.0 package we built cannot be uploaded to >> PyPI [1]. >> >> The fix will take several steps. >> >> 1. dmsimard has proposed [2] to master to fix the classifiers. >> >> 2. However, since the repository has >> already been branched for queens we will also need to backport >> that fix to stable/queens. David has proposed that backport in >> [3]. >> >> 3. There are 2 other patches in stable/queens that need to be >> approved as well [4]. >> >> 4. After they are all merged we can release 1.0.1 from the stable/queens >> branch using the SHA for the merge commit created when [3] lands. >> >> So, blazar team, please approve all of those patches and then propose a >> new 1.0.1 release quickly. >> >> Doug >> >> [1] http://logs.openstack.org/1d/1d46185bf1e0c18f69038adedd37cf6f6eaf06ab/release/release-openstack-python/13aa058/ara/result/26cee65c-b3cd-4267-9a03-1fe45be043d4/ >> [2] https://review.openstack.org/538340 Remove commas in setup.cfg package classifiers >> [3] https://review.openstack.org/538343 >> [4] https://review.openstack.org/#/q/status:open+project:openstack/python-blazarclient+branch:stable/queens > > In order to speed things along, I'm going to go ahead and use my release > manager ACLs to approve those stable branch changes. So please approve > the one on master so your next release there won't have the same issue. 
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From prometheanfire at gentoo.org Sat Jan 27 05:05:11 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 26 Jan 2018 23:05:11 -0600 Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared In-Reply-To: <20180126061238.dayud3ayid5fibzd@gentoo.org> References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org> <20180124072947.u4dv674dv6bcczb6@gentoo.org> <20180125043227.v3mfb5u2ndeennvu@mthode.org> <20180126061238.dayud3ayid5fibzd@gentoo.org> Message-ID: <20180127050511.l526namzrrd6v6ue@gentoo.org> On 18-01-26 00:12:38, Matthew Thode wrote: > On 18-01-24 22:32:27, Matthew Thode wrote: > > On 18-01-24 01:29:47, Matthew Thode wrote: > > > On 18-01-23 01:23:50, Matthew Thode wrote: > > > > Requirements is freezing Friday at 23:59:59 UTC so any last > > > > global-requrements updates that need to get in need to get in now. > > > > > > > > I'm afraid that my condition has left me cold to your pleas of mercy. > > > > > > > > > > Just your daily reminder that the freeze will happen in about 3 days > > > time. Reviews seem to be winding down for requirements now (which is > > > a good sign this release will be chilled to perfection). > > > > > > > There's still a couple of things that may cause bumps for iso8601 and > > oslo.versionedobjects but those are the main things. The msgpack change > > is also rolling out (thanks dirk :D). Even with all these changes > > though, in this universe, there's only one absolute. Everything freezes! > > > > https://review.openstack.org/535520 (oslo.serialization) > > > > Last day, gate is sad and behind, but not my fault you waited til the > last minute :P (see my first comment). The Iceman Cometh! > All right everyone, Chill. Looks like we have another couple days to get stuff in for gate's slowness. The new deadline is 23:59:59 UTC 29-01-2018. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mthengva at redhat.com Sat Jan 27 07:46:50 2018 From: mthengva at redhat.com (Mary Thengvall) Date: Fri, 26 Jan 2018 23:46:50 -0800 Subject: [openstack-dev] Help still needed at FOSDEM! Message-ID: Boris - It's a fairly low commitment. It's just for an hour, standing at the OpenStack booth, answering questions that tend to be at a very basic level (e.g. What's OpenStack?). You can sign up for a slot via the etherpad here: https://etherpad.openstack.org/p/fosdem-2018 Best, -m. -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Sat Jan 27 15:10:07 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Sun, 28 Jan 2018 00:10:07 +0900 Subject: [openstack-dev] [horizon][packaging] django-openstack-auth retirement In-Reply-To: <20180126174531.qmc6pdw3jpz67aus@yuggoth.org> References: <20180122113012.xe42fi24v3ljm7rz@yuggoth.org> <20180124142526.cczgg2kgibb7k4rj@yuggoth.org> <698eddb8-4136-b3e4-4bf7-d88aef7d2f89@inaugust.com> <20180126174531.qmc6pdw3jpz67aus@yuggoth.org> Message-ID: 2018-01-27 2:45 GMT+09:00 Jeremy Stanley : > On 2018-01-24 08:47:30 -0600 (-0600), Monty Taylor wrote: > [...] 
>> Horizon and neutron were updated to start publishing to PyPI >> already. >> >> https://review.openstack.org/#/c/531822/ >> >> This is so that we can start working on unwinding the neutron and >> horizon specific versions of jobs for neutron and horizon plugins. > > Nice! I somehow missed that merging a couple of weeks back. In that > case, I suppose we could in theory do one final transitional package > upload of DOA depending on the conflicting Horizon release if others > think that's a good idea. Thanks for clarification. Then, does it make sense to release django-openstack-auth 4.0.0 and require it in horizon queens? Note that the current latest version is 3.5.0 and older horizon dependency is django-openstack-auth>=3.5.0. Luckily enough, the requirement freeze is extended one week. -- Akihiro Motoki From corvus at inaugust.com Sat Jan 27 15:36:38 2018 From: corvus at inaugust.com (James E. Blair) Date: Sat, 27 Jan 2018 07:36:38 -0800 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: (Eric Fried's message of "Thu, 25 Jan 2018 22:59:42 -0600") References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> Message-ID: <87h8r7qpo9.fsf@meyer.lemoncheese.net> Eric Fried writes: > For my part, I tried it [1] and it doesn't seem to have worked. (The > functional test failure is what the dep is supposed to have fixed.) Did > I do something wrong? > > [1] https://review.openstack.org/#/c/533821/12 If you examine the "items:" section in this file: http://logs.openstack.org/21/533821/12/check/openstack-tox-functional/9066bb2/zuul-info/inventory.yaml You will see that Zuul collected the following changes to test together: 526541,19 533808,6 521098,29 521187,29 535463,3 536624,3 536625,4 537648,5 533821,12 All on the master branch of nova. The change you specified, "https://review.openstack.org/#/c/536545/" is not present. The reason is that, contrary to earlier replies in this thread, the /#/c/ version of the change URL does not work. I'm sure we can fix that, but for the moment, we'll need to use the permalink form. -Jim From openstack at fried.cc Sat Jan 27 18:23:44 2018 From: openstack at fried.cc (Eric Fried) Date: Sat, 27 Jan 2018 12:23:44 -0600 Subject: [openstack-dev] [nova][placement] Re: VMWare's resource pool / cluster and nested resource providers In-Reply-To: References: <9ad230ba-587e-9c60-f604-e817fcebd9e4@fried.cc> Message-ID: Rado-     [+dev ML.  We're getting pretty general here; maybe others will get some use out of this.] > is there a way to make the scheduler allocate only from one specific RP     "...one specific RP" - is that Resource Provider or Resource Pool?     And are we talking about scheduling an instance to a specific compute node, or are we talking about making sure that all the requested resources are pulled from the same compute node (but it could be any one of several compute nodes)?  Or justlimiting the scheduler to any node in a specific resource pool?     
To make sure I'm fully grasping the VMWare-specific ratios/relationships between resource pools and compute nodes,I have been assuming: controller 1:many compute "host"(where n-cpu runs) compute "host"  1:many resource pool resource pool 1:many compute "node" (where instances can be scheduled) compute "node" 1:many instance     (I don't know if this "host" vs"node" terminology is correct, but I'm going to keep pretending it is for the purposes of this note.)     In particular, if that last line is true, then you do *not* want multiple compute "nodes" in the same provider tree. > if no custom trait is specified in the request?     I am not aware of anything current or planned that will allow you to specify an aggregate you want to deploy from; so the only way I'm aware of that you could pin a request to a resource pool is to create a custom trait for that resource pool, tag all compute nodes in the pool with that trait, and specify that trait in your flavor.  This way you don't use nested-ness at all.  And in this model, there's also no need to create resource providers corresponding to resource pools - their solemanifestation is via traits.     (Bonus: this model will work with what we've got merged in Queens - we didn't quiiite finish the piece of NRP that makes them work for allocation candidates, but we did merge trait support.  We're also *mostly* there with aggregates, but I wouldn't want to rely on them working perfectly and we're not claiming full support for them.)     To be explicit, in the model I'm suggesting, your compute "host", within update_provider_tree, would create new_root()s for each compute "node".  So the "tree" isn't really a tree - it's a flat list of computes, of which one happens to correspond to the `nodename` and represents the compute "host".  (I assume deploys can happen to the compute "host" just like they can to a compute "node"?  If not, just give that guy no inventory and he'll be avoided.)  It would then update_traits(node, ['CUSTOM_RPOOL_X']) for each.  It would also update_inventory() for each as appropriate.     Now on your deploys, to get scheduled to a particular resource pool, you would have to specify required=CUSTOM_RPOOL_X in your flavor.     That's it.  You never use new_child().  There are no providers corresponding to pools.  There are no aggregates.     Are we making progress, or am I confused/confusing? Eric On 01/27/2018 01:50 AM, Radoslav Gerganov wrote: > > +Chris > > > Hi Eric, > > Thanks a lot for sending this.  I must admit that I am still trying to > catch up with how the scheduler (will) work when there are nested RPs, > traits, etc.  I thought mostly about the case when we use a custom > trait to force allocations only from one resource pool.  However, if > no trait is specified then we can end up in the situation that you > describe (allocating different resources from different resource > pools) and this is not what we want.  If we go with the model that you > propose, is there a way to make the scheduler allocate only from one > specific RP if no custom trait is specified in the request? > > Thanks, > > Rado > > > ------------------------------------------------------------------------ > *From:* Eric Fried > *Sent:* Friday, January 26, 2018 10:20 PM > *To:* Radoslav Gerganov > *Cc:* Jay Pipes > *Subject:* VMWare's resource pool / cluster and nested resource providers >   > Rado- > >         It occurred to me just now that the model you described to me > [1] isn't > going to work, unless there's something I really misunderstood. 
> > The problem is that the placement API will think it can allocate > resources from anywhere in the tree for a given allocation request > (unless you always use a single numbered request group [2] in your > flavors, which doesn't sound like a clean plan). > > So if you have *any* model where multiple compute nodes reside > in the > same provider tree, and I come along with a request for say > VCPU:1,MEMORY_MB:2048,DISK_GB:512, placement will happily give you a > candidate with the VCPU from compute10, the memory from compute5, and > the disk from compute7. I'm only guessing that this isn't a viable way > to boot an instance. > > I go back to my earlier suggestion: I think you need to create the > compute nodes as root providers in your ProviderTree, and find some > other way to mark the resource pool associations. You could do it with > custom traits (CUSTOM_RESOURCE_POOL_X, ..._Y, etc.); or you could do it > with aggregates (an aggregate maps to a resource pool; associate all the > compute providers in a given pool with its aggregate uuid). > > Thanks, > Eric > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-26.log.html#t2018-01-26T14:40:44 > [2] > https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html#numbered-request-groups
From colleen at gazlene.net Sat Jan 27 19:16:47 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Sat, 27 Jan 2018 20:16:47 +0100 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 22 January 2018 Message-ID: # Keystone Team Update - Week of 22 January 2018 ## News ### Feature freeze This week was feature freeze and client freeze. While we approved everything we cared about for this release on time, some CI issues (some unexpected and some predictable) delayed these features being merged. The release team has extended the freeze deadline to Monday, which should (hopefully) give us enough time for the last few changes to land before we release RC1. ### RC bugs We've started compiling a list of potential release-critical bugs[1]. Please continue to report bugs as you find them in the RC, and also please focus your attention on fixing these bugs and reviewing bugfixes. [1] https://etherpad.openstack.org/p/keystone-queens-bug-list ### API Discovery We had some interesting discussions this week about experimental APIs and API discovery[2][3]. This was partly in the context of our new "unified limits" API, which is step 1 in providing a cross-project service where quotas for projects could be set and retrieved by other OpenStack services. We're marking this API as "experimental" for the time being while we shake out some of the cross-project usage patterns we'll need to support, but this poses a discoverability problem. We already expose a "home document" which lists all of our API routes and their statuses, e.g. whether they're tagged as "experimental". While this seems like a really useful feature for API consumers as well as a great way to expose experimental features without committing to stability, it seems like the JSON-home standard[4] never quite made it off the ground, so it's not a standard we can rely on API consumers supporting. However, we could certainly build off of what we already have to enhance our API discoverability.
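To make that concrete, a single trimmed entry in keystone's home document looks something like this (a hand-written approximation, not the exact server output):

    {
        "resources": {
            "https://docs.openstack.org/api/openstack-identity/3/rel/limits": {
                "href": "/v3/limits",
                "hints": {"status": "experimental"}
            }
        }
    }

so a client that did speak JSON-home could discover both the route and its stability status in one request.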
However, we could certainly build off of what we already have to enhance our API discoverability [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-01-24.log.html#t2018-01-24T22:27:50 [3] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-01-25.log.html#t2018-01-25T14:43:46 [4] https://mnot.github.io/I-D/json-home/ ### GSoC Projects OpenStack is applying to participate in the Google Summer of Code project[5]. We've started compiling a list of potential projects that a GSoC intern could work on[6]. Please help us add to the list! And if you're interested in being a mentor, please step up! We'll likely discuss this more at the next keystone meeting. [5] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-01-25.log.html#t2018-01-25T14:38:28 [6] https://etherpad.openstack.org/p/keystone-internship-ideas ## Recently Merged Changes Search query: https://goo.gl/hdD9Kw We merged 49 changes this week, though we approved quite a few that are still making their way through the gate, including changes that are part of our main feature objectives. ## Changes that need Attention Search query: https://goo.gl/h9knRA There are 36 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. Expect to see a lot more as we bugstomp over the next two weeks. ## Milestone Outlook https://releases.openstack.org/queens/schedule.html This week marked feature freeze and client freeze, but due to a number of CI problems the release team has extended the feature freeze till Monday and the client freeze until Tuesday[7]. This just means the approved changes that we still have moving through CI should hopefully have time to finish and be merged. [7] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126621.html ## Shout-outs Thanks to the whole team for working so hard this week! ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From mriedemos at gmail.com Sat Jan 27 20:15:34 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 27 Jan 2018 14:15:34 -0600 Subject: [openstack-dev] [api] Idea for a simple way to expose compute driver capabilities in the REST API Message-ID: <032d1a68-e881-668c-cf99-0ad936630924@gmail.com> We've talked about a type of capabilities API within nova and across OpenStack for at least a couple of years (earliest formal session I remember is a session in BCN). At the PTG in Denver, I think there was general sentiment that rather than never do anything because we can't come up with the perfect design that would satisfy all requirements in all projects, we should just do our own things in the projects, at least as a start, and as long as things are well-documented, that's good enough rather than do nothing. Well I'm not sure why but I was thinking about this problem today and came up with something simple here: https://review.openstack.org/#/c/538498/ This builds on the change to pass ironic node traits through the nova-compute resource tracker and push those into placement on the compute node resource provider resource. These traits can then be tied to required traits in a flavor and used for scheduling. The patch takes the existing ComputeDriver.capabilities dict that is on all compute drivers, and for the supported capabilities, exposes those as a CUSTOM_COMPUTE_ trait on the compute node resource provider. 
So for example, a compute node backed by a libvirt driver with qemu<2.10 would have a CUSTOM_COMPUTE_SUPPORTS_MULTIATTACH trait. We could then add something to the request spec when booting a server from a multiattach volume to say this request requires a compute node that has that trait. That's one of the gaps we have with multiattach support today, which is that there is no connection between the request for a multiattach-volume-backed server and the compute host the scheduler picks to build that server, which could lead to server create failures (which aren't rescheduled by the way). Anyway, it's just an idea and I wanted to see what others thought about this. Doing it would bake a certain behavior into how things are tied to the placement REST API, and I'm not sure if that's a good thing or not. It also opens up the question of whether or not these become standard traits in the os-traits library. Alternatively I've always thought we could do something simple like add a "GET /os-hypervisors/{hypervisor_id}/capabilities" API which either makes an RPC call to the compute to get the driver capabilities, or we could store the driver capabilities in the compute_nodes table and the API could pull them from there. Then we could build on that same type of idea by doing something like "GET /servers/{server_id}/capabilities" which would take into account the capabilities based on the compute host that the instance is running on, its flavor, etc. That's all a bigger change though, but it's more direct than just passing things through to placement. I fear it's also something that might never happen because it'll get bogged down in a design committee. -- Thanks, Matt
From sean.mcginnis at gmx.com Sat Jan 27 21:30:00 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sat, 27 Jan 2018 15:30:00 -0600 Subject: [openstack-dev] [tc] Technical Committee Status update, January 26th In-Reply-To: <7eb16933-b614-9cd7-f33d-217420310767@gmail.com> References: <11a04045-2cbd-006c-04f9-a9da33018d6f@openstack.org> <7eb16933-b614-9cd7-f33d-217420310767@gmail.com> Message-ID: <20180127212959.GA27058@sm-xps> > > Sorry. Did anyone talk about goals for Rocky in Sydney? I remember talking > about goals in Boston, I think for Queens. That worked out better since we > had a lot more lead time. > > -- > > Thanks, > > Matt > I think the problem this time around was that, until relatively recently, we only had the Storyboard goal proposed. So since it took so long to actually have some things proposed, we weren't actually in a place where we could have discussed things in Sydney. I know it was raised multiple times to propose potential goals throughout this cycle. But at least for mine, I didn't think of it until we were getting close to the end. I would assume that is the case for the other recent proposals. Not sure how we can really improve that. One nice thing now though is we have a set of goals that definitely will not all be accepted for Rocky. So we will have an existing set of potential future goals to let stew. And for the ones that are not accepted for now, we will have some lead time to start getting a little more foundation laid to make it possible to complete some of them within one cycle. At least that is my hope.
Sean
From prometheanfire at gentoo.org Sun Jan 28 03:37:53 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Sat, 27 Jan 2018 21:37:53 -0600 Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared In-Reply-To: <20180127050511.l526namzrrd6v6ue@gentoo.org> References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org> <20180124072947.u4dv674dv6bcczb6@gentoo.org> <20180125043227.v3mfb5u2ndeennvu@mthode.org> <20180126061238.dayud3ayid5fibzd@gentoo.org> <20180127050511.l526namzrrd6v6ue@gentoo.org> Message-ID: <20180128033753.gy3e2qkf562cqynm@gentoo.org> On 18-01-26 23:05:11, Matthew Thode wrote: > On 18-01-26 00:12:38, Matthew Thode wrote: > > On 18-01-24 22:32:27, Matthew Thode wrote: > > > On 18-01-24 01:29:47, Matthew Thode wrote: > > > > On 18-01-23 01:23:50, Matthew Thode wrote: > > > > > Requirements is freezing Friday at 23:59:59 UTC so any last > > > > > global-requrements updates that need to get in need to get in now. > > > > > > > > > > I'm afraid that my condition has left me cold to your pleas of mercy. > > > > > > > > > > > > > Just your daily reminder that the freeze will happen in about 3 days > > > > time. Reviews seem to be winding down for requirements now (which is > > > > a good sign this release will be chilled to perfection). > > > > > > > > > There's still a couple of things that may cause bumps for iso8601 and > > > oslo.versionedobjects but those are the main things. The msgpack change > > > is also rolling out (thanks dirk :D). Even with all these changes > > > though, in this universe, there's only one absolute. Everything freezes! > > > > > > https://review.openstack.org/535520 (oslo.serialization) > > > > > > > Last day, gate is sad and behind, but not my fault you waited til the > > last minute :P (see my first comment). The Iceman Cometh! > > > > All right everyone, Chill. Looks like we have another couple days to > get stuff in for gate's slowness. The new deadline is 23:59:59 UTC > 29-01-2018. > It's a cold town. The current status is as follows. It looks like the gate is clearing up. oslo.versionedobjects-1.31.2 and iso8601 will be in a gr bump but that's it. monasca-tempest-plugin is not going to get in by freeze at this rate (has fixes needed in the review). There was some stuff needed to get nova-client/osc to work together again, but mriedem seems to have it in hand (and no gr updates it looks like). -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL:
From mikal at stillhq.com Sun Jan 28 20:54:17 2018 From: mikal at stillhq.com (Michael Still) Date: Mon, 29 Jan 2018 07:54:17 +1100 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky -- privsep In-Reply-To: References: Message-ID: Sorry for the slow reply, I've spent the last month camping in a tent and it was wonderful. The privsep transition isn't complete in Nova, but it was never intended to be in Queens. We did get further than we envisaged and it's doable to finish off in Rocky. That said, I feel like we have a nice established pattern for what we think works now, and the changes are largely mechanical -- the holdups tend to be when you encounter some weird history in the codebase that needs to be unravelled along the way. That said, I don't think we're proposing to remove rootwrap entirely, it would still be a supported mechanism for launching the privsep helpers. 
Michael On Fri, Jan 12, 2018 at 1:20 AM, Thierry Carrez wrote: > Emilien Macchi wrote: > > [...] > > Thierry mentioned privsep migration (done in Nova and Zun). (action, > > ping mikal about it). > > It's not "done" in Nova: Mikal planned to migrate all of nova-compute > (arguably the largest service using rootwrap) to privsep during Queens, > but AFAICT it's still work in progress. > > Other projects like cinder and neutron are using it. > > If support in Nova is almost there, it would make a great Queens goal to > get rid of the last rootwrap leftovers and deprecate it. > > Mikal: could you give us a quick update of where you are ? > Anyone interested in championing that as a goal? > > -- > Thierry Carrez (ttx) > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From jamesbeedy at gmail.com Mon Jan 29 00:24:29 2018 From: jamesbeedy at gmail.com (James Beedy) Date: Sun, 28 Jan 2018 16:24:29 -0800 Subject: [openstack-dev] [charms][nova-compute] Services not running that should be: nova-api-metadata Message-ID: Trying to bring up an OpenStack cloud, I keep hitting this issue where nova-compute is giving a status "Services not running that should be: nova-api-metadata" - are others hitting this? juju status | https://paste.ubuntu.com/26480769/ It's possible something has changed in my infrastructure, but I feel like when I deployed this same config last week I had a solid deploy with no errors. I know there are too many components and configuration to start troubleshooting with this little detail, just thought I would check in before diving in head first. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL:
From prometheanfire at gentoo.org Mon Jan 29 02:47:42 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Sun, 28 Jan 2018 20:47:42 -0600 Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared In-Reply-To: <20180128033753.gy3e2qkf562cqynm@gentoo.org> References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org> <20180124072947.u4dv674dv6bcczb6@gentoo.org> <20180125043227.v3mfb5u2ndeennvu@mthode.org> <20180126061238.dayud3ayid5fibzd@gentoo.org> <20180127050511.l526namzrrd6v6ue@gentoo.org> <20180128033753.gy3e2qkf562cqynm@gentoo.org> Message-ID: <20180129024742.wbyf725c3yi2iquy@gentoo.org> On 18-01-27 21:37:53, Matthew Thode wrote: > On 18-01-26 23:05:11, Matthew Thode wrote: > > On 18-01-26 00:12:38, Matthew Thode wrote: > > > On 18-01-24 22:32:27, Matthew Thode wrote: > > > > On 18-01-24 01:29:47, Matthew Thode wrote: > > > > > On 18-01-23 01:23:50, Matthew Thode wrote: > > > > > > Requirements is freezing Friday at 23:59:59 UTC so any last > > > > > > global-requrements updates that need to get in need to get in now. > > > > > > > > > > > > I'm afraid that my condition has left me cold to your pleas of mercy. > > > > > >
Michael On Fri, Jan 12, 2018 at 1:20 AM, Thierry Carrez wrote: > Emilien Macchi wrote: > > [...] > > Thierry mentioned privsep migration (done in Nova and Zun). (action, > > ping mikal about it). > > It's not "done" in Nova: Mikal planned to migrate all of nova-compute > (arguably the largest service using rootwrap) to privsep during Queens, > but AFAICT it's still work in progress. > > Other projects like cinder and neutron are using it. > > If support in Nova is almost there, it would make a great Queens goal to > get rid of the last rootwrap leftovers and deprecate it. > > Mikal: could you give us a quick update of where you are ? > Anyone interested in championing that as a goal? > > -- > Thierry Carrez (ttx) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamesbeedy at gmail.com Mon Jan 29 00:24:29 2018 From: jamesbeedy at gmail.com (James Beedy) Date: Sun, 28 Jan 2018 16:24:29 -0800 Subject: [openstack-dev] [charms][nova-compute] Services not running that should be: nova-api-metadata Message-ID: Trying to bring up an Openstack, I keep hitting this issue where nova-compute is giving a status "Services not running that should be: nova-api-metadata" - are others hitting this? juju status | https://paste.ubuntu.com/26480769/ Its possible something has changed in my infrastructure, but I feel like when I deployed this same config last week I had a solid deploy with no errors. I know there are too many components and configuration to start troubleshooting with this little detail, just thought I would check in before diving in head first. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Mon Jan 29 02:47:42 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Sun, 28 Jan 2018 20:47:42 -0600 Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared In-Reply-To: <20180128033753.gy3e2qkf562cqynm@gentoo.org> References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org> <20180124072947.u4dv674dv6bcczb6@gentoo.org> <20180125043227.v3mfb5u2ndeennvu@mthode.org> <20180126061238.dayud3ayid5fibzd@gentoo.org> <20180127050511.l526namzrrd6v6ue@gentoo.org> <20180128033753.gy3e2qkf562cqynm@gentoo.org> Message-ID: <20180129024742.wbyf725c3yi2iquy@gentoo.org> On 18-01-27 21:37:53, Matthew Thode wrote: > On 18-01-26 23:05:11, Matthew Thode wrote: > > On 18-01-26 00:12:38, Matthew Thode wrote: > > > On 18-01-24 22:32:27, Matthew Thode wrote: > > > > On 18-01-24 01:29:47, Matthew Thode wrote: > > > > > On 18-01-23 01:23:50, Matthew Thode wrote: > > > > > > Requirements is freezing Friday at 23:59:59 UTC so any last > > > > > > global-requrements updates that need to get in need to get in now. > > > > > > > > > > > > I'm afraid that my condition has left me cold to your pleas of mercy. > > > > > > > > > > > > > > > > Just your daily reminder that the freeze will happen in about 3 days > > > > > time. Reviews seem to be winding down for requirements now (which is > > > > > a good sign this release will be chilled to perfection). > > > > > > > > > > > > > There's still a couple of things that may cause bumps for iso8601 and > > > > oslo.versionedobjects but those are the main things. The msgpack change > > > > is also rolling out (thanks dirk :D). Even with all these changes > > > > though, in this universe, there's only one absolute. Everything freezes! 
> > > > > https://review.openstack.org/535520 (oslo.serialization) > > > > > > > > Last day, gate is sad and behind, but not my fault you waited til the > > > last minute :P (see my first comment). The Iceman Cometh! > > > > > > All right everyone, Chill. Looks like we have another couple days to > > get stuff in for gate's slowness. The new deadline is 23:59:59 UTC > > 29-01-2018. > > > > It's a cold town. The current status is as follows. It looks like the > gate is clearing up. oslo.versionedobjects-1.31.2 and iso8601 will be > in a gr bump but that's it. monasca-tempest-plugin is not going to get > in by freeze at this rate (has fixes needed in the review). There was > some stuff needed to get nova-client/osc to work together again, but > mriedem seems to have it in hand (and no gr updates it looks like). > Allow me to break the Ice. My name is Freeze. Learn it well for it's the chilling sound of your doom! Can you feel it coming? The icy cold of space! It's less than 24 hours til the freeze formally happens; the only outstanding item is that oslo.versionedobjects seems to need another fix for the iso8601 bump. osc-placement won't be added to requirements at this point as there has been no response on their review. https://review.openstack.org/538515 python-vitrageclient looks like it'll make it in if gate doesn't break. msgpack may also be late, but we'll see (just workflow'd). openstacksdk may need a gr bump, I'm waiting on a response from mordred https://review.openstack.org/538695 -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jean-philippe at evrard.me Mon Jan 29 07:58:28 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 29 Jan 2018 07:58:28 +0000 Subject: [openstack-dev] [openstack-ansible] Limiting pip wheel builds for OpenStack clients In-Reply-To: <1516827827-sup-2097@lrrr.local> References: <42ffc325-4162-5daa-b413-9c5d2cc60835@mhtx.net> <4B1BB321037C0849AAE171801564DFA6889B7F63@IRSMSX107.ger.corp.intel.com> <1516827827-sup-2097@lrrr.local> Message-ID: I added my comment/opinion on the bug. Thanks for reporting this, Major! From lijie at unitedstack.com Mon Jan 29 09:27:34 2018 From: lijie at unitedstack.com (李杰) Date: Mon, 29 Jan 2018 17:27:34 +0800 Subject: [openstack-dev] [nova] Nova rescue inject password failed Message-ID: Hi, all: I want to access my instance in the rescue state using the temporary password which nova rescue gave me. But this password doesn't work.
Can I ask how this password is injected into the instance? I can't find any > specification of how it is done. I looked at the rescue code, but it shows > the password as injected. > I use libvirt as the virt driver. The web said to > set "[libvirt] inject_password=true", but it didn't work. Is it a bug? Can > you give me some advice? Help in troubleshooting this issue will be > appreciated. > Ideally your rescue image will support cloud-init and you would use a config disk. For password injection to work you need inject_password=True, inject_partition=-1 (*NOT* -2, which is the default), and for libguestfs to be correctly installed on your compute hosts. But to reiterate, ideally your rescue image would support cloud-init and you would use a config disk. Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) -------------- next part -------------- An HTML attachment was scrubbed... URL: From moshele at mellanox.com Mon Jan 29 10:27:10 2018 From: moshele at mellanox.com (Moshe Levi) Date: Mon, 29 Jan 2018 10:27:10 +0000 Subject: [openstack-dev] [tripleo] opendaylight OpenDaylightConnectionProtocol deprecation issue Message-ID: Hi all, It seems that this commit [1] deprecated the OpenDaylightConnectionProtocol parameter, but it also removed it. This is causing the following issue when we deploy opendaylight non-containerized. See [2] One solution is to add back the OpenDaylightConnectionProtocol [3]; the other solution is to remove the OpenDaylightConnectionProtocol from the deprecated parameter_groups [4]. [1] - https://github.com/openstack/tripleo-heat-templates/commit/af4ce05dc5270b84864a382ddb2a1161d9082eab [2] - http://paste.openstack.org/show/656702/ [3] - https://github.com/openstack/tripleo-heat-templates/commit/af4ce05dc5270b84864a382ddb2a1161d9082eab#diff-21674daa44a327c016a80173efeb10e7L20 [4] - https://github.com/openstack/tripleo-heat-templates/commit/af4ce05dc5270b84864a382ddb2a1161d9082eab#diff-21674daa44a327c016a80173efeb10e7R112 -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.bourke at oracle.com Mon Jan 29 11:13:48 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Mon, 29 Jan 2018 11:13:48 +0000 Subject: [openstack-dev] [kolla] Policy regarding template customisation Message-ID: Hi all, I'd like to revisit our policy of not templating everything in kolla-ansible's template files. This is a policy that was set in place very early on in kolla-ansible's development, but I'm concerned we haven't been very consistent with it. This leads to confusion for contributors and operators - "should I template this and submit a patch, or do I need to start using my own config files?". The docs[0] are currently clear: "The Kolla upstream community does not want to place key/value pairs in the Ansible playbook configuration options that are not essential to obtaining a functional deployment." In practice though our templates contain many options that are not necessary, and plenty of patches have merged that while very useful to operators, are not necessary to an 'out of the box' deployment. So I'd like us to revisit the questions: 1) Is kolla-ansible attempting to be a 'batteries included' tool, which caters to operators via key/value config options? 2) Or, is it to be a solid reference implementation, where any degree of customisation implies a clear 'bring your own configs' type policy.
If 1), then we should potentially: * Update our docs to remove the referenced paragraph * Look at reorganising files like globals.yml into something more maintainable. If 2), * We should make it clear to reviewers that patches templating options that are non essential should not be accepted. * Encourage patches to strip down existing config files to an absolute minimum. * Make this policy more clear in docs / templates to avoid frustration on the part of operators. Thoughts? Thanks, -Paul [0] https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization From lijie at unitedstack.com Mon Jan 29 11:50:27 2018 From: lijie at unitedstack.com (李杰) Date: Mon, 29 Jan 2018 19:50:27 +0800 Subject: Re: [openstack-dev] [nova] Nova rescue inject password failed In-Reply-To: References: Message-ID: Yeah, but I don't know why we have to use a config disk; we can also get the metadata from the metadata RESTful service. Now I set inject_password=True and inject_partition=-1 in my nova.conf. And libguestfs-1.36.3-6.el7_4.3.x86_64 is also installed. But it still doesn't work. ------------------ Original ------------------ From: "Matthew Booth"; Date: Mon, Jan 29, 2018 05:57 PM To: "OpenStack Development Mailing List"; Subject: Re: [openstack-dev] [nova] Nova rescue inject password failed On 29 January 2018 at 09:27, 李杰 wrote: Hi, all: I want to access my instance in the rescue state using the temporary password which nova rescue gave me. But this password doesn't work. Can I ask how this password is injected into the instance? I can't find any specification of how it is done. I looked at the rescue code, but it shows the password as injected. I use libvirt as the virt driver. The web said to set "[libvirt] inject_password=true", but it didn't work. Is it a bug? Can you give me some advice? Help in troubleshooting this issue will be appreciated. Ideally your rescue image will support cloud-init and you would use a config disk. For password injection to work you need inject_password=True, inject_partition=-1 (*NOT* -2, which is the default), and for libguestfs to be correctly installed on your compute hosts. But to reiterate, ideally your rescue image would support cloud-init and you would use a config disk. Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) -------------- next part -------------- An HTML attachment was scrubbed... URL: From lhinds at redhat.com Mon Jan 29 12:38:21 2018 From: lhinds at redhat.com (Luke Hinds) Date: Mon, 29 Jan 2018 12:38:21 +0000 Subject: [openstack-dev] [security] Security PTG Planning, x-project request for topics. In-Reply-To: References: Message-ID: Just a reminder, as we have not had much uptake yet: are there any projects (new and old) that would like to make use of the security SIG, either for gaining another perspective on security challenges / blueprints etc or for help gaining some cross-project collaboration? On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds wrote: > Hello All, > > I am seeking topics for the PTG from all projects, as this will be where > we try out our new form of being a SIG. > > For this PTG, we hope to facilitate more cross-project collaboration > topics now that we are a SIG, so if your project has a security need / > problem / proposal then please do use the security SIG room where a larger > audience may be present to help solve problems and gain x-project consensus. > > Please see our PTG planning pad [0] where I encourage you to add to the > topics.
> > [0] https://etherpad.openstack.org/p/security-ptg-rocky > > -- > Luke Hinds > Security Project PTL > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmsimard at redhat.com Mon Jan 29 13:29:35 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Mon, 29 Jan 2018 08:29:35 -0500 Subject: [openstack-dev] [all][kolla][rdo] Collaboration with Kolla for the RDO test days Message-ID: Hi ! For those who might be unfamiliar with the RDO [1] community project: we hang out in #rdo, we don't bite and we build vanilla OpenStack packages. These packages are what allows you to leverage one of the deployment projects such as TripleO, PackStack or Kolla to deploy on CentOS or RHEL. The RDO community collaborates with these deployment projects by providing trunk and stable packages in order to let them develop and test against the latest and the greatest of OpenStack. RDO test days typically happen around a week after an upstream milestone has been reached [2]. The purpose is to get everyone together in #rdo: developers, users, operators, maintainers -- and test not just RDO but OpenStack itself as installed by the different deployment projects. We tried something new at our last test day [3] and it worked out great. Instead of encouraging participants to install their own cloud for testing things, we supplied a cloud of our own... a bit like a limited duration TryStack [4]. This lets users without the operational knowledge, time or hardware to install an OpenStack environment to see what's coming in the upcoming release of OpenStack and get the feedback loop going ahead of the release. We used Packstack for the last deployment and invited Packstack cores to deploy, operate and troubleshoot the installation for the duration of the test days. The idea is to rotate between the different deployment projects to give every interested project a chance to participate. Last week, we reached out to Kolla to see if they would be interested in participating in our next RDO test days [5] around February 8th. We supply the bare metal hardware and their core contributors get to deploy and operate a cloud with real users and developers poking around. All around, this is a great opportunity to get feedback for RDO, Kolla and OpenStack. We'll be advertising the event a bit more as the test days draw closer but until then, I thought it was worthwhile to share some context for this new thing we're doing. Let me know if you have any questions ! Thanks, [1]: https://www.rdoproject.org/ [2]: https://www.rdoproject.org/testday/ [3]: https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/ [4]: http://trystack.org/ [5]: http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-01-24-16.00.log.html David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] From balazs.gibizer at ericsson.com Mon Jan 29 13:48:57 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 29 Jan 2018 14:48:57 +0100 Subject: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute In-Reply-To: <874ln8v6ye.fsf@meyer.lemoncheese.net> References: <1516975504.9811.5@smtp.office365.com> <=?iso-8859-1?Q?=22Bal?= =?iso-8859-1?Q?=E1zs?= Gibizer"'s message of "Fri, 26 Jan 2018 15:05:04 +0100"> Message-ID: <1517233737.9811.8@smtp.office365.com> On Fri, Jan 26, 2018 at 6:57 PM, James E. 
Blair wrote: > Balázs Gibizer writes: > >> Hi, >> >> I'm getting more and more confused how the zuul job hierarchy works >> or >> is supposed to work. > > Hi! > > First, you (or others) may or may not have seen this already -- some > of > it didn't exist when we first rolled out v3, and some of it has > changed > -- but here are the relevant bits of the documentation that should > help > explain what's going on. It helps to understand freezing: > > https://docs.openstack.org/infra/zuul/user/config.html#job > > and matching: > > https://docs.openstack.org/infra/zuul/user/config.html#matchers Thanks for the doc references they are really helpful. > >> First there was a bug in nova that some functional tests are not >> triggered although the job (re-)definition in the nova part of the >> project-config should not prevent it to run [1]. >> >> There we figured out that irrelevant-files parameter of the jobs are >> not something that can be overriden during re-definition or through >> parent-child relationship. The base job openstack-tox-functional has >> an irrelevant-files attribute that lists '^doc/.*$' as a path to be >> ignored [2]. In the other hand the nova part of the project-config >> tries to make this ignore less broad by adding only >> '^doc/source/.*$' >> . This does not work as we expected and the job did not run on >> changes >> that only affected ./doc/notification_samples path. We are fixing it >> by defining our own functional job in nova tree [4]. >> >> [1] https://bugs.launchpad.net/nova/+bug/1742962 >> [2] >> >> https://github.com/openstack-infra/openstack-zuul-jobs/blob/1823e3ea20e6dfaf37786a6ff79c56cb786bf12c/zuul.d/jobs.yaml#L380 >> [3] >> >> https://github.com/openstack-infra/project-config/blob/1145ab1293f5fa4d34c026856403c22b091e673c/zuul.d/projects.yaml#L10509 >> [4] https://review.openstack.org/#/c/533210/ > > This is correct. The issue here is that the irrelevant-files > definition > on openstack-tox-functional is too broad. We need to be *extremely* > careful applying matchers to jobs like that. Generally I think that > irrelevant-files should be reserved for the project-pipeline > invocations > only. That's how they were effectively used in Zuul v2, after all. > > Essentially, when someone puts an irrelevant-files section on a job > like > that, they are saying "this job will never apply to these files, > ever." > That's clearly not correct in this case. > > So our solutions are to acknowledge that it's over-broad, and reduce > or > eliminate the list in [2] and expand it elsewhere (as in [3]). Or we > can say "we were generally correct, but nova is extra special so it > needs its own job". If that's the choice, then I think [4] is a fine > solution. The [4] just get merged this morning so I think that is OK for us now. > >> Then I started looking into other jobs to see if we made similar >> mistakes. I found two other examples in the nova related jobs where >> redefining the irrelevant-files of a job caused problems. In these >> examples nova tried to ignore more paths during the override than >> what >> was originally ignored in the job definition but that did not work >> [5][6]. >> >> [5] https://bugs.launchpad.net/nova/+bug/1745405 (temptest-full) > > As noted in that bug, the tempest-full job is invoked on nova via this > stanza: > > https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10674-L10688 > > As expected, that did not match. 
There is a second invocation of > tempest-full on nova here: > > http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/zuul-legacy-project-templates.yaml#n126 > > That has no irrelevant-files matches, and so matches everything. If > you > drop the use of that template, it will work as expected. Or, if you > can > say with some certainty that nova's irrelevant-files set is not > over-broad, you could move the irrelevant-files from nova's invocation > into the template, or even the job, and drop nova's individual > invocation. Thanks for the explanation, it is much clearer now. With this info I think I was able to propose a patcha that fixes the two bugs: https://review.openstack.org/#/c/538908/ > >> [6] https://bugs.launchpad.net/nova/+bug/1745431 (neutron-grenade) > > The same template invokes this job as well. > >> So far the problem seemed to be consistent (i.e. override does not >> work). But then I looked into neutron-grenade-multinode. That job is >> defined in neutron tree (like neutron-grenade) but nova also refers >> to >> it in nova section of the project-config with different >> irrelevant-files than their original definition. So I assumed that >> this will lead to similar problem than in case of neutron-grenade, >> but >> it doesn't. >> >> The neutron-grenade-multinode original definition [1] does not try >> to >> ignore the 'nova/tests' path but the nova side of the definition in >> the project config does try to ignore that path [8]. Interestingly a >> patch in nova that only changes under the path: nova/tests/ does not >> trigger the job [9]. So in this case overriding the irrelevant-files >> of a job works. (It seems that overriding >> neutron-tempest-linuxbridge >> irrelevant-files works too). >> >> [7] >> >> https://github.com/openstack/neutron/blob/7e3d6a18fb928bcd303a44c1736d0d6ca9c7f0ab/.zuul.yaml#L140-L159 >> [8] >> >> https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10516-L10530 >> [9] https://review.openstack.org/#/c/537936/ >> >> I don't see what is the difference between neutron-grenade and >> neutron-grenade-multinode jobs definitions from this perspective but >> it seems that the irrelevent-files attribute behaves inconsistently >> in these two jobs. Could you please help me undestand how >> irrelevant-files in overriden jobs supposed to work? > > These jobs only have the one invocation -- on the nova project -- and > are not added via a template. > > Hopefully that explains the difference. OK, now I think I see the difference. So far I mixed the definition and the invocation of a job in my head. Thanks for the explanation. Your mail really helped me understand the whole situation. cheers, gibi > > Basically, the irrelevant-files on at least one project-pipeline > invocation of a job have to match, as well as at least one definition > of > the job. So if both things have irrelevant-files, then it's > effectively > a union of the two. > > I used a tool to help verify some of the information in this message, > especially the bugs [5] and [6]. You can ask Zuul to output debug > information about its job selection if you're dealing with confusing > situations like this. I went ahead and pushed a new patchset to your > test change to demonstrate how: > > https://review.openstack.org/537936 > > When it finishes running all the tests (in a few hours), it should > include in its report debug information about the decision-making > process for the jobs it ran. 
It outputs similar information into the > debug logs; so that we don't have to wait for it to see what it looks > like, here is that copy: > > http://paste.openstack.org/show/653729/ > > The relevant lines for [5] are: > > 2018-01-26 13:07:53,560 DEBUG zuul.layout: Pipeline variant tempest-full branches: None source: > openstack-infra/openstack-zuul-jobs/zuul.d/zuul-legacy-project-templates.yaml at master#126> > matched > 2018-01-26 13:07:53,560 DEBUG zuul.layout: Pipeline variant tempest-full branches: None source: > openstack-infra/project-config/zuul.d/projects.yaml at master#10485> did > not match > > Note the project-file-branch-line-number references are especially > helpful. > > -Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.lei.fly at gmail.com Mon Jan 29 14:12:50 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Mon, 29 Jan 2018 22:12:50 +0800 Subject: [openstack-dev] [kolla] Policy regarding template customisation In-Reply-To: References: Message-ID: Thank Paul for pointing this out. For me, I prefer to stay consistent with 2). There are thousands of configuration options in OpenStack; it is hard for Kolla to add every key/value pair to the playbooks. Currently, the merge_config approach is the better solution. On Mon, Jan 29, 2018 at 7:13 PM, Paul Bourke wrote: > Hi all, > > I'd like to revisit our policy of not templating everything in > kolla-ansible's template files. This is a policy that was set in place very > early on in kolla-ansible's development, but I'm concerned we haven't been > very consistent with it. This leads to confusion for contributors and > operators - "should I template this and submit a patch, or do I need to > start using my own config files?". > > The docs[0] are currently clear: > > "The Kolla upstream community does not want to place key/value pairs in > the Ansible playbook configuration options that are not essential to > obtaining a functional deployment." > > In practice though our templates contain many options that are not > necessary, and plenty of patches have merged that while very useful to > operators, are not necessary to an 'out of the box' deployment. > > So I'd like us to revisit the questions: > > 1) Is kolla-ansible attempting to be a 'batteries included' tool, which > caters to operators via key/value config options? > > 2) Or, is it to be a solid reference implementation, where any degree of > customisation implies a clear 'bring your own configs' type policy. > > If 1), then we should potentially: > > * Update our docs to remove the referenced paragraph > * Look at reorganising files like globals.yml into something more > maintainable. > > If 2), > > * We should make it clear to reviewers that patches templating options > that are non essential should not be accepted. > * Encourage patches to strip down existing config files to an absolute > minimum. > * Make this policy more clear in docs / templates to avoid frustration on > the part of operators. > > Thoughts?
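To give Jeffrey's merge_config point above a concrete shape: under a 'bring your own configs' policy an operator does not wait for a key to be templated, but drops the raw setting into a service override file that kolla-ansible merges over the rendered template. The file below is only an example of that workflow (node_custom_config defaults to /etc/kolla/config):

# /etc/kolla/config/nova.conf -- merged on top of the generated
# nova.conf for the nova services at deploy time
[DEFAULT]
cpu_allocation_ratio = 4.0

[libvirt]
inject_password = True

Nothing in this file has to be templated in kolla-ansible for the deployment to pick it up, which is the maintainability argument behind option 2).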
> > Thanks, > -Paul > > [0] https://docs.openstack.org/kolla-ansible/latest/admin/deploy > ment-philosophy.html#why-not-template-customization > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From wolverine.av at gmail.com Mon Jan 29 14:24:15 2018 From: wolverine.av at gmail.com (AdityaVaja) Date: Mon, 29 Jan 2018 19:54:15 +0530 Subject: [openstack-dev] [Release-job-failures][release][neutron] Release of openstack/networking-bigswitch failed In-Reply-To: <1516997481-sup-5482@lrrr.local> References: <1516997481-sup-5482@lrrr.local> Message-ID: Ouch - sorry about that Doug. I bumped the version after a change merged. I'll watch out on the mailing list for an update when its good to proceed. On Sat, Jan 27, 2018 at 1:43 AM, Doug Hellmann wrote: > Excerpts from zuul's message of 2018-01-26 19:52:26 +0000: > > Build failed. > > > > - release-openstack-python finger://ze03.openstack.org/ > 59b750c198424d8481e2b18421c3c32c : POST_FAILURE in 8m 16s > > - announce-release announce-release : SKIPPED > > - propose-update-constraints propose-update-constraints : SKIPPED > > > > I recommend that teams managing their own independent releases hold off > until we give the all clear and start tagging official releases again > because we're still hitting issues with the log server that breaks some > of the release pipeline. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Aditya Vaja -------------- next part -------------- An HTML attachment was scrubbed... URL: From ayoung at redhat.com Mon Jan 29 14:29:18 2018 From: ayoung at redhat.com (Adam Young) Date: Mon, 29 Jan 2018 09:29:18 -0500 Subject: [openstack-dev] [security] Security PTG Planning, x-project request for topics. In-Reply-To: References: Message-ID: Bug 968696 and System Roles. Needs to be addressed across the Service catalog. On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds wrote: > Just a reminder as we have not had many uptakes yet.. > > Are there any projects (new and old) that would like to make use of the > security SIG for either gaining another perspective on security challenges > / blueprints etc or for help gaining some cross project collaboration? > > On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds wrote: > >> Hello All, >> >> I am seeking topics for the PTG from all projects, as this will be where >> we try out are new form of being a SIG. >> >> For this PTG, we hope to facilitate more cross project collaboration >> topics now that we are a SIG, so if your project has a security need / >> problem / proposal than please do use the security SIG room where a larger >> audience may be present to help solve problems and gain x-project consensus. >> >> Please see our PTG planning pad [0] where I encourage you to add to the >> topics. 
>> >> [0] https://etherpad.openstack.org/p/security-ptg-rocky >> >> -- >> Luke Hinds >> Security Project PTL >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Mon Jan 29 14:30:37 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 29 Jan 2018 08:30:37 -0600 Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared In-Reply-To: <20180129024742.wbyf725c3yi2iquy@gentoo.org> References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org> <20180124072947.u4dv674dv6bcczb6@gentoo.org> <20180125043227.v3mfb5u2ndeennvu@mthode.org> <20180126061238.dayud3ayid5fibzd@gentoo.org> <20180127050511.l526namzrrd6v6ue@gentoo.org> <20180128033753.gy3e2qkf562cqynm@gentoo.org> <20180129024742.wbyf725c3yi2iquy@gentoo.org> Message-ID: <20180129143036.GA27347@sm-xps> > > ... the > only outstanding item is that oslo.versionedobjects seems to need > another fix for the iso8601 bump. ... I took a look at the failing jobs for the oslo.versionobjects bump, and it appears this is not directly related. There are failures in nova, cinder, and keystone with the new oslo.versionedobjects. This appears to be due to a mix of UTC time handling in these projects between their own local implementations and usage of the timeutils inside oslo.versionedobjects. The right answer might be to get all of these local implementations moved out into something like oslo.utils, but for the time being, these patches will need to land before we can raise oslo.versionedobjects (and raise the iso8601 version that triggered this work). Cinder - https://review.openstack.org/#/c/536182/2 Nova - https://review.openstack.org/#/c/535700/3 Keystone - https://review.openstack.org/#/c/538263/1 There are similar patches in other projects (I think they are all using the same topic) that will need to land as well that don't appear to be covered in the requirements cross jobs. Sean From balazs.gibizer at ericsson.com Mon Jan 29 14:45:05 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 29 Jan 2018 15:45:05 +0100 Subject: [openstack-dev] [nova] Notification update week 5 Message-ID: <1517237105.9811.9@smtp.office365.com> Hi, Here is the status update / focus settings mail for w5. Bugs ---- [High] https://bugs.launchpad.net/nova/+bug/1742962 nova functional test does not triggered on notification sample only changes Fix merged to master, backports are on the gate. When backport lands we can merge the removal of the triggering of the old jobs for nova by merging https://review.openstack.org/#/c/533608/ As a followup I did some investigation to see if other jobs are affected with the same problem, see ML http://lists.openstack.org/pipermail/openstack-dev/2018-January/126616.html [High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when sending notification during attach_interface Fix merged to master. 
Backports have been proposed: * Pike: https://review.openstack.org/#/c/531745/ * Queens: https://review.openstack.org/#/c/531746/ [High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields We need to understand first how this can happen. Based on the comments from the bug it seems it happens after upgrading an old deployment. So it might be some problem with the online data migration that moves the flavor into the instance. [Low] https://bugs.launchpad.net/nova/+bug/1487038 nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS Old abandoned patches exist but need somebody to pick them up: * https://review.openstack.org/#/c/215308/ * https://review.openstack.org/#/c/388345/ Versioned notification transformation ------------------------------------- Feature Freeze hit but the team made a good last-minute push. Altogether we merged 17 transformation patches in Queens. \o/ Thanks to everybody who contributed with code, review, or encouragement. We have 22 transformations left to reach feature parity, which means we have a chance to finish this work in Rocky. I also put this up as a possible internship idea on the wiki: https://wiki.openstack.org/wiki/GSoC2018#Internship_ideas Reno for the Queens work is up to date: https://review.openstack.org/#/c/518018 Introduce instance.lock and instance.unlock notifications --------------------------------------------------------- A specless bp has been proposed for the Rocky cycle https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances Some preliminary discussion happened in an earlier patch https://review.openstack.org/#/c/526251/ Add the user id and project id of the user who initiated the instance action to the notification ----------------------------------------------------------------- A new bp has been proposed https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications As the user who initiates the instance action (e.g. reboot) could be different from the user owning the instance, it would make sense to include the user_id and project_id of the action initiator in the versioned instance action notifications as well. Factor out duplicated notification sample ----------------------------------------- As https://bugs.launchpad.net/nova/+bug/1742962 is merged it is safe to look at the patches on https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open again. Weekly meeting -------------- The next meeting will be held on 30th of January on #openstack-meeting-4 https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180130T170000 Cheers, gibi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Jan 29 15:18:30 2018 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 29 Jan 2018 15:18:30 +0000 Subject: [openstack-dev] [kolla] Policy regarding template customisation In-Reply-To: References: Message-ID: I have to agree - I prefer the minimal approach of option 2. It keeps the kolla-ansible code base small and easy to understand. The required test matrix is therefore relatively small (although better coverage of services in CI would be good). Finally, the approach has allowed the project to move quickly and support deployment of many OpenStack projects. Customised options shouldn't be outlawed though.
There are times when they are very useful and/or required: * some things that cannot be expressed in config files alone * some options apply to many/all services (sometimes with subtle differences in configuration) * some config files are not in a format that can be easily merged (HAProxy, dnsmasq, etc.) These should be the exception, rather than the rule, however. Mark On 29 January 2018 at 14:12, Jeffrey Zhang wrote: > Thank Paul for pointing this out. > > for me, I prefer to consist with 2) > > There are thousands of configuration in OpenStack, it is hard for Kolla to > add every key/value pair in playbooks. Currently, the merge_config is a > more > better solutions. > > > > > On Mon, Jan 29, 2018 at 7:13 PM, Paul Bourke > wrote: > >> Hi all, >> >> I'd like to revisit our policy of not templating everything in >> kolla-ansible's template files. This is a policy that was set in place very >> early on in kolla-ansible's development, but I'm concerned we haven't been >> very consistent with it. This leads to confusion for contributors and >> operators - "should I template this and submit a patch, or do I need to >> start using my own config files?". >> >> The docs[0] are currently clear: >> >> "The Kolla upstream community does not want to place key/value pairs in >> the Ansible playbook configuration options that are not essential to >> obtaining a functional deployment." >> >> In practice though our templates contain many options that are not >> necessary, and plenty of patches have merged that while very useful to >> operators, are not necessary to an 'out of the box' deployment. >> >> So I'd like us to revisit the questions: >> >> 1) Is kolla-ansible attempting to be a 'batteries included' tool, which >> caters to operators via key/value config options? >> >> 2) Or, is it to be a solid reference implementation, where any degree of >> customisation implies a clear 'bring your own configs' type policy. >> >> If 1), then we should potentially: >> >> * Update ours docs to remove the referenced paragraph >> * Look at reorganising files like globals.yml into something more >> maintainable. >> >> If 2), >> >> * We should make it clear to reviewers that patches templating options >> that are non essential should not be accepted. >> * Encourage patches to strip down existing config files to an absolute >> minimum. >> * Make this policy more clear in docs / templates to avoid frustration on >> the part of operators. >> >> Thoughts? >> >> Thanks, >> -Paul >> >> [0] https://docs.openstack.org/kolla-ansible/latest/admin/deploy >> ment-philosophy.html#why-not-template-customization >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amotoki at gmail.com Mon Jan 29 15:36:30 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 30 Jan 2018 00:36:30 +0900 Subject: [openstack-dev] [release][requirements][horizon] django-openstack-auth retirement Message-ID: Hi the release team and the requirements team, I would like to ask for advice on django-openstack-auth (DOA) retirement. In the thread announcing the DOA retirement last week, I was advised to release a transition package which provides no python module and to make horizon depend on it so that the transition can be smooth. http://lists.openstack.org/pipermail/openstack-dev/2018-January/thread.html#126428 To achieve this, the horizon team needs: * to release django-openstack-auth 4.0.0 (the current version is 3.5.0 so 4.0.0 makes sense) https://review.openstack.org/#/c/538709/ * to add django-openstack-auth 4.0.0 to g-r and u-c (for queens) * to add django-openstack-auth 4.0.0 to horizon queens RC1 I think there are two options in horizon queens: - to release the transition package of django-openstack-auth 4.0.0 as described above, or - to just document the retirement of django-openstack-auth The requirement release is in 9 hours. I would like to ask for advice from the release and requirements team. Thanks, Akihiro 2018-01-27 2:45 GMT+09:00 Jeremy Stanley : > On 2018-01-24 08:47:30 -0600 (-0600), Monty Taylor wrote: > [...] >> Horizon and neutron were updated to start publishing to PyPI >> already. >> >> https://review.openstack.org/#/c/531822/ >> >> This is so that we can start working on unwinding the neutron and >> horizon specific versions of jobs for neutron and horizon plugins. > > Nice! I somehow missed that merging a couple of weeks back. In that > case, I suppose we could in theory do one final transitional package > upload of DOA depending on the conflicting Horizon release if others > think that's a good idea. > -- > Jeremy Stanley > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Mon Jan 29 15:38:58 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 29 Jan 2018 10:38:58 -0500 Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared In-Reply-To: <20180129024742.wbyf725c3yi2iquy@gentoo.org> References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org> <20180124072947.u4dv674dv6bcczb6@gentoo.org> <20180125043227.v3mfb5u2ndeennvu@mthode.org> <20180126061238.dayud3ayid5fibzd@gentoo.org> <20180127050511.l526namzrrd6v6ue@gentoo.org> <20180128033753.gy3e2qkf562cqynm@gentoo.org> <20180129024742.wbyf725c3yi2iquy@gentoo.org> Message-ID: <1517240157-sup-500@lrrr.local> Excerpts from Matthew Thode's message of 2018-01-28 20:47:42 -0600: > On 18-01-27 21:37:53, Matthew Thode wrote: > > On 18-01-26 23:05:11, Matthew Thode wrote: > > > On 18-01-26 00:12:38, Matthew Thode wrote: > > > > On 18-01-24 22:32:27, Matthew Thode wrote: > > > > > On 18-01-24 01:29:47, Matthew Thode wrote: > > > > > > On 18-01-23 01:23:50, Matthew Thode wrote: > > > > > > > Requirements is freezing Friday at 23:59:59 UTC so any last > > > > > > > global-requrements updates that need to get in need to get in now. > > > > > > > > > > > > > > I'm afraid that my condition has left me cold to your pleas of mercy.
> > > > > > > > > > > > > > Just your daily reminder that the freeze will happen in about 3 days > > > > > > time. Reviews seem to be winding down for requirements now (which is > > > > > > a good sign this release will be chilled to perfection). > > > > > > > > > > There's still a couple of things that may cause bumps for iso8601 and > > > > > oslo.versionedobjects but those are the main things. The msgpack change > > > > > is also rolling out (thanks dirk :D). Even with all these changes > > > > > though, in this universe, there's only one absolute. Everything freezes! > > > > > > > > > > https://review.openstack.org/535520 (oslo.serialization) > > > > > > > > Last day, gate is sad and behind, but not my fault you waited til the > > > > last minute :P (see my first comment). The Iceman Cometh! > > > > > > All right everyone, Chill. Looks like we have another couple days to > > > get stuff in for gate's slowness. The new deadline is 23:59:59 UTC > > > 29-01-2018. > > > > > > It's a cold town. The current status is as follows. It looks like the > > gate is clearing up. oslo.versionedobjects-1.31.2 and iso8601 will be > > in a gr bump but that's it. monasca-tempest-plugin is not going to get > > in by freeze at this rate (has fixes needed in the review). There was > > some stuff needed to get nova-client/osc to work together again, but > > mriedem seems to have it in hand (and no gr updates it looks like). > > > Allow me to break the Ice. My name is Freeze. Learn it well for it's > the chilling sound of your doom! Can you feel it coming? The icy cold > of space! It's less than 24 hours til the freeze formally happens, the > only outstanding item is that oslo.versionedobjects seems to need > another fix for the iso8601 bump. osc-placement won't be added to > requirements at this point as there has been no response on their > review. > > https://review.openstack.org/538515 > > python-vitrageclient looks like it'll make it in if gate doesn't break. > msgpack may also be late, but we'll see (just workflow'd). > openstacksdk may need a gr bump, I'm waiting on a response from mordred > > https://review.openstack.org/538695 > We also have pending releases for cloudkittyclient, blazarclient, django-openstack-auth, swiftclient, and zaqarclient. Those are blocked by the current infra issues, and we really should not freeze the requirements list until those libraries are released and we have a chance to try to update the constraints list to include them. From juliaashleykreger at gmail.com Mon Jan 29 15:50:56 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 29 Jan 2018 07:50:56 -0800 Subject: [openstack-dev] [ironic] FFE request for deprecating python-oneviewclient from OneView interfaces In-Reply-To: <1278623784.157763.1516726644546.JavaMail.zimbra@lsd.ufcg.edu.br> References: <1278623784.157763.1516726644546.JavaMail.zimbra@lsd.ufcg.edu.br> Message-ID: Circling back to this: since Dmitry and I agreed to continue reviewing this work, I believe we have implicitly agreed to grant this FFE and continue to land this work. Should anyone disagree, please reply indicating as such. I will also bring this up during our weekly meeting that is in about an hour. -Julia On Tue, Jan 23, 2018 at 8:57 AM, Ricardo Araújo wrote: > Hi, > > I'd like to request an FFE for deprecating python-oneviewclient and > introduce python-hpOneView in OneView interfaces [1].
This migration was > performed in Pike cycle but it was reverted due to the lack of a CA > certificate validation in python-hpOneView (available since 4.4.0 [2]). > > As the introduction of the new lib was already merged [3], following changes > are in scope of this FFE: > 1. Replace python-oneviewclient by python-hpOneView in power, management, > inspect and deployment interfaces for OneView hardware type [4] > 2. Move existing ironic related validation hosted in python-oneviewclient to > ironic code base [5] > 3. Remove python-oneviewclient dependency from Ironic [6] > > By performing this migration in Queens we will be able to concentrate > efforts in maintaining a single python lib for accessing HPE OneView while > being able to enhance current interfaces with features already provided in > python-hpOneView like soft power operations [7] and timeout for power > operations [8]. > > Despite being a big change to merge close to the end of the cycle, all > migration patches have received core reviewers attention lately and a few > positive reviews. They're also passing in both the community and UFCG > OneView CI (running deployment tests with HPE OneView). Postponing this will > be a blocker for the teams responsible for maintaining this hardware type > and both python libs for the next cycle. > > dtantsur and TheJulia have kindly agreed to keep reviewing this work during > the feature freeze window, if it gets an exception. > > Thanks, > Ricardo (ricardoas) > > [1] - https://bugs.launchpad.net/ironic/+bug/1693788 > [2] - https://github.com/HewlettPackard/python-hpOneView/releases/tag/v4.4.0 > [3] - https://review.openstack.org/#/c/523943/ > [4] - https://review.openstack.org/#/c/524310/ > [5] - https://review.openstack.org/#/c/524599/ > [6] - https://review.openstack.org/#/c/524729/ > [7] - https://review.openstack.org/#/c/510685/ > [8] - https://review.openstack.org/#/c/524624/ > > Ricardo Araújo Santos - > www.lsd.ufcg.edu.br/~ricardo > > M.Sc in Computer Science at UFCG - www.ufcg.edu.br > Researcher and Developer at Distributed Systems Laboratory - > www.lsd.ufcg.edu.br > Paraíba - Brasil > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From colleen at gazlene.net Mon Jan 29 15:53:37 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 29 Jan 2018 16:53:37 +0100 Subject: [openstack-dev] [keystone] FFE for application credentials In-Reply-To: References: <0595c36a-5ae8-e0ec-f3f8-1e56fa5777f6@gmail.com> Message-ID: > On Thu, Jan 25, 2018 at 10:15 PM, Lance Bragstad wrote: >> Hey all, >> >> The work for application credentials [0] has been up for a while, >> reviewers are happy with it, and it is slowly making it's way through >> the gate. I propose we consider a feature freeze exception given the >> state of the gate and the frequency of rechecks/failures. >> >> Thoughts, comments, or concerns? >> >> [0] >> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/application-credentials These changes were approved on Wednesday (Jan 24). 
They are still not merged as of now (Monday, Jan 29, about 16:00 UTC) because of * tempest failures related to issues with cinder * the log server falling over * tempest timing out * merge conflicts with the system-scope patches that managed to land * hosting provider maintenance that caused zuul to fall over and jobs needing to be reenqueued and start over * unit test jobs timing out (https://bugs.launchpad.net/keystone/+bug/1746016) * zuul running out of memory and jobs needing to be reenqueued and start over As of now, the base patch in this change series is about 21st in the integrated gate queue. With any luck, there is a chance it might be merged some time tomorrow. I'd like to request that we keep the feature freeze exception open for these changes. Colleen From stdake at cisco.com Mon Jan 29 16:03:24 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Mon, 29 Jan 2018 16:03:24 +0000 Subject: [openstack-dev] [kolla] Policy regarding template customisation In-Reply-To: References: Message-ID: Agree, the “why” of this policy is stated here: https://docs.openstack.org/developer/kolla-ansible/deployment-philosophy.html Paul, I think your corrective actions sound good. Perhaps we should also reword “essential” to some other word that is more lenient. Cheers -steve From: Jeffrey Zhang Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Monday, January 29, 2018 at 7:14 AM To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [kolla] Policy regarding template customisation Thank Paul for pointing this out. for me, I prefer to consist with 2) There are thousands of configuration in OpenStack, it is hard for Kolla to add every key/value pair in playbooks. Currently, the merge_config is a more better solutions. On Mon, Jan 29, 2018 at 7:13 PM, Paul Bourke > wrote: Hi all, I'd like to revisit our policy of not templating everything in kolla-ansible's template files. This is a policy that was set in place very early on in kolla-ansible's development, but I'm concerned we haven't been very consistent with it. This leads to confusion for contributors and operators - "should I template this and submit a patch, or do I need to start using my own config files?". The docs[0] are currently clear: "The Kolla upstream community does not want to place key/value pairs in the Ansible playbook configuration options that are not essential to obtaining a functional deployment." In practice though our templates contain many options that are not necessary, and plenty of patches have merged that while very useful to operators, are not necessary to an 'out of the box' deployment. So I'd like us to revisit the questions: 1) Is kolla-ansible attempting to be a 'batteries included' tool, which caters to operators via key/value config options? 2) Or, is it to be a solid reference implementation, where any degree of customisation implies a clear 'bring your own configs' type policy. If 1), then we should potentially: * Update ours docs to remove the referenced paragraph * Look at reorganising files like globals.yml into something more maintainable. If 2), * We should make it clear to reviewers that patches templating options that are non essential should not be accepted. * Encourage patches to strip down existing config files to an absolute minimum. * Make this policy more clear in docs / templates to avoid frustration on the part of operators. Thoughts? 
Thanks, -Paul [0] https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon Jan 29 16:04:13 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 29 Jan 2018 11:04:13 -0500 Subject: [openstack-dev] [ptg] [glance] Dublin PTG planning Message-ID: We've been talking about this at the weekly glance meeting, but I forgot to put out a wider shout on the ML. The Glance planning etherpad is here: https://etherpad.openstack.org/p/glance-rocky-ptg-planning Right now it contains some excellent proposals*, but we could use some more. cheers, brian *They're all from me, so YMMV in terms of excellence. From lbragstad at gmail.com Mon Jan 29 16:09:58 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 29 Jan 2018 10:09:58 -0600 Subject: [openstack-dev] [keystone] FFE for application credentials In-Reply-To: References: <0595c36a-5ae8-e0ec-f3f8-1e56fa5777f6@gmail.com> Message-ID: <38b13b26-0280-ba3a-4611-9da0864258f4@gmail.com> +1 I agree. Thanks for the heads up, Colleen. On 01/29/2018 09:53 AM, Colleen Murphy wrote: >> On Thu, Jan 25, 2018 at 10:15 PM, Lance Bragstad wrote: >>> Hey all, >>> >>> The work for application credentials [0] has been up for a while, >>> reviewers are happy with it, and it is slowly making it's way through >>> the gate. I propose we consider a feature freeze exception given the >>> state of the gate and the frequency of rechecks/failures. >>> >>> Thoughts, comments, or concerns? >>> >>> [0] >>> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/application-credentials > These changes were approved on Wednesday (Jan 24). They are still not > merged as of now (Monday, Jan 29, about 16:00 UTC) because of > > * tempest failures related to issues with cinder > * the log server falling over > * tempest timing out > * merge conflicts with the system-scope patches that managed to land > * hosting provider maintenance that caused zuul to fall over and jobs > needing to be reenqueued and start over > * unit test jobs timing out (https://bugs.launchpad.net/keystone/+bug/1746016) > * zuul running out of memory and jobs needing to be reenqueued and start over > > As of now, the base patch in this change series is about 21st in the > integrated gate queue. With any luck, there is a chance it might be > merged some time tomorrow. > > I'd like to request that we keep the feature freeze exception open for > these changes. > > Colleen > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From gr at ham.ie  Mon Jan 29 16:15:20 2018
From: gr at ham.ie (Graham Hayes)
Date: Mon, 29 Jan 2018 16:15:20 +0000
Subject: [openstack-dev] [designate] designate-core updates
Message-ID: 

Another update to the designate-core team:

+ eandersson
- timsim
- kiall

eandersson has been a long-term reviewer and end user of designate who
has consistently performed good, detail-oriented reviews.

Unfortunately both Kiall and Tim have moved on to other areas, and as
such have not had the time to be consistent with their reviews.

I would like to thank Kiall (the project's original founder) and Tim for
the help they have provided over the years, and for taking the time to do
reviews even after they were working on other areas.

If anyone thinks that they, or someone else, would be a good core
reviewer for Designate, please let me know in reply to this email or on
IRC (mugsie on freenode).

Thanks

- Graham

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: OpenPGP digital signature
URL: 

From doug at doughellmann.com  Mon Jan 29 16:36:24 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 29 Jan 2018 11:36:24 -0500
Subject: [openstack-dev] [release][requirements][horizon] django-openstack-auth retirement
In-Reply-To: 
References: 
Message-ID: <1517243649-sup-3366@lrrr.local>

Excerpts from Akihiro Motoki's message of 2018-01-30 00:36:30 +0900:
> Hi the release team and the requirements team,
> 
> I would like to ask for advice on the django-openstack-auth (DOA) retirement.
> In the thread announcing the DOA retirement last week, I was
> advised to release a transition package which provides no python
> module and make horizon depend on it so that the transition can be
> smooth.
> http://lists.openstack.org/pipermail/openstack-dev/2018-January/thread.html#126428
> 
> To achieve this, the horizon team needs:
> * to release django-openstack-auth 4.0.0 (the current version is 3.5.0
> so 4.0.0 makes sense) https://review.openstack.org/#/c/538709/
> * to add django-openstack-auth 4.0.0 to g-r and u-c (for queens)
> * to add django-openstack-auth 4.0.0 to horizon queens RC1

I think what Jeremy was proposing in the thread you linked to was
that the new version of django-openstack-auth should depend on
Horizon, so that any projects that depend on django-openstack-auth
but that do not depend on Horizon will still have the relevant
packages installed when they install django_openstack_auth. We
would not need to update the global requirements or constraints
lists to do that.

Doug

> 
> I think there are two options in horizon queens:
> - to release the transition package of django-openstack-auth 4.0.0 as
> described above, or
> - to just document the retirement of django-openstack-auth
> 
> The requirements release is in 9 hours.
> I would like to ask for advice from the release and requirements teams.
> 
> Thanks,
> Akihiro
> 
> 2018-01-27 2:45 GMT+09:00 Jeremy Stanley :
> > On 2018-01-24 08:47:30 -0600 (-0600), Monty Taylor wrote:
> > [...]
> >> Horizon and neutron were updated to start publishing to PyPI
> >> already.
> >>
> >> https://review.openstack.org/#/c/531822/
> >>
> >> This is so that we can start working on unwinding the neutron and
> >> horizon specific versions of jobs for neutron and horizon plugins.
> >
> > Nice! I somehow missed that merging a couple of weeks back. In that
In that > > case, I suppose we could in theory do one final transitional package > > upload of DOA depending on the conflicting Horizon release if others > > think that's a good idea. > > -- > > Jeremy Stanley > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From prometheanfire at gentoo.org Mon Jan 29 16:47:19 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 29 Jan 2018 10:47:19 -0600 Subject: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared In-Reply-To: <20180129143036.GA27347@sm-xps> References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org> <20180124072947.u4dv674dv6bcczb6@gentoo.org> <20180125043227.v3mfb5u2ndeennvu@mthode.org> <20180126061238.dayud3ayid5fibzd@gentoo.org> <20180127050511.l526namzrrd6v6ue@gentoo.org> <20180128033753.gy3e2qkf562cqynm@gentoo.org> <20180129024742.wbyf725c3yi2iquy@gentoo.org> <20180129143036.GA27347@sm-xps> Message-ID: <20180129164719.a3r6dn6zu5ymeefu@gentoo.org> On 18-01-29 08:30:37, Sean McGinnis wrote: > > > > ... the > > only outstanding item is that oslo.versionedobjects seems to need > > another fix for the iso8601 bump. ... > > I took a look at the failing jobs for the oslo.versionobjects bump, and it > appears this is not directly related. > > There are failures in nova, cinder, and keystone with the new > oslo.versionedobjects. This appears to be due to a mix of UTC time handling in > these projects between their own local implementations and usage of the > timeutils inside oslo.versionedobjects. > > The right answer might be to get all of these local implementations moved out > into something like oslo.utils, but for the time being, these patches will need > to land before we can raise oslo.versionedobjects (and raise the iso8601 > version that triggered this work). > > Cinder - https://review.openstack.org/#/c/536182/2 > Nova - https://review.openstack.org/#/c/535700/3 > Keystone - https://review.openstack.org/#/c/538263/1 > > There are similar patches in other projects (I think they are all using the > same topic) that will need to land as well that don't appear to be covered in > the requirements cross jobs. > Added them as depends-on to https://review.openstack.org/538549 -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ramamani.yeleswarapu at intel.com Mon Jan 29 18:14:50 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Mon, 29 Jan 2018 18:14:50 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. This Week's Priorities (as of the weekly ironic meeting) ======================================================== Bugs that we want to land in this release: 1. ironic - Don't try to lock upfront for vif removal: https://review.openstack.org/#/c/534441/ FFEs that have been granted, need to land by Feb 2: 1. Classic drivers deprecation: - champions: rloo, stendulker - https://review.openstack.org/#/q/topic:bug/1690185+(status:open+OR+status:merged) 1.1. 
Deprecate classic drivers: https://review.openstack.org/#/c/536928/ 1.2. Switch contributor documentation to hardware types: https://review.openstack.org/#/c/537959/ 1.3. Switch the CI to hardware types: https://review.openstack.org/#/c/536875/ 2. Routed Networks support - champions: TheJulia, sambetts - https://review.openstack.org/#/q/project:openstack/networking-baremetal - https://review.openstack.org/521838 Switch from MechanismDriver to SimpleAgentMechanismDriverBase. ** - https://review.openstack.org/#/c/536792/ Use reporting_interval option from neutron - https://review.openstack.org/#/c/536040/ Flat networks use node.uuid when binding ports. ** - https://review.openstack.org/#/c/537353 Add documentation for baremetal mech ** - https://review.openstack.org/#/c/532349/7 Add support to bind type vlan networks - https://review.openstack.org/524709 Make the agent distributed using hashring and notifications - CI patches: - https://review.openstack.org/#/c/531275/ Devstack - use neutron segments (routed provider networks) - https://review.openstack.org/#/c/531637/ Wait for ironic-neutron-agent to report state - https://review.openstack.org/#/c/530117/ Devstack - Add ironic-neutron-agent - https://review.openstack.org/#/c/530409/ Add dsvm job 3. Traits: - champions: rloo, TheJulia - https://review.openstack.org/#/q/topic:bug/1722194+(status:open+OR+status:merged) 3.1. Add traits field to node notifications: https://review.openstack.org/#/c/536979/ 3.2. Fix nits found in node traits: https://review.openstack.org/#/c/537386/ 3.3. Add documentation for node traits: https://review.openstack.org/#/c/536980/ 3.4. Sort node traits in comparisons: https://review.openstack.org/#/c/538653/ 4. Rescue 4.1. Requires quick review for devstack changes. We cannot land devstack changes as the client calls did not land in Queens. 4.2. TheJuia to do so after Monday meeting. - champions: dtantsur, TheJulia - https://review.openstack.org/#/q/topic:bug/1526449+(status:open+OR+status:merged) 4.1. devstack: add support for rescue mode: https://review.openstack.org/#/c/524118/ - rest of test patches can't land since they depend on a nova-related patch 4.2. Update "standalone" job for supporting rescue mode: https://review.openstack.org/#/c/537821/ 4.3. Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews) 4.4. Follow-up for agent rescue implementation: https://review.openstack.org/#/c/538252/ 4.5. Add documentation for rescue interface: https://review.openstack.org/#/c/419606/ (needs update) 4.6. Follow-up patch for rescue extension for CoreOS: https://review.openstack.org/#/c/538429/ 4.7. Add documentation for rescue mode: https://review.openstack.org/#/c/431622/ (needs update) 5. Implementation for UEFI iSCSI boot for ILO: - champions: TheJulia, stendulker 5.1. follow up patch needed, for https://review.openstack.org/#/c/468288/ 6. deprecating python-oneviewclient from OneView interfaces - champions: dtantsur, TheJulia - https://review.openstack.org/#/q/status:merged+project:openstack/ironic+branch:master+topic:bug/1693788 - Appears to be in good shape - Reno should be updated - https://review.openstack.org/#/c/524729/11/releasenotes/notes/remove-python-oneviewclient-b1d345ef861e156e.yaml Vendor priorities ----------------- cisco-ucs: Patches in works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... 
idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5 irmc: None oneview: Remove python-oneviewclient from oneview hardware type - https://review.openstack.org/#/c/524729/ MERGED Subproject priorities --------------------- bifrost: (TheJulia): Fedora support fixes - https://review.openstack.org/#/c/471750/ ironic-inspector (or its client): (dtantsur) keystoneauth adapters https://review.openstack.org/#/c/515787/ MERGED networking-baremetal: neutron baremetal agent https://review.openstack.org/#/c/456235/ MERGED sushy and the redfish driver: (dtantsur) implement redfish sessions: https://review.openstack.org/#/c/471942/ MERGED Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - Stats (diff between 08 Jan 2018 and 15 Jan 2018) - Ironic: 216 bugs (-3) + 260 wishlist items. 1 new (-1), 156 in progress (-2), 0 critical, 33 high (-1) and 27 incomplete (-1) - Inspector: 14 bugs (-1) + 28 wishlist items. 0 new, 10 in progress, 0 critical, 2 high (-1) and 6 incomplete (+1) - Nova bugs with Ironic tag: 13. 1 new, 0 critical, 0 high - via http://dashboard-ironic.7e14.starter-us-west-2.openshiftapps.com/ - the dashboard was abruptly deleted and needs a new home :( - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ - (TheJulia) Currently WF-1, as revision is required for deprecation. 
- If provisioning network is changed, Ironic conductor does not behave
  correctly https://bugs.launchpad.net/ironic/+bug/1679260:
  Ironic conductor works correctly on changes of networks:
  https://review.openstack.org/#/c/462931/
  - (rloo) needs some direction
  - may be fixed as part of https://review.openstack.org/#/c/460564/
- IPA may not find partition created by conductor
  https://bugs.launchpad.net/ironic-lib/+bug/1739421
  - Fix proposed: https://review.openstack.org/#/c/529325/ MERGED

CI refactoring and missing test coverage
----------------------------------------
- not considered a priority, it's a 'do it always' thing
- Standalone CI tests (vsaienk0)
  - next patch to be reviewed, needed for 3rd party CI:
    https://review.openstack.org/#/c/429770/
  - localboot with partitioned image patches:
    - Ironic - add localboot partitioned image test:
      https://review.openstack.org/#/c/502886/
    - when previous are merged TODO (vsaienko)
      - Upload tinycore partitioned image to tarballs.openstack.org
      - Switch ironic to use tinyipa partitioned image by default
- Missing test coverage (all)
  - portgroups and attach/detach tempest tests:
    https://review.openstack.org/382476
  - adoption: https://review.openstack.org/#/c/344975/
    - should probably be changed to use standalone tests
  - root device hints: TODO
  - node take over
  - resource classes integration tests:
    https://review.openstack.org/#/c/443628/
  - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957)

Essential Priorities
====================

Ironic client API version negotiation (TheJulia, dtantsur)
----------------------------------------------------------
- RFE https://bugs.launchpad.net/python-ironicclient/+bug/1671145
- Nova bug https://bugs.launchpad.net/nova/+bug/1739440
- gerrit topic: https://review.openstack.org/#/q/topic:bug/1671145
- status as of 29 Jan 2018:
  - Nova bug: https://bugs.launchpad.net/nova/+bug/1739440
  - TODO:
    - easier access to versions in ironicclient
      - see https://etherpad.openstack.org/p/ironic-api-version-negotiation
      - discussion of various ways to implement it happened on the midcycle
      - dtantsur wants to have an API-SIG guideline on consuming versions in
        SDKs
        - ready for review https://review.openstack.org/532814
    - patches for ironicclient by TheJulia:
      - expose negotiated latest: https://review.openstack.org/531029 MERGED
      - accept list of versions: https://review.openstack.org/#/c/531271/ MERGED
    - establish foundation for using version negotiation in nova
      - (rloo) nova would not approve it for valid reasons, and grenade
        started working again... - nothing more for Queens. Stay tuned...
    - need to make sure that we discuss/agree with nova about how to do this

External project authentication rework (pas-ha, TheJulia)
---------------------------------------------------------
- gerrit topic: https://review.openstack.org/#/q/topic:bug/1699547
- status as of 29 Jan 2018:
  - 0 inspector patches left
    - https://review.openstack.org/#/c/515786/ MERGED
    - https://review.openstack.org/#/c/515787 MERGED
  - This is DONE!
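  - for anyone consuming the above downstream, a rough sketch of what the
    keystoneauth-adapter approach looks like from Python. Illustrative
    only: the endpoint/credential values are made up, and the exact
    keystoneauth1 kwargs (especially the version bounds) should be treated
    as assumptions:

        # requires keystoneauth1; resolves the baremetal endpoint from
        # the service catalog and negotiates an API version within the
        # given bounds
        from keystoneauth1 import adapter
        from keystoneauth1 import session
        from keystoneauth1.identity import v3

        auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                           username='admin', password='secret',
                           project_name='admin',
                           user_domain_id='default',
                           project_domain_id='default')
        sess = session.Session(auth=auth)
        ironic = adapter.Adapter(session=sess, service_type='baremetal',
                                 min_version='1.1', max_version='1.latest')
        resp = ironic.get('/v1/nodes')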
Classic drivers deprecation (dtantsur)
--------------------------------------
- spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html
- status as of 29 Jan 2018:
  - dev documentation for hardware types: TODO
  - switch documentation to hardware types:
    - need help from vendors updating their pages
  - migration of classic drivers to hardware types:
    - updating spec based on actual code: https://review.openstack.org/#/c/536298/ MERGED
      - see the commit message for explanation
    - support options for migrations: https://review.openstack.org/535772 MERGED
    - upgrade (for IPMI, SNMP and fake): https://review.openstack.org/#/c/534373/ MERGED
    - other drivers TODO
  - migration of CI to hardware types
    - switch all jobs from -_ipmitool to ipmi: https://review.openstack.org/#/c/536875/
    - switch inspector CI: https://review.openstack.org/#/c/537415/
    - clean up job playbooks: https://review.openstack.org/#/c/535896/
  - actual deprecation: https://review.openstack.org/#/c/536928/
  - there is an FFE for this; trying to get the above landed by Feb 2

Traits support planning (mgoddard, johnthetubaguy, dtantsur)
------------------------------------------------------------
- http://specs.openstack.org/openstack/ironic-specs/specs/approved/node-traits.html
- Nova patches: https://review.openstack.org/#/q/topic:bp/ironic-driver-traits+(status:open+OR+status:merged)
  - have been approved, waiting to merge
- status as of 29 Jan 2018:
  - deploy templates spec: https://review.openstack.org/504952 needs reviews
    - depends on deploy-steps spec: https://review.openstack.org/#/c/412523
  - patches for traits API
    - DB model & DB API
      - https://review.openstack.org/#/c/528238 (MERGED)
      - https://review.openstack.org/#/c/530723 (MERGED)
    - Add version to DB object
      - https://review.openstack.org/#/c/535482 (MERGED)
    - RPC objects
      - https://review.openstack.org/#/c/532268 MERGED
    - RPC API & conductor
      - https://review.openstack.org/#/c/535296 MERGED
    - API
      - https://review.openstack.org/#/c/532269 MERGED
    - API ref
      - https://review.openstack.org/#/c/536384 MERGED
    - Client
      - https://review.openstack.org/#/c/532622/ MERGED
  - There is an FFE for this; most of the code, including the client, has
    landed; trying to get the rest landed by Feb 2.

Reference architecture guide (dtantsur, sambetts)
-------------------------------------------------
- status as of 22 Jan 2018:
  - dtantsur needs volunteers to help move this forward
- list of cases from https://etherpad.openstack.org/p/ironic-queens-ptg-open-discussion
  - Admin-only provisioner
    - small and/or rare: TODO
    - large and/or frequent: TODO
  - Bare metal cloud for end users
    - smaller single-site: TODO
    - larger single-site: TODO
    - larger multi-site: TODO

High Priorities
===============

Neutron event processing (vdrok, vsaienk0, sambetts)
----------------------------------------------------
- status as of 27 Sep 2017:
  - spec at https://review.openstack.org/343684, ready for reviews, replies
    from authors
  - WIP code at https://review.openstack.org/440778

Routed network support (sambetts, vsaienk0, bfournie, hjensas)
--------------------------------------------------------------
- status as of 29 Jan 2018:
  - The first couple of patches merged ... we need some more landing before
    we have something of use. (e.g. this one to actually use the agent_db
    data https://review.openstack.org/#/c/521838/ )
  - Need reviews ... https://review.openstack.org/#/q/topic:bug/1658964+(status:open+OR+status:merged)
  - With neutron fixed (patch below), the dsvm job seems stable.
  - Fix for neutron issue https://review.openstack.org/#/c/534449/ (Merged).
  - hjensas has taken over as main contributor from sambetts
  - There are challenges with integration to Placement due to the way the
    integration was done in neutron. Neutron will create a resource provider
    for network segments in Placement, then it creates an os-aggregate in
    Nova for the segment and adds nova compute hosts to this aggregate.
    Ironic nodes cannot be added to host-aggregates. I (hjensas) had a short
    discussion with neutron devs (mlavalle) on the issue:
    http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38
    There are patches in Nova to add support for ironic nodes in
    host-aggregates:
    - https://review.openstack.org/#/c/526753/ allow compute nodes to be
      associated with host agg
    - https://review.openstack.org/#/c/529135/ (Spec)
  - Patches:
    - https://review.openstack.org/456235 Add baremetal neutron agent (Merged)
    - https://review.openstack.org/#/c/533707/ start_flag = True, only first
      time, or conf change (Merged)
    - https://review.openstack.org/521838 Switch from MechanismDriver to
      SimpleAgentMechanismDriverBase
    - https://review.openstack.org/#/c/536040/ Flat networks use node.uuid
      when binding ports.
    - https://review.openstack.org/#/c/537353 Add documentation for
      baremetal mech
    - https://review.openstack.org/#/c/532349/7 Add support to bind type
      vlan networks
    - https://review.openstack.org/524709 Make the agent distributed using
      hashring and notifications
  - CI Patches:
    - https://review.openstack.org/#/c/531275/ Devstack - use neutron
      segments (routed provider networks)
    - https://review.openstack.org/#/c/531637/ Wait for ironic-neutron-agent
      to report state
    - https://review.openstack.org/#/c/530117/ Devstack - Add
      ironic-neutron-agent
    - https://review.openstack.org/#/c/530409/ Add dsvm job
    - https://review.openstack.org/#/c/392959/ Rework Ironic devstack
      baremetal network simulation

Rescue mode (rloo, stendulker, aparnav)
---------------------------------------
- Status as of 29 Jan 2018
- spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html
- code: https://review.openstack.org/#/q/topic:bug/1526449+status:open
- ironic side:
  - All patches are up-to-date, being actively reviewed and updated
  - Tempest tests based on standalone ironic are WIP.
  - Tempest tests with nova are also WIP: https://review.openstack.org/#/c/528699/
  - Add documentation for rescue interface https://review.openstack.org/419606
  - Follow-up for agent rescue implementation https://review.openstack.org/538252
- nova side:
  - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode:
    approved for Queens; waiting for ironic part to be done first. Queens
    feature freeze is week of Jan 22. (TheJulia) Nova has indicated that
    this is deferred until Rocky.
  - To get the nova patch merged, we need:
    - release new python-ironicclient
    - update ironicclient version in upper-constraints (this patch will be
      posted automatically)
    - update ironicclient version in global-requirements (this patch needs
      to be posted manually)
  - code patch: https://review.openstack.org/#/c/416487/
  - There is an FFE for this. However, the client has been released without
    the rescue code, so this won't all land. (Can't land anything that
    needs the client code).
  - CI is needed for nova part to land
    - tiendc is working on CI

Clean up deploy interfaces (vdrok)
----------------------------------
- status as of 9 Jan 2018:
  - patch https://review.openstack.org/524433 ready for reviews

Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo)
-------------------------------------------------------
- etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking
- cleaning up/centralizing job descriptions (eg 'irrelevant-files'): DONE
- Next TODO is to convert jobs on master to proper ansible. NOT a high
  priority though.
- (pas-ha) DNM experimental patch with "devstack-tempest" as base job
  https://review.openstack.org/#/c/520167/

Graphical console interface (pas-ha, vdrok, rpioso)
---------------------------------------------------
- status as of 8 Jan 2018:
  - spec on review: https://review.openstack.org/#/c/306074/
    - there is a nova part here, which has to be approved too
  - dtantsur is worried by the absence of progress here
    - (TheJulia) I think for Rocky it might be worth making it a prime
      focus, or making it a background goal.

BIOS config framework (dtantsur, yolanda, rpioso)
-------------------------------------------------
- status as of 8 Jan 2018:
  - spec under active review: https://review.openstack.org/#/c/496481/

Ansible deploy interface (pas-ha)
---------------------------------
- spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ansible-deploy-driver.html
- status as of 22 Jan 2018:
  - code merged
  - TODO
    - CI job
      - https://review.openstack.org/529640 MERGED
      - https://review.openstack.org/#/c/529383/ MERGED - done?
    - docs: https://review.openstack.org/#/c/525501/

OpenStack Priorities
====================

Python 3.5 compatibility (Nisha, Ankit)
---------------------------------------
- Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases
  - this includes all projects, not only ironic
  - please tag all reviews with topic "goal-python35"
- TODO submit the python3 job for IPA
  - for ironic and ironic-inspector the job is enabled by disabling swift,
    as swift is still lacking py3.5 support.
  - anupn to update the python3 job to build tinyipa with python3
    - (anupn): Talked with swift folks and there is a bug opened upstream
      https://review.openstack.org/#/c/401397 for py3 support in swift. But
      this is not a priority for them.
  - Right now the patch passes all gate jobs except the agent_* drivers.
- we need to make the ironic job voting eventually. but we need to check
  that nova, glance and neutron already have voting python 3 jobs,
  otherwise they may break us.
- nova seems to have python 3 jobs voting, here are our patches: - ironic https://review.openstack.org/#/c/531398/ - ironic-inspector https://review.openstack.org/#/c/531400/ MERGED Deploying with Apache and WSGI in CI (pas-ha, vsaienk0) ------------------------------------------------------- - ironic is mostly finished - (pas-ha) needs to be rewritten for uWSGI, patches on review: - https://review.openstack.org/#/c/507011/ +A - https://review.openstack.org/#/c/507067 Needs revision - inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218 - may be delayed to after Queens, as the HA work seems to take a different direction Split away the tempest plugin (jlvillal) ---------------------------------------- - https://etherpad.openstack.org/p/ironic-tempest-plugin-migration - Current (8-Jan-2018) (jlvillal): All projects now using tempest plugin code from openstack/ironic-tempest-plugin - Need to remove plugin code from master branch of openstack/ironic and openstack/ironic-inspector - Plugin code will NOT be removed from the stable branches of openstack/ironic and openstack/ironic-inspector - (jlvillal) 3rd Party CI has had over 3 weeks to prepare for removal. We should now move forward - README, setup.cfg and docs cleanup: https://review.openstack.org/#/c/529538/ MERGED - ironic-tempest-plugin 1.0.0 released Subprojects =========== Inspector (dtantsur) -------------------- - trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :Dhttps://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202 - follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up) Bifrost (TheJulia) ------------------ - Also seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. `openstack` command does not work. - TheJulia will try to look at this this week. Drivers: -------- DRAC (rpioso, dtantsur) ~~~~~~~~~~~~~~~~~~~~~~~ - Dell Ironic CI is being rebuilt, its back and running now (10/17/2017) OneView (ricardoas, nicodemos, gmonteiro) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Re-submitting reverted patches for migration from python-oneviewclient to python-hpOneView + python-ilorest-library [MERGED] - Check weekly priorities for most import patch to review Cisco UCS (sambetts) ~~~~~~~~~~~~~~~~~~~~ - Currently rebuilding third party CI from the ground up after it bit the dust - Patches for updating the UCS python SDKs are in the works and should be posted soon ......... Until next week, --Rama [0] https://etherpad.openstack.org/p/IronicWhiteBoard -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Mon Jan 29 18:27:05 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 29 Jan 2018 12:27:05 -0600 Subject: [openstack-dev] [nova][placement] Re: VMWare's resource pool / cluster and nested resource providers In-Reply-To: References: <9ad230ba-587e-9c60-f604-e817fcebd9e4@fried.cc> Message-ID: <0298c0ed-ff9d-76e6-4040-89666105b3ae@fried.cc> We had some lively discussion in #openstack-nova today, which I'll try to summarize here. First of all, the hierarchy: controller (n-cond) / \ cluster/n-cpu cluster/n-cpu / \ / \ res. pool res. pool ... ... / \ / \ host host ... ... / \ / \ ... ... 
inst inst Important points: (1) Instances do indeed get deployed to individual hosts, BUT vCenter can and does move them around within a cluster independent of nova-isms like live migration. (2) VMWare wants the ability to specify that an instance should be deployed to a specific resource pool. (3) VMWare accounts for resources at the level of the resource pool (not host). (4) Hosts can move fluidly among resource pools. (5) Conceptually, VMWare would like you not to see or think about the 'host' layer at all. (6) It has been suggested that resource pools may be best represented via aggregates. But to satisfy (2), this would require support for doing allocation requests that specify one (e.g. porting the GET /resource_providers ?member_of= queryparam to GET /allocation_candidates, and the corresponding flavor enhancements). And doing so would mean getting past our reluctance up to this point of exposing aggregates by name/ID to users. Here are some possible models: (A) Today's model, where the cluster/n-cpu is represented as a single provider owning all resources. This requires some creative finagling of inventory fields to ensure that a resource request might actually be satisfied by a single host under this broad umbrella. (An example cited was to set VCPU's max_unit to whatever one host could provide.) It is not clear to me if/how resource pools have been represented in this model thus far, or if/how it is currently possible to (2) target an instance to a specific one. I also don't see how anything we've done with traits or aggregates would help with that aspect in this model. (B) Representing each host as a root provider, each owning its own actual inventory, each possessing a CUSTOM_RESOURCE_POOL_X trait indicating which pool it belongs to at the moment; or representing pools via aggregates as in (6). This model breaks because of (1), unless we give virt drivers some mechanism to modify allocations (e.g. via POST /allocations) without doing an actual migration. (C) Representing each resource pool as a root provider which presents the collective inventory of all its hosts. Each could possess its own unique CUSTOM_RESOURCE_POOL_X trait. Or we could possibly adapt whatever mechanism Ironic uses when it targets a particular baremetal node. Or we could use aggregates as in (6), where each aggregate is associated with just one provider. This one breaks down because we don't currently have a way for nova to know that, when an instance's resources were allocated from the provider corresponding to resource pool X, that means we should schedule the instance to (nova, n-cpu) host Y. There may be some clever solution for this involving aggregates (NOT sharing providers!), but it has not been thought through. It also entails the same "creative finagling of inventory" described in (A). (D) Using actual nested resource providers: the "cluster" is the (inventory-less) root provider, and each resource pool is a child of the cluster. This is closest to representing the real logical hierarchy, and is desirable for that reason. The drawback is that you then MUST use some mechanism to ensure allocations are never spread across pools. If your request *always* targets a specific resource pool, that works. Otherwise, you would have to use a numbered request group, as described below. It also entails the same "creative finagling of inventory" described in (A). (E) Take (D) a step further by adding each 'host' as a child of its respective resource pool. 
No "creative finagling", but same "moving allocations" issue as (B). I'm sure I've missed/misrepresented things. Please correct and refine as necessary. Thanks, Eric On 01/27/2018 12:23 PM, Eric Fried wrote: > Rado- > >     [+dev ML.  We're getting pretty general here; maybe others will get > some use out of this.] > >> is there a way to make the scheduler allocate only from one specific RP > >     "...one specific RP" - is that Resource Provider or Resource Pool? > >     And are we talking about scheduling an instance to a specific > compute node, or are we talking about making sure that all the requested > resources are pulled from the same compute node (but it could be any one > of several compute nodes)?  Or justlimiting the scheduler to any node in > a specific resource pool? > >     To make sure I'm fully grasping the VMWare-specific > ratios/relationships between resource pools and compute nodes,I have > been assuming: > > controller 1:many compute "host"(where n-cpu runs) > compute "host"  1:many resource pool > resource pool 1:many compute "node" (where instances can be scheduled) > compute "node" 1:many instance > >     (I don't know if this "host" vs"node" terminology is correct, but > I'm going to keep pretending it is for the purposes of this note.) > >     In particular, if that last line is true, then you do *not* want > multiple compute "nodes" in the same provider tree. > >> if no custom trait is specified in the request? > >     I am not aware of anything current or planned that will allow you to > specify an aggregate you want to deploy from; so the only way I'm aware > of that you could pin a request to a resource pool is to create a custom > trait for that resource pool, tag all compute nodes in the pool with > that trait, and specify that trait in your flavor.  This way you don't > use nested-ness at all.  And in this model, there's also no need to > create resource providers corresponding to resource pools - their > solemanifestation is via traits. > >     (Bonus: this model will work with what we've got merged in Queens - > we didn't quiiite finish the piece of NRP that makes them work for > allocation candidates, but we did merge trait support.  We're also > *mostly* there with aggregates, but I wouldn't want to rely on them > working perfectly and we're not claiming full support for them.) > >     To be explicit, in the model I'm suggesting, your compute "host", > within update_provider_tree, would create new_root()s for each compute > "node".  So the "tree" isn't really a tree - it's a flat list of > computes, of which one happens to correspond to the `nodename` and > represents the compute "host".  (I assume deploys can happen to the > compute "host" just like they can to a compute "node"?  If not, just > give that guy no inventory and he'll be avoided.)  It would then > update_traits(node, ['CUSTOM_RPOOL_X']) for each.  It would also > update_inventory() for each as appropriate. > >     Now on your deploys, to get scheduled to a particular resource pool, > you would have to specify required=CUSTOM_RPOOL_X in your flavor. > >     That's it.  You never use new_child().  There are no providers > corresponding to pools.  There are no aggregates. > >     Are we making progress, or am I confused/confusing? > > Eric > > > On 01/27/2018 01:50 AM, Radoslav Gerganov wrote: >> >> +Chris >> >> >> Hi Eric, >> >> Thanks a lot for sending this.  I must admit that I am still trying to >> catch up with how the scheduler (will) work when there are nested RPs, >> traits, etc.  
I thought mostly about the case when we use a custom >> trait to force allocations only from one resource pool.  However, if >> no trait is specified then we can end up in the situation that you >> describe (allocating different resources from different resource >> pools) and this is not what we want.  If we go with the model that you >> propose, is there a way to make the scheduler allocate only from one >> specific RP if no custom trait is specified in the request? >> >> Thanks, >> >> Rado >> >> >> ------------------------------------------------------------------------ >> *From:* Eric Fried >> *Sent:* Friday, January 26, 2018 10:20 PM >> *To:* Radoslav Gerganov >> *Cc:* Jay Pipes >> *Subject:* VMWare's resource pool / cluster and nested resource providers >>   >> Rado- >> >>         It occurred to me just now that the model you described to me >> [1] isn't >> going to work, unless there's something I really misunderstood. >> >>         The problem is that the placement API will think it can allocate >> resources from anywhere in the tree for a given allocation request >> (unless you always use a single numbered request group [2] in your >> flavors, which doesn't sound like a clean plan). >> >>         So if you have *any* model where multiple compute nodes reside >> in the >> same provider tree, and I come along with a request for say >> VCPU:1,MEMORY_MB:2048,DISK_GB:512, placement will happily give you a >> candidate with the VCPU from compute10, the memory from compute5, and >> the disk from compute7.  I'm only guessing that this isn't a viable way >> to boot an instance. >> >>         I go back to my earlier suggestion: I think you need to create the >> compute nodes as root providers in your ProviderTree, and find some >> other way to mark the resource pool associations.  You could do it with >> custom traits (CUSTOM_RESOURCE_POOL_X, ..._Y, etc.); or you could do it >> with aggregates (an aggregate maps to a resource pool; associate all the >> compute providers in a given pool with its aggregate uuid). >> >>                         Thanks, >>                         Eric >> >> [1] >> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-26.log.html#t2018-01-26T14:40:44 >> [2] >> https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html#numbered-request-groups > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gjayavelu at vmware.com Mon Jan 29 18:56:04 2018 From: gjayavelu at vmware.com (Giridhar Jayavelu) Date: Mon, 29 Jan 2018 18:56:04 +0000 Subject: [openstack-dev] [nova][placement] Re: VMWare's resource pool / cluster and nested resource providers In-Reply-To: <0298c0ed-ff9d-76e6-4040-89666105b3ae@fried.cc> References: <9ad230ba-587e-9c60-f604-e817fcebd9e4@fried.cc> <0298c0ed-ff9d-76e6-4040-89666105b3ae@fried.cc> Message-ID: <1E91A761-0324-474C-913C-E3A3C70FCC5B@vmware.com> Eric, Response inline. On 1/29/18, 10:27 AM, "Eric Fried" wrote: >We had some lively discussion in #openstack-nova today, which I'll try >to summarize here. > >First of all, the hierarchy: > > controller (n-cond) > / \ > cluster/n-cpu cluster/n-cpu > / \ / \ > res. pool res. pool ... ... > / \ / \ > host host ... ... > / \ / \ >... ... 
inst inst > >Important points: > >(1) Instances do indeed get deployed to individual hosts, BUT vCenter >can and does move them around within a cluster independent of nova-isms >like live migration. > >(2) VMWare wants the ability to specify that an instance should be >deployed to a specific resource pool. > >(3) VMWare accounts for resources at the level of the resource pool (not >host). > >(4) Hosts can move fluidly among resource pools. > >(5) Conceptually, VMWare would like you not to see or think about the >'host' layer at all. > >(6) It has been suggested that resource pools may be best represented >via aggregates. But to satisfy (2), this would require support for >doing allocation requests that specify one (e.g. porting the GET >/resource_providers ?member_of= queryparam to GET >/allocation_candidates, and the corresponding flavor enhancements). And >doing so would mean getting past our reluctance up to this point of >exposing aggregates by name/ID to users. > >Here are some possible models: > >(A) Today's model, where the cluster/n-cpu is represented as a single >provider owning all resources. This requires some creative finagling of >inventory fields to ensure that a resource request might actually be >satisfied by a single host under this broad umbrella. (An example cited >was to set VCPU's max_unit to whatever one host could provide.) It is >not clear to me if/how resource pools have been represented in this >model thus far, or if/how it is currently possible to (2) target an >instance to a specific one. I also don't see how anything we've done >with traits or aggregates would help with that aspect in this model. > >(B) Representing each host as a root provider, each owning its own >actual inventory, each possessing a CUSTOM_RESOURCE_POOL_X trait >indicating which pool it belongs to at the moment; or representing pools >via aggregates as in (6). This model breaks because of (1), unless we >give virt drivers some mechanism to modify allocations (e.g. via POST >/allocations) without doing an actual migration. > >(C) Representing each resource pool as a root provider which presents >the collective inventory of all its hosts. Each could possess its own >unique CUSTOM_RESOURCE_POOL_X trait. Or we could possibly adapt >whatever mechanism Ironic uses when it targets a particular baremetal >node. Or we could use aggregates as in (6), where each aggregate is >associated with just one provider. This one breaks down because we >don't currently have a way for nova to know that, when an instance's >resources were allocated from the provider corresponding to resource >pool X, that means we should schedule the instance to (nova, n-cpu) host >Y. There may be some clever solution for this involving aggregates (NOT >sharing providers!), but it has not been thought through. It also >entails the same "creative finagling of inventory" described in (A). > >(D) Using actual nested resource providers: the "cluster" is the >(inventory-less) root provider, and each resource pool is a child of the >cluster. This is closest to representing the real logical hierarchy, >and is desirable for that reason. The drawback is that you then MUST >use some mechanism to ensure allocations are never spread across pools. >If your request *always* targets a specific resource pool, that works. >Otherwise, you would have to use a numbered request group, as described >below. It also entails the same "creative finagling of inventory" >described in (A). I think nested resource provider is better option for another reason. 
Every resource pool could have its own limits, so it is important to track
the allocations/usage and ensure that the scheduler can raise an error if
there are insufficient resources in the vCenter resource pool. NOTE: the
vCenter cluster, which is the compute node, might have more capacity left,
but the resource pool limit could still prevent placing a VM on that pool.
And yes, the request would always target a specific resource pool.

>
>(E) Take (D) a step further by adding each 'host' as a child of its
>respective resource pool.  No "creative finagling", but same "moving
>allocations" issue as (B).

This might not work because a resource pool is a logical construct.
Resource pools may not exist under a vCenter cluster at all; VMs can be
placed on a vCenter cluster with or without a resource pool.

>
>I'm sure I've missed/misrepresented things.  Please correct and refine
>as necessary.
>
>Thanks,
>Eric

Thanks,
Giri

>
>On 01/27/2018 12:23 PM, Eric Fried wrote:
>> Rado-
>>
>>     [+dev ML.  We're getting pretty general here; maybe others will get
>> some use out of this.]
>>
>>> is there a way to make the scheduler allocate only from one specific RP
>>
>>     "...one specific RP" - is that Resource Provider or Resource Pool?
>>
>>     And are we talking about scheduling an instance to a specific
>> compute node, or are we talking about making sure that all the requested
>> resources are pulled from the same compute node (but it could be any one
>> of several compute nodes)?  Or justlimiting the scheduler to any node in
>> a specific resource pool?
>>
>>     To make sure I'm fully grasping the VMWare-specific
>> ratios/relationships between resource pools and compute nodes,I have
>> been assuming:
>>
>> controller 1:many compute "host"(where n-cpu runs)
>> compute "host"  1:many resource pool
>> resource pool 1:many compute "node" (where instances can be scheduled)
>> compute "node" 1:many instance
>>
>>     (I don't know if this "host" vs"node" terminology is correct, but
>> I'm going to keep pretending it is for the purposes of this note.)
>>
>>     In particular, if that last line is true, then you do *not* want
>> multiple compute "nodes" in the same provider tree.
>>
>>> if no custom trait is specified in the request?
>>
>>     I am not aware of anything current or planned that will allow you to
>> specify an aggregate you want to deploy from; so the only way I'm aware
>> of that you could pin a request to a resource pool is to create a custom
>> trait for that resource pool, tag all compute nodes in the pool with
>> that trait, and specify that trait in your flavor.  This way you don't
>> use nested-ness at all.  And in this model, there's also no need to
>> create resource providers corresponding to resource pools - their
>> solemanifestation is via traits.
>>
>>     (Bonus: this model will work with what we've got merged in Queens -
>> we didn't quiiite finish the piece of NRP that makes them work for
>> allocation candidates, but we did merge trait support.  We're also
>> *mostly* there with aggregates, but I wouldn't want to rely on them
>> working perfectly and we're not claiming full support for them.)
>>
>>     To be explicit, in the model I'm suggesting, your compute "host",
>> within update_provider_tree, would create new_root()s for each compute
>> "node".  So the "tree" isn't really a tree - it's a flat list of
>> computes, of which one happens to correspond to the `nodename` and
>> represents the compute "host".  (I assume deploys can happen to the
>> compute "host" just like they can to a compute "node"?
If not, just >> give that guy no inventory and he'll be avoided.) It would then >> update_traits(node, ['CUSTOM_RPOOL_X']) for each. It would also >> update_inventory() for each as appropriate. >> >> Now on your deploys, to get scheduled to a particular resource pool, >> you would have to specify required=CUSTOM_RPOOL_X in your flavor. >> >> That's it. You never use new_child(). There are no providers >> corresponding to pools. There are no aggregates. >> >> Are we making progress, or am I confused/confusing? >> >> Eric >> >> >> On 01/27/2018 01:50 AM, Radoslav Gerganov wrote: >>> >>> +Chris >>> >>> >>> Hi Eric, >>> >>> Thanks a lot for sending this. I must admit that I am still trying to >>> catch up with how the scheduler (will) work when there are nested RPs, >>> traits, etc. I thought mostly about the case when we use a custom >>> trait to force allocations only from one resource pool. However, if >>> no trait is specified then we can end up in the situation that you >>> describe (allocating different resources from different resource >>> pools) and this is not what we want. If we go with the model that you >>> propose, is there a way to make the scheduler allocate only from one >>> specific RP if no custom trait is specified in the request? >>> >>> Thanks, >>> >>> Rado >>> >>> >>> ------------------------------------------------------------------------ >>> *From:* Eric Fried >>> *Sent:* Friday, January 26, 2018 10:20 PM >>> *To:* Radoslav Gerganov >>> *Cc:* Jay Pipes >>> *Subject:* VMWare's resource pool / cluster and nested resource providers >>> >>> Rado- >>> >>> It occurred to me just now that the model you described to me >>> [1] isn't >>> going to work, unless there's something I really misunderstood. >>> >>> The problem is that the placement API will think it can allocate >>> resources from anywhere in the tree for a given allocation request >>> (unless you always use a single numbered request group [2] in your >>> flavors, which doesn't sound like a clean plan). >>> >>> So if you have *any* model where multiple compute nodes reside >>> in the >>> same provider tree, and I come along with a request for say >>> VCPU:1,MEMORY_MB:2048,DISK_GB:512, placement will happily give you a >>> candidate with the VCPU from compute10, the memory from compute5, and >>> the disk from compute7. I'm only guessing that this isn't a viable way >>> to boot an instance. >>> >>> I go back to my earlier suggestion: I think you need to create the >>> compute nodes as root providers in your ProviderTree, and find some >>> other way to mark the resource pool associations. You could do it with >>> custom traits (CUSTOM_RESOURCE_POOL_X, ..._Y, etc.); or you could do it >>> with aggregates (an aggregate maps to a resource pool; associate all the >>> compute providers in a given pool with its aggregate uuid). 
>>> >>> Thanks, >>> Eric >>> >>> [1] >>> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-26.log.html#t2018-01-26T14:40:44 >>> [2] >>> https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html#numbered-request-groups >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ildiko.vancsa at gmail.com Mon Jan 29 19:02:03 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 29 Jan 2018 20:02:03 +0100 Subject: [openstack-dev] [os-upstream-institute] Meeting reminder Message-ID: <2864A227-2969-4AD2-97FD-5723765941AA@gmail.com> Hi Training Team, Friendly reminder that we have our next meeting in an hour (2000 UTC) on #openstack-meeting-3. You can find the agenda here: https://etherpad.openstack.org/p/openstack-upstream-institute-meetings See you soon! :) Thanks, Ildikó (IRC: ildikov) From mgagne at calavera.ca Mon Jan 29 19:05:32 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Mon, 29 Jan 2018 14:05:32 -0500 Subject: [openstack-dev] [nova]Nova rescue inject pasword failed In-Reply-To: References: Message-ID: On Mon, Jan 29, 2018 at 4:57 AM, Matthew Booth wrote: > On 29 January 2018 at 09:27, 李杰 wrote: >> >> Hi,all: >> I want to access to my instance under rescue state using >> temporary password which nova rescue gave me.But this password doesn't work. >> Can I ask how this password is injected to instance? I can't find any >> specification how is it done.I saw the code about rescue,But it displays the >> password has inject. >> I use the libvirt as the virt driver. The web said to >> set"[libvirt]inject_password=true",but it didn't work. Is it a bug?Can you >> give me some advice?Help in troubleshooting this issue will be appreciated. > > > Ideally your rescue image will support cloud-init and you would use a config > disk. > > But to reiterate, ideally your rescue image would support cloud-init and you > would use a config disk. > > Matt > -- > Matthew Booth > Red Hat OpenStack Engineer, Compute DFG > Just so you know, cloud-init does not read/support the admin_pass injected in the config-drive: https://bugs.launchpad.net/cloud-init/+bug/1236883 Known bug for years and no fix has been approved yet for various non-technical reasons. -- Mathieu From rosmaita.fossdev at gmail.com Mon Jan 29 19:18:18 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 29 Jan 2018 14:18:18 -0500 Subject: [openstack-dev] [glance] PTL non-candidacy Message-ID: I've been PTL of Glance through some rocky times, but have decided not to stand for election for the Rocky cycle. My plan is to stick around, attend to my duties as a glance core contributor, and support my successor in whatever way I can to make for a smooth transition. After three consecutive cycles of me, it's time for some new ideas and new approaches. 
For anyone out there who hasn't contributed to Glance yet, the Glance
community is friendly and welcoming, and we've got a backlog of
"untargeted" specs ready for you to pick up.  Weekly meetings are
14:00 UTC on Thursdays in #openstack-meeting-4.

cheers,
brian

From sean.mcginnis at gmx.com  Mon Jan 29 19:27:25 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Mon, 29 Jan 2018 13:27:25 -0600
Subject: [openstack-dev] [glance] PTL non-candidacy
In-Reply-To: 
References: 
Message-ID: <20180129192725.GA10349@sm-xps>

On Mon, Jan 29, 2018 at 02:18:18PM -0500, Brian Rosmaita wrote:
> I've been PTL of Glance through some rocky times, but have decided not
> to stand for election for the Rocky cycle.  My plan is to stick
> around, attend to my duties as a glance core contributor, and support
> my successor in whatever way I can to make for a smooth transition.
> After three consecutive cycles of me, it's time for some new ideas and
> new approaches.
> 
> For anyone out there who hasn't contributed to Glance yet, the Glance
> community is friendly and welcoming, and we've got a backlog of
> "untargeted" specs ready for you to pick up.  Weekly meetings are
> 14:00 UTC on Thursdays in #openstack-meeting-4.
> 
> cheers,
> brian
> 

Thanks for all your hard work as Glance PTL Brian. Great to hear you are not
going anywhere.

From akekane at redhat.com  Mon Jan 29 19:36:31 2018
From: akekane at redhat.com (Abhishek Kekane)
Date: Tue, 30 Jan 2018 01:06:31 +0530
Subject: [openstack-dev] [glance] PTL non-candidacy
In-Reply-To: 
References: 
Message-ID: 

Thanks so much for your remarkable work for Glance over the past couple of
cycles. You have introduced very good processes, like the weekly
priorities, which helped community members stay focused on what matters.
It has been my pleasure to work with you, and I still have a lot to learn
from you.

Wish you all the best, Brian!

Cheers,

Abhishek

On 30-Jan-2018 00:49, "Brian Rosmaita"  wrote:

> I've been PTL of Glance through some rocky times, but have decided not
> to stand for election for the Rocky cycle.  My plan is to stick
> around, attend to my duties as a glance core contributor, and support
> my successor in whatever way I can to make for a smooth transition.
> After three consecutive cycles of me, it's time for some new ideas and
> new approaches.
>
> For anyone out there who hasn't contributed to Glance yet, the Glance
> community is friendly and welcoming, and we've got a backlog of
> "untargeted" specs ready for you to pick up.  Weekly meetings are
> 14:00 UTC on Thursdays in #openstack-meeting-4.
>
> cheers,
> brian
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kevin at cloudnull.com  Mon Jan 29 19:39:21 2018
From: kevin at cloudnull.com (Carter, Kevin)
Date: Mon, 29 Jan 2018 13:39:21 -0600
Subject: [openstack-dev] [glance] PTL non-candidacy
In-Reply-To: <20180129192725.GA10349@sm-xps>
References: 
	<20180129192725.GA10349@sm-xps>
Message-ID: 

++ Thanks for leadership within Glance and everything else you've done in
the community!
--

Kevin Carter
IRC: Cloudnull

On Mon, Jan 29, 2018 at 1:27 PM, Sean McGinnis  wrote:

> On Mon, Jan 29, 2018 at 02:18:18PM -0500, Brian Rosmaita wrote:
> > I've been PTL of Glance through some rocky times, but have decided not
> > to stand for election for the Rocky cycle.  My plan is to stick
> > around, attend to my duties as a glance core contributor, and support
> > my successor in whatever way I can to make for a smooth transition.
> > After three consecutive cycles of me, it's time for some new ideas and
> > new approaches.
> >
> > For anyone out there who hasn't contributed to Glance yet, the Glance
> > community is friendly and welcoming, and we've got a backlog of
> > "untargeted" specs ready for you to pick up.  Weekly meetings are
> > 14:00 UTC on Thursdays in #openstack-meeting-4.
> >
> > cheers,
> > brian
> >
>
> Thanks for all your hard work as Glance PTL Brian. Great to hear you are
> not
> going anywhere.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tenobreg at redhat.com  Mon Jan 29 19:45:45 2018
From: tenobreg at redhat.com (Telles Nobrega)
Date: Mon, 29 Jan 2018 19:45:45 +0000
Subject: [openstack-dev] [sahara] PTL Nomination
Message-ID: 

Hi Saharans, I would like to nominate myself to act as PTL for Sahara
during the Rocky cycle.

I've been acting as PTL for the last two cycles (Pike and Queens) and I
believe that even though we lost a lot of resources we were able to
improve Sahara considerably in the last year.

Moving forward I aim to continue working in the direction of making Sahara
more user-oriented.

* Bug triaging:

We need to start testing and cleaning the bug list; sadly this queue did
not decrease significantly and we need to keep working on it.

* Documentation:

We already had improvements this last cycle but we need to keep going, and
for that we are already planning a documentation day pre-PTG and during
PTG.

* Final APIv2 work

We need to finally release APIv2 in Rocky. We released APIv2 as
experimental in Queens and will work to have it as the main API in Rocky.

In the overall picture we need to continue improving user experience and
asking what is necessary to make Sahara more usable, so we can have Sahara
in more and more OpenStack deployments.
--

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat Brasil

Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo

tenobreg at redhat.com

TRIED. TESTED. TRUSTED.
Red Hat is recognized among the best companies to work for in Brazil by
Great Place to Work.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lajos.katona at ericsson.com  Mon Jan 29 20:13:00 2018
From: lajos.katona at ericsson.com (Lajos Katona)
Date: Mon, 29 Jan 2018 21:13:00 +0100
Subject: [openstack-dev] [horizon] FFE Request for Queens
Message-ID: <483d507b-4c81-1058-f498-03dc9b2495be@ericsson.com>

Hi,

I would like to ask for FFE on the neutron-trunk-ui blueprint to let the
admin panel for trunks be accepted for Queens.
Based on discussion on IRC (http://eavesdrop.openstack.org/irclogs/%23openstack-horizon/%23openstack-horizon.2018-01-29.log.html#t2018-01-29T14:36:58 ) the remaining part of the blueprint neutron-trunk-ui (https://blueprints.launchpad.net/horizon/+spec/neutron-trunk-ui) should be handled separately: * The admin panel (https://review.openstack.org/516657) should be part of the Queens release, as now that is not dependent on the ngDetails patches. With this the blueprint should be set to implemented. * The links (https://review.openstack.org/524619) for the ports details (trunk parent and subports) from the trunk panel should be handled in a bug report: o https://bugs.launchpad.net/horizon/+bug/1746082 Regards Lajos Katona -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Mon Jan 29 20:44:20 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 29 Jan 2018 14:44:20 -0600 Subject: [openstack-dev] [ALL][requirements] Tonight Hell freezes over! In-Reply-To: <20180129024742.wbyf725c3yi2iquy@gentoo.org> References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org> <20180124072947.u4dv674dv6bcczb6@gentoo.org> <20180125043227.v3mfb5u2ndeennvu@mthode.org> <20180126061238.dayud3ayid5fibzd@gentoo.org> <20180127050511.l526namzrrd6v6ue@gentoo.org> <20180128033753.gy3e2qkf562cqynm@gentoo.org> <20180129024742.wbyf725c3yi2iquy@gentoo.org> Message-ID: <20180129204420.v74trkp2bickgzjv@gentoo.org> On 18-01-28 20:47:42, Matthew Thode wrote: > On 18-01-27 21:37:53, Matthew Thode wrote: > > On 18-01-26 23:05:11, Matthew Thode wrote: > > > On 18-01-26 00:12:38, Matthew Thode wrote: > > > > On 18-01-24 22:32:27, Matthew Thode wrote: > > > > > On 18-01-24 01:29:47, Matthew Thode wrote: > > > > > > On 18-01-23 01:23:50, Matthew Thode wrote: > > > > > > > Requirements is freezing Friday at 23:59:59 UTC so any last > > > > > > > global-requrements updates that need to get in need to get in now. > > > > > > > > > > > > > > I'm afraid that my condition has left me cold to your pleas of mercy. > > > > > > > > > > > > > > > > > > > Just your daily reminder that the freeze will happen in about 3 days > > > > > > time. Reviews seem to be winding down for requirements now (which is > > > > > > a good sign this release will be chilled to perfection). > > > > > > > > > > > > > > > > There's still a couple of things that may cause bumps for iso8601 and > > > > > oslo.versionedobjects but those are the main things. The msgpack change > > > > > is also rolling out (thanks dirk :D). Even with all these changes > > > > > though, in this universe, there's only one absolute. Everything freezes! > > > > > > > > > > https://review.openstack.org/535520 (oslo.serialization) > > > > > > > > > > > > > Last day, gate is sad and behind, but not my fault you waited til the > > > > last minute :P (see my first comment). The Iceman Cometh! > > > > > > > > > > All right everyone, Chill. Looks like we have another couple days to > > > get stuff in for gate's slowness. The new deadline is 23:59:59 UTC > > > 29-01-2018. > > > > > > > It's a cold town. The current status is as follows. It looks like the > > gate is clearing up. oslo.versionedobjects-1.31.2 and iso8601 will be > > in a gr bump but that's it. monasca-tempest-plugin is not going to get > > in by freeze at this rate (has fixes needed in the review). There was > > some stuff needed to get nova-client/osc to work together again, but > > mriedem seems to have it in hand (and no gr updates it looks like). 
> > > > Allow me to break the Ice. My name is Freeze. Learn it well for it's > the chilling sound of your doom! Can you feel it coming? The icy cold > of space! It's less than 24 hours til the freeze fomrally happens, the > only outstanding item is that oslo.versionedobjects seems to need > another fix for the iso8601 bump. osc-placement won't be added to > requirements at this point as there has been no responce on their > review. > > https://review.openstack.org/538515 > > python-vitrageclient looks like it'll make it in if gate doesn't break. > msgpack may also be late, but we'll see (just workflow'd). > openstacksdk may need a gr bump, I'm waiting on a response from mordred > > https://review.openstack.org/538695 > Tonight Hell freezes over! At just about 3 hours til your frozen doom I thought I'd send a final update. Since gate is still being slow the current plan is to stop accepting any new reviews to requirements (procedural -W) at the cutoff time. At that point we'll work on getting the existing approved items through gate, then work on branching. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Mon Jan 29 21:27:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 29 Jan 2018 16:27:09 -0500 Subject: [openstack-dev] [glance] PTL non-candidacy In-Reply-To: References: Message-ID: <1517260987-sup-811@lrrr.local> Excerpts from Brian Rosmaita's message of 2018-01-29 14:18:18 -0500: > I've been PTL of Glance through some rocky times, but have decided not > to stand for election for the Rocky cycle. My plan is to stick > around, attend to my duties as a glance core contributor, and support > my successor in whatever way I can to make for a smooth transition. > After three consecutive cycles of me, it's time for some new ideas and > new approaches. > > For anyone out there who hasn't contributed to Glance yet, the Glance > community is friendly and welcoming, and we've got a backlog of > "untargeted" specs ready for you to pick up. Weekly meetings are > 14:00 UTC on Thursdays in #openstack-meeting-4. > > cheers, > brian > Thank you for carrying the mantle for so long, Brian. I know it hasn't necessarily been easy but you've dealt with the challenges well and helped the team move to a healthier state than it was in when you started in the role. Doug From jeremyfreudberg at gmail.com Mon Jan 29 21:31:44 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Mon, 29 Jan 2018 16:31:44 -0500 Subject: [openstack-dev] [sahara] PTL Nomination In-Reply-To: References: Message-ID: Thanks for volunteering again, Telles. The project is in good hands under your leadership. On Mon, Jan 29, 2018 at 2:45 PM, Telles Nobrega wrote: > Hi Saharans, I would like to nominate myself to act as PTL for Sahara during > the Rocky cycle. > > I've been acting as PTL for the last two cycles (Pike and Queens) and I > believe that even though we lost a lot of resources we were able to improve > Sahara considerably in the last year. > > Moving forward I aim to continue working on the direction of making Sahara > more user oriented. > > * Bug triaging: > > We need to start testing and cleaning the bug list and sadly this queue did > not decrease significantly and we need to keep working on it. 
>
> * Documentation:
>
> We already had improvements this last cycle but we need to keep going and
> for that we are already planning a documentation day pre-PTG and during PTG.
>
> * Final APIv2 work
>
> We need to finally release APIv2 in Rocky. We released APIv2 as experimental
> in Queens and will work to have it as main API in Rocky.
>
> In the overall picture we need to continue improving user experience and
> asking what is necessary to make Sahara more usable so we can have Sahara in
> more and more OpenStack deployments.

+1, I could not have said it better myself. This is an admirable focus to
have for the coming cycle.

> --
>
> TELLES NOBREGA
>
> SOFTWARE ENGINEER
>
> Red Hat Brasil
>
> Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo
>
> tenobreg at redhat.com
>
> TRIED. TESTED. TRUSTED.
> Red Hat is recognized among the best companies to work for in Brazil by
> Great Place to Work.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From pgrist at redhat.com  Mon Jan 29 21:51:03 2018
From: pgrist at redhat.com (Paul Grist)
Date: Mon, 29 Jan 2018 16:51:03 -0500
Subject: [openstack-dev] [glance] PTL non-candidacy
In-Reply-To: 
References: 
Message-ID: 

On Mon, Jan 29, 2018 at 2:18 PM, Brian Rosmaita  wrote:

> I've been PTL of Glance through some rocky times, but have decided not
> to stand for election for the Rocky cycle.  My plan is to stick
> around, attend to my duties as a glance core contributor, and support
> my successor in whatever way I can to make for a smooth transition.
> After three consecutive cycles of me, it's time for some new ideas and
> new approaches.
>
> For anyone out there who hasn't contributed to Glance yet, the Glance
> community is friendly and welcoming, and we've got a backlog of
> "untargeted" specs ready for you to pick up.  Weekly meetings are
> 14:00 UTC on Thursdays in #openstack-meeting-4.
>
> cheers,
> brian
>

Many thanks for all the work you've done for glance and the community. Your
leadership and commitment were remarkable at the most challenging of times
this past year.

Glad to hear you are staying with Glance!

Paul

>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From prometheanfire at gentoo.org  Mon Jan 29 23:59:59 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Mon, 29 Jan 2018 17:59:59 -0600
Subject: [openstack-dev] [ALL][requirements] Prepare for a bitter harvest, winter has come at last!
In-Reply-To: <20180129204420.v74trkp2bickgzjv@gentoo.org> References: <20180123072350.2jby5zoeeyzaryv5@gentoo.org> <20180124072947.u4dv674dv6bcczb6@gentoo.org> <20180125043227.v3mfb5u2ndeennvu@mthode.org> <20180126061238.dayud3ayid5fibzd@gentoo.org> <20180127050511.l526namzrrd6v6ue@gentoo.org> <20180128033753.gy3e2qkf562cqynm@gentoo.org> <20180129024742.wbyf725c3yi2iquy@gentoo.org> <20180129204420.v74trkp2bickgzjv@gentoo.org> Message-ID: <20180129235959.glcowbt6vn4yo5r7@gentoo.org> On 18-01-29 14:44:20, Matthew Thode wrote: > On 18-01-28 20:47:42, Matthew Thode wrote: > > On 18-01-27 21:37:53, Matthew Thode wrote: > > > On 18-01-26 23:05:11, Matthew Thode wrote: > > > > On 18-01-26 00:12:38, Matthew Thode wrote: > > > > > On 18-01-24 22:32:27, Matthew Thode wrote: > > > > > > On 18-01-24 01:29:47, Matthew Thode wrote: > > > > > > > On 18-01-23 01:23:50, Matthew Thode wrote: > > > > > > > > Requirements is freezing Friday at 23:59:59 UTC so any last > > > > > > > > global-requrements updates that need to get in need to get in now. > > > > > > > > > > > > > > > > I'm afraid that my condition has left me cold to your pleas of mercy. > > > > > > > > > > > > > > > > > > > > > > Just your daily reminder that the freeze will happen in about 3 days > > > > > > > time. Reviews seem to be winding down for requirements now (which is > > > > > > > a good sign this release will be chilled to perfection). > > > > > > > > > > > > > > > > > > > There's still a couple of things that may cause bumps for iso8601 and > > > > > > oslo.versionedobjects but those are the main things. The msgpack change > > > > > > is also rolling out (thanks dirk :D). Even with all these changes > > > > > > though, in this universe, there's only one absolute. Everything freezes! > > > > > > > > > > > > https://review.openstack.org/535520 (oslo.serialization) > > > > > > > > > > > > > > > > Last day, gate is sad and behind, but not my fault you waited til the > > > > > last minute :P (see my first comment). The Iceman Cometh! > > > > > > > > > > > > > All right everyone, Chill. Looks like we have another couple days to > > > > get stuff in for gate's slowness. The new deadline is 23:59:59 UTC > > > > 29-01-2018. > > > > > > > > > > It's a cold town. The current status is as follows. It looks like the > > > gate is clearing up. oslo.versionedobjects-1.31.2 and iso8601 will be > > > in a gr bump but that's it. monasca-tempest-plugin is not going to get > > > in by freeze at this rate (has fixes needed in the review). There was > > > some stuff needed to get nova-client/osc to work together again, but > > > mriedem seems to have it in hand (and no gr updates it looks like). > > > > > > > Allow me to break the Ice. My name is Freeze. Learn it well for it's > > the chilling sound of your doom! Can you feel it coming? The icy cold > > of space! It's less than 24 hours til the freeze fomrally happens, the > > only outstanding item is that oslo.versionedobjects seems to need > > another fix for the iso8601 bump. osc-placement won't be added to > > requirements at this point as there has been no responce on their > > review. > > > > https://review.openstack.org/538515 > > > > python-vitrageclient looks like it'll make it in if gate doesn't break. > > msgpack may also be late, but we'll see (just workflow'd). > > openstacksdk may need a gr bump, I'm waiting on a response from mordred > > > > https://review.openstack.org/538695 > > > > Tonight Hell freezes over! 
>
> At just about 3 hours til your frozen doom I thought I'd send a final
> update.  Since gate is still being slow the current plan is to stop
> accepting any new reviews to requirements (procedural -W) at the cutoff
> time.  At that point we'll work on getting the existing approved items
> through gate, then work on branching.
>

requirements is now frozen; any review after 538994 will require an FFE.

-- 
Matthew Thode (prometheanfire)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From kennelson11 at gmail.com  Tue Jan 30 00:01:17 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 30 Jan 2018 00:01:17 +0000
Subject: [openstack-dev] [All] [Elections] Rocky PTL Nominations Are Now Open
Message-ID: 

Hello All!

Nominations for OpenStack PTLs (Project Team Leads) are now open and will
remain open until Feb 07, 2018 23:45 UTC.

All nominations must be submitted as a text file to the openstack/election
repository as explained at
http://governance.openstack.org/election/#how-to-submit-your-candidacy

Please make sure to follow the new candidacy file naming convention:
$cycle_name/$project_name/$ircname.txt.

In order to be an eligible candidate (and be allowed to vote) in a given
PTL election, you need to have contributed an accepted patch to one of the
corresponding project teams[0] during the Pike-Queens timeframe (22 Feb
2017 to 29 Jan 2018).

Additional information about the nomination process can be found here:
https://governance.openstack.org/election/

Shortly after election officials approve candidates, they will be listed
here: https://governance.openstack.org/election/#Rocky-ptl-candidates

The electorate is requested to confirm their email address in gerrit[1],
prior to 1 Feb 0:00 UTC so that the emailed ballots are mailed to the
correct email address. This email address should match that which was
provided in your foundation member profile[2] as well.

Happy running,

Kendall Nelson (diablo_rojo)

[0] https://governance.openstack.org/tc/reference/projects/
[1] https://review.openstack.org/#/settings/contact
[2] https://www.openstack.org/profile/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doug at doughellmann.com  Tue Jan 30 00:21:31 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 29 Jan 2018 19:21:31 -0500
Subject: [openstack-dev] [release][searchlight] problem with release job configurations
Message-ID: <1517271277-sup-4518@lrrr.local>

searchlight-ui has a configuration issue that the release team cannot fix
by ourselves. We need input from the searchlight team about how to resolve
it.

As you'll see from [2] the release validation logic is categorizing
searchlight-ui as a horizon-plugin. It is then rejecting the release
request [1] because, according to the settings in project-config, the
repository is configured to use publish-to-pypi instead of the expected
publish-to-pypi-horizon. The difference between the two jobs is the latter
installs horizon before trying to build the package. Many horizon plugins
apparently needed this. We don't know if searchlight does.

There are 2 possible ways to fix the issue:

1. Set release-type to "python-pypi" in [1] to tell the validation code
that publish-to-pypi is the expected job.

2. Change the release job for the repository in project-config.
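For illustration only, option 1 amounts to adding a release-type field to
the deliverable file in openstack/releases, roughly like this. This is a
sketch from memory of the deliverables schema; the version and hash below
are placeholders, not the actual searchlight-ui values:

    type: horizon-plugin
    release-type: python-pypi   # tells validation publish-to-pypi is expected
    releases:
      - version: 1.0.0          # placeholder
        projects:
          - repo: openstack/searchlight-ui
            hash: 0000000000000000000000000000000000000000  # placeholder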
Please let us know which fix is correct by either updating [1] with the
release-type or a Depends-On link to the change in project-config to use
the correct release job.

Doug

[1] https://review.openstack.org/#/c/538321/
[2] http://logs.openstack.org/21/538321/1/check/openstack-tox-validate/3afbe28/tox/validate-request-results.log

From doug at doughellmann.com  Tue Jan 30 00:27:54 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 29 Jan 2018 19:27:54 -0500
Subject: [openstack-dev] [blazar][release] release job configuration issues
Message-ID: <1517271724-sup-5083@lrrr.local>

Both blazar-dashboard and blazar-nova have configuration issues blocking
their release and the release team needs input from the blazar team to
resolve the problems.

The validation output for blazar-dashboard [2] shows that the repo is
being treated as a horizon plugin but it is configured to use the
release-openstack-server jobs. We think the correct way to resolve this
is to update project-config to use publish-to-pypi-horizon. However, if
horizon is not needed then project-config should be updated to use
publish-to-pypi and the release-type in [1] should be updated to
"python-pypi".

The validation output for blazar-nova shows a similar problem [4]. In
this case, we think the correct solution is to change project-config so
that the repo uses publish-to-pypi instead of release-openstack-server.

Please update those settings and update the release requests with
Depends-On links to the project-config patches so we can process the
releases.

Doug

[1] https://review.openstack.org/#/c/538175/
[2] http://logs.openstack.org/75/538175/3/check/openstack-tox-validate/7ed5005/tox/validate-request-results.log
[3] https://review.openstack.org/#/c/538139/
[4] http://logs.openstack.org/39/538139/5/check/openstack-tox-validate/05a7503/tox/validate-request-results.log

From sagarun at gmail.com  Tue Jan 30 00:45:46 2018
From: sagarun at gmail.com (Arun SAG)
Date: Mon, 29 Jan 2018 16:45:46 -0800
Subject: [openstack-dev] Race in FixedIP.associate_pool
In-Reply-To: 
References: 
Message-ID: 

Hello,

On Tue, Dec 12, 2017 at 12:22 PM, Arun SAG  wrote:
> Hello,
>
> We are running nova-network in ocata. We use mysql in a master-slave
> configuration; the master is read/write, and all reads go to the slave
> (slave_connection is set). When we tried to boot multiple VMs in
> parallel (let's say 15), we see a race in allocate_for_instance's
> FixedIP.associate_pool. We see FixedIP.associate_pool associates an
> IP, but later in the code we try to read the allocated FixedIP using
> objects.FixedIPList.get_by_instance_uuid and it throws
> FixedIPNotFoundException.
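To make the failure mode concrete, here is a deliberately simplified
sketch -- plain Python, not the actual nova or oslo.db code -- of the
read-after-write pattern described above, with a background thread
standing in for asynchronous replication:

    import threading
    import time

    master = {}    # stands in for the read/write master connection
    replica = {}   # stands in for the read-only slave_connection

    def replicate_forever():
        # asynchronous replication: rows reach the replica a little later,
        # even when Seconds_Behind_Master reports 0
        while True:
            time.sleep(0.05)
            replica.update(master)

    threading.Thread(target=replicate_forever, daemon=True).start()

    def associate_pool(instance_uuid):
        # like FixedIP.associate_pool: a write, routed to the master
        master[instance_uuid] = '10.0.0.5'

    def get_by_instance_uuid(instance_uuid):
        # like FixedIPList.get_by_instance_uuid: a read, routed to the replica
        if instance_uuid not in replica:
            raise LookupError('FixedIPNotFound')
        return replica[instance_uuid]

    associate_pool('fake-uuid')
    print(get_by_instance_uuid('fake-uuid'))  # the read can beat replication and raise

Run repeatedly, the final read intermittently raises before the "replica"
catches up, which has the same shape as the failure described above.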
> We also checked the slave replication status
> and Seconds_Behind_Master: 0
> [snip]
>
> This is kind of how the logs look:
> 2017-12-08 22:33:37,124 DEBUG
> [yahoo.contrib.ocata_openstack_yahoo_plugins.nova.network.manager]
> /opt/openstack/venv/nova/lib/python2.7/site-packages/yahoo/contrib/ocata_openstack_yahoo_plugins/nova/network/manager.py:get_instance_nw_info:894
> Fixed IP NOT found for instance
> 2017-12-08 22:33:37,125 DEBUG
> [yahoo.contrib.ocata_openstack_yahoo_plugins.nova.network.manager]
> /opt/openstack/venv/nova/lib/python2.7/site-packages/yahoo/contrib/ocata_openstack_yahoo_plugins/nova/network/manager.py:get_instance_nw_info:965
> Built network info: |[]|
> 2017-12-08 22:33:37,126 INFO [nova.network.manager]
> /opt/openstack/venv/nova/lib/python2.7/site-packages/nova/network/manager.py:allocate_for_instance:428
> Allocated network: '[]' for instance
> 2017-12-08 22:33:37,126 ERROR [oslo_messaging.rpc.server]
> /opt/openstack/venv/nova/lib/python2.7/site-packages/oslo_messaging/rpc/server.py:_process_incoming:164
> Exception during message handling
> Traceback (most recent call last):
> File "/opt/openstack/venv/nova/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
> line 155, in _process_incoming
> res = self.dispatcher.dispatch(message)
> File "/opt/openstack/venv/nova/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
> line 222, in dispatch
> return self._do_dispatch(endpoint, method, ctxt, args)
> File "/opt/openstack/venv/nova/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
> line 192, in _do_dispatch
> result = func(ctxt, **new_args)
> File "/opt/openstack/venv/nova/lib/python2.7/site-packages/yahoo/contrib/ocata_openstack_yahoo_plugins/nova/network/manager.py",
> line 347, in allocate_for_instance
> vif = nw_info[0]
> IndexError: list index out of range
>
>
> This problem goes away when we get rid of the slave_connection setting
> and just use a single master. Has anyone else seen this? Any
> recommendation to fix this issue?
>
> This issue is kind of similar to https://bugs.launchpad.net/nova/+bug/1249065
>

If anyone is running into a DB race while running the database in
master-slave mode with async replication, the bug has been identified and
is getting fixed here: https://bugs.launchpad.net/oslo.db/+bug/1746116

-- 
Arun S A G
http://zer0c00l.in/

From sean.mcginnis at gmx.com  Tue Jan 30 00:55:20 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Mon, 29 Jan 2018 18:55:20 -0600
Subject: [openstack-dev] [Release-job-failures] [mistral] Pre-release of openstack/mistral-extra failed
Message-ID: <20180130005519.GA26116@sm-xps>

The mistral-extra package is failing the pre-release check. The commit sha
for the queens-3 milestone is the same as it was for queens-2. This appears
to be the cause of the issue, as the constraints have that last release.

Please take a look and let us know in #openstack-release if there is
anything we can do to help.

Sean

----- Forwarded message from zuul at openstack.org -----

Date: Tue, 30 Jan 2018 00:40:13 +0000
From: zuul at openstack.org
To: release-job-failures at lists.openstack.org
Subject: [Release-job-failures] Pre-release of openstack/mistral-extra failed
Reply-To: openstack-dev at lists.openstack.org

Build failed.
- release-openstack-python http://logs.openstack.org/53/533a5ee424ebccccf6937f03d3b1d9d5b52e8ecb/pre-release/release-openstack-python/44f2fd4/ : FAILURE in 7m 58s - announce-release announce-release : SKIPPED - propose-update-constraints propose-update-constraints : SKIPPED _______________________________________________ Release-job-failures mailing list Release-job-failures at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures ----- End forwarded message ----- From inc007 at gmail.com Tue Jan 30 01:00:35 2018 From: inc007 at gmail.com (=?UTF-8?B?TWljaGHFgiBKYXN0cnrEmWJza2k=?=) Date: Mon, 29 Jan 2018 17:00:35 -0800 Subject: [openstack-dev] [Openstack-operators] [all][kolla][rdo] Collaboration with Kolla for the RDO test days In-Reply-To: References: Message-ID: Cool, thank you David, sign me up!:) On 29 January 2018 at 05:30, David Moreau Simard wrote: > Hi ! > > For those who might be unfamiliar with the RDO [1] community project: > we hang out in #rdo, we don't bite and we build vanilla OpenStack > packages. > > These packages are what allows you to leverage one of the deployment > projects such as TripleO, PackStack or Kolla to deploy on CentOS or > RHEL. > The RDO community collaborates with these deployment projects by > providing trunk and stable packages in order to let them develop and > test against the latest and the greatest of OpenStack. > > RDO test days typically happen around a week after an upstream > milestone has been reached [2]. > The purpose is to get everyone together in #rdo: developers, users, > operators, maintainers -- and test not just RDO but OpenStack itself > as installed by the different deployment projects. > > We tried something new at our last test day [3] and it worked out great. > Instead of encouraging participants to install their own cloud for > testing things, we supplied a cloud of our own... a bit like a limited > duration TryStack [4]. > This lets users without the operational knowledge, time or hardware to > install an OpenStack environment to see what's coming in the upcoming > release of OpenStack and get the feedback loop going ahead of the > release. > > We used Packstack for the last deployment and invited Packstack cores > to deploy, operate and troubleshoot the installation for the duration > of the test days. > The idea is to rotate between the different deployment projects to > give every interested project a chance to participate. > > Last week, we reached out to Kolla to see if they would be interested > in participating in our next RDO test days [5] around February 8th. > We supply the bare metal hardware and their core contributors get to > deploy and operate a cloud with real users and developers poking > around. > All around, this is a great opportunity to get feedback for RDO, Kolla > and OpenStack. > > We'll be advertising the event a bit more as the test days draw closer > but until then, I thought it was worthwhile to share some context for > this new thing we're doing. > > Let me know if you have any questions ! 
> > Thanks, > > [1]: https://www.rdoproject.org/ > [2]: https://www.rdoproject.org/testday/ > [3]: https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/ > [4]: http://trystack.org/ > [5]: http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-01-24-16.00.log.html > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From doug at doughellmann.com Tue Jan 30 01:02:20 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 29 Jan 2018 20:02:20 -0500 Subject: [openstack-dev] [Release-job-failures][mistral][release][requirements] Pre-release of openstack/mistral-extra failed In-Reply-To: References: Message-ID: <1517273885-sup-7518@lrrr.local> Excerpts from zuul's message of 2018-01-30 00:40:13 +0000: > Build failed. > > - release-openstack-python http://logs.openstack.org/53/533a5ee424ebccccf6937f03d3b1d9d5b52e8ecb/pre-release/release-openstack-python/44f2fd4/ : FAILURE in 7m 58s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > This release appears to have failed because tox.ini is set up to use the old style of constraints list management and mistral-extra appears in the constraints list. I don't know why the tox environment is being used to build the package; I thought we stopped doing that. One solution is to fix the tox.ini to put the constraints specification in the "deps" field. The patch [1] to oslo.config making a similar change should show you what is needed. Doug [1] https://review.openstack.org/#/c/524496/1/tox.ini From inc007 at gmail.com Tue Jan 30 01:04:36 2018 From: inc007 at gmail.com (=?UTF-8?B?TWljaGHFgiBKYXN0cnrEmWJza2k=?=) Date: Mon, 29 Jan 2018 17:04:36 -0800 Subject: [openstack-dev] [kolla] Policy regarding template customisation In-Reply-To: References: Message-ID: Hey, So I'm also for option 2. There was big discussion in Atlanta about "how hard it is to keep configs up to date and remove deprecated options". merge_config makes it easier for us to handle this. With amount of services we support I don't think we have enough time to keep tabs on every config change across OpenStack. On 29 January 2018 at 08:03, Steven Dake (stdake) wrote: > Agree, the “why” of this policy is stated here: > > https://docs.openstack.org/developer/kolla-ansible/deployment-philosophy.html > > > > Paul, I think your corrective actions sound good. Perhaps we should also > reword “essential” to some other word that is more lenient. > > > > Cheers > > -steve > > > > From: Jeffrey Zhang > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > > Date: Monday, January 29, 2018 at 7:14 AM > To: "OpenStack Development Mailing List (not for usage questions)" > > Subject: Re: [openstack-dev] [kolla] Policy regarding template customisation > > > > Thank Paul for pointing this out. > > > > for me, I prefer to consist with 2) > > > > There are thousands of configuration in OpenStack, it is hard for Kolla to > > add every key/value pair in playbooks. Currently, the merge_config is a more > > better solutions. > > > > > > > > > > On Mon, Jan 29, 2018 at 7:13 PM, Paul Bourke wrote: > > Hi all, > > I'd like to revisit our policy of not templating everything in > kolla-ansible's template files. 
This is a policy that was set in place very > early on in kolla-ansible's development, but I'm concerned we haven't been > very consistent with it. This leads to confusion for contributors and > operators - "should I template this and submit a patch, or do I need to > start using my own config files?". > > The docs[0] are currently clear: > > "The Kolla upstream community does not want to place key/value pairs in the > Ansible playbook configuration options that are not essential to obtaining a > functional deployment." > > In practice though our templates contain many options that are not > necessary, and plenty of patches have merged that while very useful to > operators, are not necessary to an 'out of the box' deployment. > > So I'd like us to revisit the questions: > > 1) Is kolla-ansible attempting to be a 'batteries included' tool, which > caters to operators via key/value config options? > > 2) Or, is it to be a solid reference implementation, where any degree of > customisation implies a clear 'bring your own configs' type policy. > > If 1), then we should potentially: > > * Update ours docs to remove the referenced paragraph > * Look at reorganising files like globals.yml into something more > maintainable. > > If 2), > > * We should make it clear to reviewers that patches templating options that > are non essential should not be accepted. > * Encourage patches to strip down existing config files to an absolute > minimum. > * Make this policy more clear in docs / templates to avoid frustration on > the part of operators. > > Thoughts? > > Thanks, > -Paul > > [0] > https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > > Regards, > > Jeffrey Zhang > > Blog: http://xcodest.me > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From muroi.masahito at lab.ntt.co.jp Tue Jan 30 02:09:43 2018 From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI) Date: Tue, 30 Jan 2018 11:09:43 +0900 Subject: [openstack-dev] [blazar][release] release job configuration issues In-Reply-To: <1517271724-sup-5083@lrrr.local> References: <1517271724-sup-5083@lrrr.local> Message-ID: <7b92a41c-878b-0906-fa8c-0fac97e17ff3@lab.ntt.co.jp> Thanks for the help. I've already pushed patches for updating the release job of blazar-nova[1] and blazar-dashboard[2]. The two patches are under review now and added as Depends-On links. 1. https://review.openstack.org/#/c/538182/ 2. https://review.openstack.org/#/c/538185/ best regards, Masahito On 2018/01/30 9:27, Doug Hellmann wrote: > Both blazar-dashboard and blazar-nova have configuration issues blocking > their release and the release team needs input from the blazar team to > resolve the problems. > > The validation output for blazar-dashboard [2] shows that the repo is > being treated as a horizon plugin but it is configured to use the > release-openstack-server jobs. We think the correct way to resolve this > is to update project-config to use publish-to-pypi-horizon. 
However, if
> horizon is not needed then project-config should be updated to use
> publish-to-pypi and the release-type in [1] should be updated to
> "python-pypi".
>
> The validation output for blazar-nova shows a similar problem [4]. In
> this case, we think the correct solution is to change project-config so
> that the repo uses publish-to-pypi instead of release-openstack-server.
>
> Please update those settings and update the release requests with
> Depends-On links to the project-config patches so we can process the
> releases.
>
> Doug
>
> [1] https://review.openstack.org/#/c/538175/
> [2] http://logs.openstack.org/75/538175/3/check/openstack-tox-validate/7ed5005/tox/validate-request-results.log
> [3] https://review.openstack.org/#/c/538139/
> [4] http://logs.openstack.org/39/538139/5/check/openstack-tox-validate/05a7503/tox/validate-request-results.log
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From emilien at redhat.com  Tue Jan 30 02:34:55 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 29 Jan 2018 18:34:55 -0800
Subject: [openstack-dev] [tripleo] Queens milestone 3 has been released!
Message-ID: 

Queens milestone 3 has been tagged and stable/queens branch was created for
python-tripleoclient.

Some interesting numbers:
- 178 bugs fixed (171 in pike-3, 110 in ocata-3 and 76 in newton-3).
- 9 blueprints implemented (22 in pike-3, 11 in ocata-3 and 13 in newton-3)

If we count by release (only the 3 milestones, not the RC):
- Queens: 628 bugs fixed and 27 blueprints implemented
- Pike: 511 bugs fixed and 37 blueprints implemented
- Ocata: 282 bugs fixed and 14 blueprints implemented (remember the short
cycle)
- Newton: 129 bugs fixed and 16 blueprints implemented

Good work team! And as usual, kudos to release managers for their eternal
help :)
-- 
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lijie at unitedstack.com  Tue Jan 30 02:54:05 2018
From: lijie at unitedstack.com (李杰)
Date: Tue, 30 Jan 2018 10:54:05 +0800
Subject: [openstack-dev] [nova] Nova rescue inject password failed
In-Reply-To: 
References: 
Message-ID: 

Thank you, Mathieu. Do you know how to use the metadata RESTful service to
inject a password?

------------------ Original ------------------
From:  "Mathieu Gagné";
Date:  Tuesday, 30 January 2018, 3:05 AM
To:  "OpenStack Development";
Subject:  Re: [openstack-dev] [nova] Nova rescue inject password failed

On Mon, Jan 29, 2018 at 4:57 AM, Matthew Booth  wrote:
> On 29 January 2018 at 09:27, 李杰  wrote:
>>
>> Hi, all:
>>         I want to access my instance under the rescue state using the
>> temporary password which nova rescue gave me. But this password doesn't
>> work. Can I ask how this password is injected into the instance? I can't
>> find any specification of how it is done. I saw the code about rescue,
>> but it indicates the password has been injected.
>> I use libvirt as the virt driver. The web said to
>> set "[libvirt]inject_password=true", but it didn't work. Is it a bug? Can
>> you give me some advice? Help in troubleshooting this issue will be
>> appreciated.
> > Matt > -- > Matthew Booth > Red Hat OpenStack Engineer, Compute DFG > Just so you know, cloud-init does not read/support the admin_pass injected in the config-drive: https://bugs.launchpad.net/cloud-init/+bug/1236883 Known bug for years and no fix has been approved yet for various non-technical reasons. -- Mathieu __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From simple_hlw at 163.com Tue Jan 30 04:52:27 2018 From: simple_hlw at 163.com (We We) Date: Tue, 30 Jan 2018 12:52:27 +0800 Subject: [openstack-dev] [requirement][cyborg]FFE - pyspdk requirement dependency Message-ID: <53EADDD3-8A86-445F-A5D9-F5401ABB5309@163.com> Hi, The pyspdk is a important tool library [1] which supports Cyborg SPDK driver [2] to manage the backend SPDK-base app, so we need to upload pyspdk into the pypi [3] and then append 'pyspdk>=0.0.1’ item into ‘OpenStack/Cyborg/requirements.txt’ , so that SPDK driver can be built correctly when zuul runs. However, It's not what we thought it would be, if we want to add the new requirements, we should get support from upstream OpenStack/requirements [4] to append 'pyspdk>=0.0.1’ item. I'm sorry for propose the request so late. Please Please help. [1] https://review.gerrithub.io/#/c/379741/ [2] https://review.openstack.org/#/c/538164/11 [3] https://pypi.python.org/pypi/pyspdk/0.0.1 [4] https://github.com/openstack/requirements Regards, Helloway -------------- next part -------------- An HTML attachment was scrubbed... URL: From wanyenhsu at gmail.com Tue Jan 30 06:24:55 2018 From: wanyenhsu at gmail.com (Wan-yen Hsu) Date: Mon, 29 Jan 2018 22:24:55 -0800 Subject: [openstack-dev] [magnum] Any plan to resume nodegroup work? Message-ID: Hi, I saw magnum nodegroup specs https://review.openstack.org/425422, https://review.openstack.org/433680, and https://review.openstack.org/425431 were last updated a year ago. is there any plan to resume this work or is it superseded by other specs or features? Thanks! Regards, Wan-yen -------------- next part -------------- An HTML attachment was scrubbed... URL: From honjo.rikimaru at po.ntt-tx.co.jp Tue Jan 30 06:49:04 2018 From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo) Date: Tue, 30 Jan 2018 15:49:04 +0900 Subject: [openstack-dev] [masakari] BUG in Masakari Installation and Procedure and/or Documentation In-Reply-To: <35935713-49F4-46C4-A675-A8D3B483A980@windriver.com> References: <265F454E-3330-4C9E-B2A2-1506F2843AA9@windriver.com> <35935713-49F4-46C4-A675-A8D3B483A980@windriver.com> Message-ID: <7556a88f-91c0-b34c-1a2c-cc30cae216d8@po.ntt-tx.co.jp> Hello Greg, Thank you for reporting & researching. On 2018/01/27 5:59, Waines, Greg wrote: > Update on this. > > It turned out that i had incorrectly set the ‘project_name’ and ‘username’ in /etc/masakarimonitors/masakarimonitors.conf > > Setting both these attributes to ‘admin’ made it such that the instancemonitor’s notification to masakari-engine was successful. > e.g. 
> stack at devstack-masakari-louie:~/devstack$ masakari notification-list > +--------------------------------------+----------------------------+---------+--------------------------------------+------+ > | notification_uuid | generated_time | status | source_host_uuid | type | > +--------------------------------------+----------------------------+---------+--------------------------------------+------+ > | b8c6c561-7a93-40a2-8d73-3783024865b4 | 2018-01-26T19:41:29.000000 | running | 51bc8b8b-324f-499a-9166-38c22b3842cd | VM | > +--------------------------------------+----------------------------+---------+--------------------------------------+------+ > stack at devstack-masakari-louie:~/devstack$ > > > However I now get the following error in masakari-engine, when the masakari-engine attempts to do the VM Recovery > > Jan 26 19:41:28 devstack-masakari-louie masakari-engine[11795]: 2018-01-26 19:41:28.968 TRACE masakari.engine.drivers.taskflow.driver EndpointNotFound: publicURL endpoint for compute service named Compute Service not found > > > Why is masakari-engine looking for a publicURL endpoint for service_type=’compute’ and service_name=’Compute Service’ ? I think there is no reason. This default value was added by the following patch. https://review.openstack.org/#/c/388734/ I think this is a bug. Could you report in Launchpad? > See below that the Service Name = ‘nova’ ... NOT ‘Compute Service’ > > stack at devstack-masakari-louie:~/devstack$ openstack endpoint list > +----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+ > | ID | Region | Service Name | Service Type | Enabled | Interface | URL | > +----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+ > | 0111643ef1584decb523524a3db5ce18 | RegionOne | nova_legacy | compute_legacy | True | public | http://10.10.10.14/compute/v2/$(project_id)s | > | 01790448c22f49e69774adf290fba728 | RegionOne | gnocchi | metric | True | internal | http://10.10.10.14/metric | > | 0b31693c6650499a981d580721be9e48 | RegionOne | vitrage | rca | True | internal | http://10.10.10.14:8999 | > | 40f66ed61b4e4310829aa69e11c75554 | RegionOne | neutron | network | True | public | http://10.10.10.14:9696/ | > | 47479cf64af944b996b1fbca42efd945 | RegionOne | nova | compute | True | public | http://10.10.10.14/compute/v2.1 | > | 49dccfc61e8246a2a2c0b8d12b3db91a | RegionOne | vitrage | rca | True | admin | http://10.10.10.14:8999 | > | 5261ba0327de4c2d92842147636ee770 | RegionOne | masakari | ha | True | internal | http://10.10.10.14:15868/v1/$(tenant_id)s | > | 5df28622c6f449ebad12d9b62110cd08 | RegionOne | gnocchi | metric | True | admin | http://10.10.10.14/metric | > | 64f8f401431042a0ab1d053ca4f4df02 | RegionOne | glance | image | True | public | http://10.10.10.14/image | > | 69ad6b9d0b0b4d0a8da6fa36af8289cb | RegionOne | masakari | ha | True | public | http://10.10.10.14:15868/v1/$(tenant_id)s | > | 7dd9d5396e9c49d4a41e2865b841f6a0 | RegionOne | masakari | ha | True | admin | http://10.10.10.14:15868/v1/$(tenant_id)s | > | 811fa7f4b3c14612b4aca354dc8ea77e | RegionOne | vitrage | rca | True | public | http://10.10.10.14:8999 | > | 8535da724c424363bffe1d033ee033e5 | RegionOne | cinder | volume | True | public | http://10.10.10.14/volume/v1/$(project_id)s | > | 853f1783f1014075a03c16f7c3a2568a | RegionOne | keystone | identity | True | admin | http://10.10.10.14/identity | 
> | 9450f5611ca747f2a049f22ff0996dba | RegionOne | cinderv3 | volumev3 | True | public | http://10.10.10.14/volume/v3/$(project_id)s | > | 9a73696d88a9438cb0ab75a754a08e9d | RegionOne | gnocchi | metric | True | public | http://10.10.10.14/metric | > | b1ff2b4d683c4a58a3b27232699d0058 | RegionOne | cinderv2 | volumev2 | True | public | http://10.10.10.14/volume/v2/$(project_id)s | > | d4e66240faff48f2b5e1d0fcfb73a74b | RegionOne | placement | placement | True | public | http://10.10.10.14/placement | > | fda917fd368a4a479c9c186df1beb8e9 | RegionOne | keystone | identity | True | public | http://10.10.10.14/identity | > +----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+ > stack at devstack-masakari-louie:~/devstack$ > > let me know your thoughts, > I don’t mind raising the required BUG in Launchpad if required, > Greg. > > p.s. my masakari configurations ... wrt hosts and segments ... are as follows: > stack at devstack-masakari-louie:~/devstack$ masakari segment-list > +--------------------------------------+-----------+-------------+--------------+-----------------+ > | uuid | name | description | service_type | recovery_method | > +--------------------------------------+-----------+-------------+--------------+-----------------+ > | 9c6e22bd-7fab-40cb-a8e0-3702137f3227 | segment-1 | - | COMPUTE | auto | > +--------------------------------------+-----------+-------------+--------------+-----------------+ > stack at devstack-masakari-louie:~/devstack$ masakari host-list --segment-id 9c6e22bd-7fab-40cb-a8e0-3702137f3227 > +--------------------------------------+-------------------------+---------+--------------------+----------+----------------+--------------------------------------+ > | uuid | name | type | control_attributes | reserved | on_maintenance | failover_segment_id | > +--------------------------------------+-------------------------+---------+--------------------+----------+----------------+--------------------------------------+ > | 51bc8b8b-324f-499a-9166-38c22b3842cd | devstack-masakari-louie | COMPUTE | SSH | False | False | 9c6e22bd-7fab-40cb-a8e0-3702137f3227 | > +--------------------------------------+-------------------------+---------+--------------------+----------+----------------+--------------------------------------+ > stack at devstack-masakari-louie:~/devstack$ > > > > > > From: Greg Waines > Reply-To: "openstack-dev at lists.openstack.org" > Date: Wednesday, January 24, 2018 at 4:13 PM > To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [masakari] BUG in Masakari Installation and Procedure and/or Documentation > > I am looking for some input before I raise a BUG. > > I reviewed the following commits which documented the Masakari and MasakariMonitors Installation and Procedures. > i.e. > https://review.openstack.org/#/c/489570/ > https://review.openstack.org/#/c/489095/ > > I created an AIO devstack with Masakari on current/master ... this morning. > I followed the above instructions on configuring and installing Masakari and MasakariMonitors. > > I created a VM and then ‘sudo kill -9 ’ > and > I got the following error from instance monitoring trying to send the notification message to masakari-engine. > ( The request you have made requires authentication. ) ... see below, > > Is this a known BUG ? > Greg. 
> > > 2018-01-24 20:29:16.902 12473 INFO masakarimonitors.instancemonitor.libvirt_handler.callback [-] Libvirt Event: type=VM, hostname=devstack-masakari-new, uuid=6884cf13-5797-487b-9cb1-053a2e18b60e, time=2018-01-24 20:29:16.902347, event_id=LIFECYCLE, detail=STOPPED_FAILED) > > 2018-01-24 20:29:16.903 12473 INFO masakarimonitors.ha.masakari [-] Send a notification. {'notification': {'hostname': 'devstack-masakari-new', 'type': 'VM', 'payload': {'instance_uuid': '6884cf13-5797-487b-9cb1-053a2e18b60e', 'vir_domain_event': 'STOPPED_FAILED', 'event': 'LIFECYCLE'}, 'generated_time': datetime.datetime(2018, 1, 24, 20, 29, 16, 902347)}} > > 2018-01-24 20:29:16.977 12473 WARNING masakarimonitors.ha.masakari [-] Retry sending a notification. (HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-9c734f56-aca9-40a9-b2dd-3f372de8c34e), The request you have made requires authentication.): HttpException: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-9c734f56-aca9-40a9-b2dd-3f372de8c34e), The request you have made requires authentication. > > ... > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari [-] Exception caught: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-26a5de94-aaad-4f8f-949e-cbfeb5e31b8b), The request you have made requires authentication.: HttpException: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-26a5de94-aaad-4f8f-949e-cbfeb5e31b8b), The request you have made requires authentication. > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari Traceback (most recent call last): > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/masakarimonitors/ha/masakari.py", line 91, in send_notification > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari payload=event['notification']['payload']) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/masakariclient/sdk/ha/v1/_proxy.py", line 65, in create_notification > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self._create(_notification.Notification, **attrs) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/proxy2.py", line 194, in _create > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return res.create(self._session) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/resource2.py", line 588, in create > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari json=request.body, headers=request.headers) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 848, in post > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self.request(url, 'POST', **kwargs) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 64, in map_exceptions_wrapper > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return func(*args, **kwargs) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 352, in request > > 
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return super(Session, self).request(*args, **kwargs) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 573, in request > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari auth_headers = self.get_auth_headers(auth) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 900, in get_auth_headers > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return auth.get_headers(self, **kwargs) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/plugin.py", line 95, in get_headers > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari token = self.get_token(session) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 88, in get_token > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self.get_access(session).auth_token > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 134, in get_access > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari self.auth_ref = self.get_auth_ref(session) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/generic/base.py", line 198, in get_auth_ref > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self._plugin.get_auth_ref(session, **kwargs) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/v3/base.py", line 165, in get_auth_ref > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari authenticated=False, log=False, **rkwargs) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 848, in post > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self.request(url, 'POST', **kwargs) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 66, in map_exceptions_wrapper > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari raise exceptions.from_exception(e) > > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari HttpException: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-26a5de94-aaad-4f8f-949e-cbfeb5e31b8b), The request you have made requires authentication. 
> > 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari > > > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp From muroi.masahito at lab.ntt.co.jp Tue Jan 30 08:03:58 2018 From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI) Date: Tue, 30 Jan 2018 17:03:58 +0900 Subject: [openstack-dev] [requirements][Blazar] FFE - add python-blazarclient in global-requirements Message-ID: Hi requirements team, This is a FFE request for adding python-blazarclient to global-requirements.txt. Blazar team had release problems for updating the blazarclient to pypo. Luckily, the problems are fixed and the client is published at pypi this morning. 1. https://review.openstack.org/#/c/539126/ best regards, Masahito From paul.bourke at oracle.com Tue Jan 30 10:46:08 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Tue, 30 Jan 2018 10:46:08 +0000 Subject: [openstack-dev] [kolla] Policy regarding template customisation In-Reply-To: References: Message-ID: <66061451-5446-b07a-3e12-f5f6f4ed592a@oracle.com> So I think everyone is in agreement that it should be option 2. I'm leaning towards this also but I'm wondering how much of this makes things easier for us as developers rather than operators. How committed this are we in practice? For example, if we take nova.conf[0], if we follow option 2, theoretically all alternate hypervisor options (vmware/xen/nova-fake) etc. should come out and be left to override files. As should options templating options such as metadata_workers, listen ports, etc. globals.yml could probably be half the size it currently is. But if we go this route how many operators will stick with kolla? Maybe it won't be a big deal, the issue currently is the line is blurred on what gets templated and what doesn't. On 30/01/18 01:04, Michał Jastrzębski wrote: > Hey, > > So I'm also for option 2. There was big discussion in Atlanta about > "how hard it is to keep configs up to date and remove deprecated > options". merge_config makes it easier for us to handle this. With > amount of services we support I don't think we have enough time to > keep tabs on every config change across OpenStack. > > On 29 January 2018 at 08:03, Steven Dake (stdake) wrote: >> Agree, the “why” of this policy is stated here: >> >> https://docs.openstack.org/developer/kolla-ansible/deployment-philosophy.html >> >> >> >> Paul, I think your corrective actions sound good. Perhaps we should also >> reword “essential” to some other word that is more lenient. >> >> >> >> Cheers >> >> -steve >> >> >> >> From: Jeffrey Zhang >> Reply-To: "OpenStack Development Mailing List (not for usage questions)" >> >> Date: Monday, January 29, 2018 at 7:14 AM >> To: "OpenStack Development Mailing List (not for usage questions)" >> >> Subject: Re: [openstack-dev] [kolla] Policy regarding template customisation >> >> >> >> Thank Paul for pointing this out. >> >> >> >> for me, I prefer to consist with 2) >> >> >> >> There are thousands of configuration in OpenStack, it is hard for Kolla to >> >> add every key/value pair in playbooks. Currently, the merge_config is a more >> >> better solutions. 
>> >> >> >> >> >> >> >> >> >> On Mon, Jan 29, 2018 at 7:13 PM, Paul Bourke wrote: >> >> Hi all, >> >> I'd like to revisit our policy of not templating everything in >> kolla-ansible's template files. This is a policy that was set in place very >> early on in kolla-ansible's development, but I'm concerned we haven't been >> very consistent with it. This leads to confusion for contributors and >> operators - "should I template this and submit a patch, or do I need to >> start using my own config files?". >> >> The docs[0] are currently clear: >> >> "The Kolla upstream community does not want to place key/value pairs in the >> Ansible playbook configuration options that are not essential to obtaining a >> functional deployment." >> >> In practice though our templates contain many options that are not >> necessary, and plenty of patches have merged that while very useful to >> operators, are not necessary to an 'out of the box' deployment. >> >> So I'd like us to revisit the questions: >> >> 1) Is kolla-ansible attempting to be a 'batteries included' tool, which >> caters to operators via key/value config options? >> >> 2) Or, is it to be a solid reference implementation, where any degree of >> customisation implies a clear 'bring your own configs' type policy. >> >> If 1), then we should potentially: >> >> * Update ours docs to remove the referenced paragraph >> * Look at reorganising files like globals.yml into something more >> maintainable. >> >> If 2), >> >> * We should make it clear to reviewers that patches templating options that >> are non essential should not be accepted. >> * Encourage patches to strip down existing config files to an absolute >> minimum. >> * Make this policy more clear in docs / templates to avoid frustration on >> the part of operators. >> >> Thoughts? >> >> Thanks, >> -Paul >> >> [0] >> https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> -- >> >> Regards, >> >> Jeffrey Zhang >> >> Blog: http://xcodest.me >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thierry at openstack.org Tue Jan 30 10:48:20 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 30 Jan 2018 11:48:20 +0100 Subject: [openstack-dev] [requirements][Blazar] FFE - add python-blazarclient in global-requirements In-Reply-To: References: Message-ID: <0841a16f-b32e-e7a0-160d-39f3ff65a2ac@openstack.org> Masahito MUROI wrote: > Hi requirements team, > > This is a FFE request for adding python-blazarclient to > global-requirements.txt.  Blazar team had release problems for updating > the blazarclient to pypo. 
> Luckily, the problems are fixed and the client was published on PyPI
> this morning.
>
> 1. https://review.openstack.org/#/c/539126/

Looks like it only affects blazar-dashboard, so +1 from me

--
Thierry Carrez (ttx)

From thierry at openstack.org  Tue Jan 30 10:58:17 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Tue, 30 Jan 2018 11:58:17 +0100
Subject: [openstack-dev] [requirement][cyborg]FFE - pyspdk requirement dependency
In-Reply-To: <53EADDD3-8A86-445F-A5D9-F5401ABB5309@163.com>
References: <53EADDD3-8A86-445F-A5D9-F5401ABB5309@163.com>
Message-ID: <64ec8f36-dc4a-e5b1-484f-61938ac68001@openstack.org>

We We wrote:
> pyspdk is an important tool library [1] that supports the Cyborg SPDK
> driver [2] in managing the backend SPDK-based app, so we need to upload
> pyspdk to PyPI [3] and then append a 'pyspdk>=0.0.1' entry to
> 'openstack/cyborg/requirements.txt' so that the SPDK driver can be
> built correctly when Zuul runs. However, it turned out not to be that
> simple: to add the new requirement, we first need support from upstream
> openstack/requirements [4] to append the 'pyspdk>=0.0.1' entry.

Before we talk FFE, pyspdk looks a bit far away from being something
OpenStack code can depend on. In particular:

- it's not clearly licensed under a supported license (no LICENSE file
  in the source code)
- missing metadata entries in setup.cfg mean we are missing a lot of
  context information about this library

Those need to be fixed before we can even consider adding this library
to global requirements...

Cheers,

--
Thierry Carrez (ttx)

From coolsvap at gmail.com  Tue Jan 30 11:03:43 2018
From: coolsvap at gmail.com (Swapnil Kulkarni)
Date: Tue, 30 Jan 2018 16:33:43 +0530
Subject: [openstack-dev] [requirements][Blazar] FFE - add python-blazarclient in global-requirements
In-Reply-To: <0841a16f-b32e-e7a0-160d-39f3ff65a2ac@openstack.org>
References: <0841a16f-b32e-e7a0-160d-39f3ff65a2ac@openstack.org>
Message-ID:

On Tue, Jan 30, 2018 at 4:18 PM, Thierry Carrez wrote:
> Masahito MUROI wrote:
>> Hi requirements team,
>>
>> This is an FFE request for adding python-blazarclient to
>> global-requirements.txt. The Blazar team had release problems when
>> publishing the blazarclient to PyPI.
>>
>> Luckily, the problems are fixed and the client was published on PyPI
>> this morning.
>>
>> 1. https://review.openstack.org/#/c/539126/
>
> Looks like it only affects blazar-dashboard, so +1 from me
>
> --
> Thierry Carrez (ttx)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Looks good to me as well. +1
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From james.page at ubuntu.com  Tue Jan 30 11:08:36 2018
From: james.page at ubuntu.com (James Page)
Date: Tue, 30 Jan 2018 11:08:36 +0000
Subject: [openstack-dev] [charms] Dublin PTG devroom
Message-ID:

Hi Team

The Dublin PTG is not so far away now, so let's make a start on the
agenda for our devroom:

https://etherpad.openstack.org/p/DUB-charms-devroom

We had a fairly formal agenda of design-related topics in Denver for the
first day, and spent most of the second day mini-sprinting on various
features/bugs/issues/niggles etc. I think it worked well - what do
others think?

Please add topics to the pad.
Cheers

James
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vladislav.belogrudov at oracle.com  Tue Jan 30 11:22:27 2018
From: vladislav.belogrudov at oracle.com (vladislav.belogrudov at oracle.com)
Date: Tue, 30 Jan 2018 14:22:27 +0300
Subject: [openstack-dev] [kolla] Policy regarding template customisation
In-Reply-To: <66061451-5446-b07a-3e12-f5f6f4ed592a@oracle.com>
References: <66061451-5446-b07a-3e12-f5f6f4ed592a@oracle.com>
Message-ID: <68db4598-3738-0085-46d0-6bb8bd630363@oracle.com>

Maybe we could move those specific options/templates to sample
overrides? Operators would then move the necessary pieces back into
/etc/kolla/config. Just thinking of config plug-ins / third-party
supported things.

Thanks,
Vlad

On 01/30/2018 01:46 PM, Paul Bourke wrote:
> So I think everyone is in agreement that it should be option 2. I'm
> leaning towards this also, but I'm wondering how much of this makes
> things easier for us as developers rather than for operators.
>
> How committed to this are we in practice? For example, if we take
> nova.conf[0]: if we follow option 2, theoretically all the alternate
> hypervisor options (vmware/xen/nova-fake) etc. should come out and be
> left to override files. As should templated options such as
> metadata_workers, listen ports, etc. globals.yml could probably be
> half the size it currently is. But if we go this route, how many
> operators will stick with kolla? Maybe it won't be a big deal; the
> issue currently is that the line is blurred on what gets templated
> and what doesn't.
>
>> [snip]

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From Greg.Waines at windriver.com  Tue Jan 30 11:50:45 2018
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Tue, 30 Jan 2018 11:50:45 +0000
Subject: [openstack-dev] [masakari] BUG in Masakari Installation and Procedure and/or Documentation
In-Reply-To: <7556a88f-91c0-b34c-1a2c-cc30cae216d8@po.ntt-tx.co.jp>
References: <265F454E-3330-4C9E-B2A2-1506F2843AA9@windriver.com> <35935713-49F4-46C4-A675-A8D3B483A980@windriver.com> <7556a88f-91c0-b34c-1a2c-cc30cae216d8@po.ntt-tx.co.jp>
Message-ID: <3B065A7D-1304-4820-89EE-811E862B9238@windriver.com>

Thanks Honjo,
I reported the bug in the masakari Launchpad.
https://bugs.launchpad.net/masakari/+bug/1746229
Greg.
From: Rikimaru Honjo
Reply-To: "openstack-dev at lists.openstack.org"
Date: Tuesday, January 30, 2018 at 1:49 AM
To: "openstack-dev at lists.openstack.org"
Subject: Re: [openstack-dev] [masakari] BUG in Masakari Installation and Procedure and/or Documentation

Hello Greg,

Thank you for reporting & researching.

On 2018/01/27 5:59, Waines, Greg wrote:

Update on this.
It turned out that I had incorrectly set the 'project_name' and
'username' in /etc/masakarimonitors/masakarimonitors.conf
Setting both of these attributes to 'admin' made the instancemonitor's
notification to masakari-engine succeed.
e.g.

stack at devstack-masakari-louie:~/devstack$ masakari notification-list
+--------------------------------------+----------------------------+---------+--------------------------------------+------+
| notification_uuid                    | generated_time             | status  | source_host_uuid                     | type |
+--------------------------------------+----------------------------+---------+--------------------------------------+------+
| b8c6c561-7a93-40a2-8d73-3783024865b4 | 2018-01-26T19:41:29.000000 | running | 51bc8b8b-324f-499a-9166-38c22b3842cd | VM   |
+--------------------------------------+----------------------------+---------+--------------------------------------+------+
stack at devstack-masakari-louie:~/devstack$

However, I now get the following error in masakari-engine when it
attempts to do the VM recovery:

Jan 26 19:41:28 devstack-masakari-louie masakari-engine[11795]: 2018-01-26 19:41:28.968 TRACE masakari.engine.drivers.taskflow.driver EndpointNotFound: publicURL endpoint for compute service named Compute Service not found

Why is masakari-engine looking for a publicURL endpoint for
service_type='compute' and service_name='Compute Service' ?

I don't think there is a good reason.
This default value was added by the following patch:
https://review.openstack.org/#/c/388734/

I think this is a bug. Could you report it in Launchpad?

See below that the Service Name = 'nova' ...
NOT ‘Compute Service’ stack at devstack-masakari-louie:~/devstack$ openstack endpoint list +----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+ | 0111643ef1584decb523524a3db5ce18 | RegionOne | nova_legacy | compute_legacy | True | public | http://10.10.10.14/compute/v2/$(project_id)s | | 01790448c22f49e69774adf290fba728 | RegionOne | gnocchi | metric | True | internal | http://10.10.10.14/metric | | 0b31693c6650499a981d580721be9e48 | RegionOne | vitrage | rca | True | internal | http://10.10.10.14:8999 | | 40f66ed61b4e4310829aa69e11c75554 | RegionOne | neutron | network | True | public | http://10.10.10.14:9696/ | | 47479cf64af944b996b1fbca42efd945 | RegionOne | nova | compute | True | public | http://10.10.10.14/compute/v2.1 | | 49dccfc61e8246a2a2c0b8d12b3db91a | RegionOne | vitrage | rca | True | admin | http://10.10.10.14:8999 | | 5261ba0327de4c2d92842147636ee770 | RegionOne | masakari | ha | True | internal | http://10.10.10.14:15868/v1/$(tenant_id)s | | 5df28622c6f449ebad12d9b62110cd08 | RegionOne | gnocchi | metric | True | admin | http://10.10.10.14/metric | | 64f8f401431042a0ab1d053ca4f4df02 | RegionOne | glance | image | True | public | http://10.10.10.14/image | | 69ad6b9d0b0b4d0a8da6fa36af8289cb | RegionOne | masakari | ha | True | public | http://10.10.10.14:15868/v1/$(tenant_id)s | | 7dd9d5396e9c49d4a41e2865b841f6a0 | RegionOne | masakari | ha | True | admin | http://10.10.10.14:15868/v1/$(tenant_id)s | | 811fa7f4b3c14612b4aca354dc8ea77e | RegionOne | vitrage | rca | True | public | http://10.10.10.14:8999 | | 8535da724c424363bffe1d033ee033e5 | RegionOne | cinder | volume | True | public | http://10.10.10.14/volume/v1/$(project_id)s | | 853f1783f1014075a03c16f7c3a2568a | RegionOne | keystone | identity | True | admin | http://10.10.10.14/identity | | 9450f5611ca747f2a049f22ff0996dba | RegionOne | cinderv3 | volumev3 | True | public | http://10.10.10.14/volume/v3/$(project_id)s | | 9a73696d88a9438cb0ab75a754a08e9d | RegionOne | gnocchi | metric | True | public | http://10.10.10.14/metric | | b1ff2b4d683c4a58a3b27232699d0058 | RegionOne | cinderv2 | volumev2 | True | public | http://10.10.10.14/volume/v2/$(project_id)s | | d4e66240faff48f2b5e1d0fcfb73a74b | RegionOne | placement | placement | True | public | http://10.10.10.14/placement | | fda917fd368a4a479c9c186df1beb8e9 | RegionOne | keystone | identity | True | public | http://10.10.10.14/identity | +----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+ stack at devstack-masakari-louie:~/devstack$ let me know your thoughts, I don’t mind raising the required BUG in Launchpad if required, Greg. p.s. my masakari configurations ... wrt hosts and segments ... 
are as follows: stack at devstack-masakari-louie:~/devstack$ masakari segment-list +--------------------------------------+-----------+-------------+--------------+-----------------+ | uuid | name | description | service_type | recovery_method | +--------------------------------------+-----------+-------------+--------------+-----------------+ | 9c6e22bd-7fab-40cb-a8e0-3702137f3227 | segment-1 | - | COMPUTE | auto | +--------------------------------------+-----------+-------------+--------------+-----------------+ stack at devstack-masakari-louie:~/devstack$ masakari host-list --segment-id 9c6e22bd-7fab-40cb-a8e0-3702137f3227 +--------------------------------------+-------------------------+---------+--------------------+----------+----------------+--------------------------------------+ | uuid | name | type | control_attributes | reserved | on_maintenance | failover_segment_id | +--------------------------------------+-------------------------+---------+--------------------+----------+----------------+--------------------------------------+ | 51bc8b8b-324f-499a-9166-38c22b3842cd | devstack-masakari-louie | COMPUTE | SSH | False | False | 9c6e22bd-7fab-40cb-a8e0-3702137f3227 | +--------------------------------------+-------------------------+---------+--------------------+----------+----------------+--------------------------------------+ stack at devstack-masakari-louie:~/devstack$ From: Greg Waines > Reply-To: "openstack-dev at lists.openstack.org" > Date: Wednesday, January 24, 2018 at 4:13 PM To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [masakari] BUG in Masakari Installation and Procedure and/or Documentation I am looking for some input before I raise a BUG. I reviewed the following commits which documented the Masakari and MasakariMonitors Installation and Procedures. i.e. https://review.openstack.org/#/c/489570/ https://review.openstack.org/#/c/489095/ I created an AIO devstack with Masakari on current/master ... this morning. I followed the above instructions on configuring and installing Masakari and MasakariMonitors. I created a VM and then ‘sudo kill -9 ’ and I got the following error from instance monitoring trying to send the notification message to masakari-engine. ( The request you have made requires authentication. ) ... see below, Is this a known BUG ? Greg. 2018-01-24 20:29:16.902 12473 INFO masakarimonitors.instancemonitor.libvirt_handler.callback [-] Libvirt Event: type=VM, hostname=devstack-masakari-new, uuid=6884cf13-5797-487b-9cb1-053a2e18b60e, time=2018-01-24 20:29:16.902347, event_id=LIFECYCLE, detail=STOPPED_FAILED) 2018-01-24 20:29:16.903 12473 INFO masakarimonitors.ha.masakari [-] Send a notification. {'notification': {'hostname': 'devstack-masakari-new', 'type': 'VM', 'payload': {'instance_uuid': '6884cf13-5797-487b-9cb1-053a2e18b60e', 'vir_domain_event': 'STOPPED_FAILED', 'event': 'LIFECYCLE'}, 'generated_time': datetime.datetime(2018, 1, 24, 20, 29, 16, 902347)}} 2018-01-24 20:29:16.977 12473 WARNING masakarimonitors.ha.masakari [-] Retry sending a notification. (HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-9c734f56-aca9-40a9-b2dd-3f372de8c34e), The request you have made requires authentication.): HttpException: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-9c734f56-aca9-40a9-b2dd-3f372de8c34e), The request you have made requires authentication. ... 
2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari [-] Exception caught: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-26a5de94-aaad-4f8f-949e-cbfeb5e31b8b), The request you have made requires authentication.: HttpException: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-26a5de94-aaad-4f8f-949e-cbfeb5e31b8b), The request you have made requires authentication. 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari Traceback (most recent call last): 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/masakarimonitors/ha/masakari.py", line 91, in send_notification 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari payload=event['notification']['payload']) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/masakariclient/sdk/ha/v1/_proxy.py", line 65, in create_notification 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self._create(_notification.Notification, **attrs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/proxy2.py", line 194, in _create 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return res.create(self._session) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/resource2.py", line 588, in create 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari json=request.body, headers=request.headers) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 848, in post 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self.request(url, 'POST', **kwargs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 64, in map_exceptions_wrapper 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return func(*args, **kwargs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 352, in request 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return super(Session, self).request(*args, **kwargs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 573, in request 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari auth_headers = self.get_auth_headers(auth) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 900, in get_auth_headers 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return auth.get_headers(self, **kwargs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/plugin.py", line 95, in get_headers 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari token = self.get_token(session) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 88, in get_token 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self.get_access(session).auth_token 2018-01-24 
20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 134, in get_access 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari self.auth_ref = self.get_auth_ref(session) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/generic/base.py", line 198, in get_auth_ref 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self._plugin.get_auth_ref(session, **kwargs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/v3/base.py", line 165, in get_auth_ref 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari authenticated=False, log=False, **rkwargs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 848, in post 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari return self.request(url, 'POST', **kwargs) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari File "/usr/local/lib/python2.7/dist-packages/openstack/session.py", line 66, in map_exceptions_wrapper 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari raise exceptions.from_exception(e) 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari HttpException: HttpException: The request you have made requires authentication. (HTTP 401) (Request-ID: req-26a5de94-aaad-4f8f-949e-cbfeb5e31b8b), The request you have made requires authentication. 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Tue Jan 30 12:41:34 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Tue, 30 Jan 2018 12:41:34 +0000 Subject: [openstack-dev] [masakari] BUG in Masakari Installation and Procedure and/or Documentation In-Reply-To: <3B065A7D-1304-4820-89EE-811E862B9238@windriver.com> References: <265F454E-3330-4C9E-B2A2-1506F2843AA9@windriver.com> <35935713-49F4-46C4-A675-A8D3B483A980@windriver.com> <7556a88f-91c0-b34c-1a2c-cc30cae216d8@po.ntt-tx.co.jp> <3B065A7D-1304-4820-89EE-811E862B9238@windriver.com> Message-ID: <1ABF3DC7-CF21-484A-A293-72040AEE9FE5@windriver.com> FYI ... I tried updating /etc/masakari/masakari.conf with [nova] nova_catalog_admin_info = compute:nova:publicURL and restarting masakari-engine and masakari-api ... but that didn’t work ... not sure why. So I manually changed the default nova_catalog_admin_info to ‘compute:nova:publicURL’ ... in masakari/masakari/conf/nova.py and restarted masakari-engine ... and YAY !!!! ... my usecase demo of masakari non-intrusive instance monitoring worked !!!!. i.e. 
masakari-instancemonitor detected and reported failed VM, masakari-engine recovered the VM automatically. See below: stack at devstack-masakari-louie:~$ nova list +--------------------------------------+-------------+--------+------------+-------------+--------------------------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------+--------+------------+-------------+--------------------------------------------------------+ | 00f377e6-e21f-431a-aba9-31ce13fd974c | vm-1-cirros | ACTIVE | - | Running | private=fd37:1afb:1393:0:f816:3eff:fec9:7f48, 10.0.0.7 | +--------------------------------------+-------------+--------+------------+-------------+--------------------------------------------------------+ stack at devstack-masakari-louie:~$ masakari notification-list +--------------------------------------+----------------------------+---------+--------------------------------------+------+ | notification_uuid | generated_time | status | source_host_uuid | type | +--------------------------------------+----------------------------+---------+--------------------------------------+------+ | 5b535d99-4e02-44a5-bd21-d131d14aaa36 | 2018-01-30T12:10:43.000000 | running | 51bc8b8b-324f-499a-9166-38c22b3842cd | VM | | b8c6c561-7a93-40a2-8d73-3783024865b4 | 2018-01-26T19:41:29.000000 | running | 51bc8b8b-324f-499a-9166-38c22b3842cd | VM | | ed6433c3-939d-4aa8-bf47-3ce8e8c78d45 | 2018-01-26T17:13:03.000000 | running | 51bc8b8b-324f-499a-9166-38c22b3842cd | VM | +--------------------------------------+----------------------------+---------+--------------------------------------+------+ stack at devstack-masakari-louie:~$ stack at devstack-masakari-louie:~$ !ps ps -ef | fgrep qemu libvirt+ 8113 1 2 12:12 ? 00:00:14 /usr/bin/qemu-system-x86_64 -name guest=instance-00000004,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-4-instance-00000004/master-key.aes -machine pc-i440fx-artful,accel=tcg,usb=off,dump-guest-core=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 00f377e6-e21f-431a-aba9-31ce13fd974c -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=17.0.0,serial=dacbc8a8-47c5-4132-86e3-3c902df4cf15,uuid=00f377e6-e21f-431a-aba9-31ce13fd974c,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-4-instance-00000004/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/stack/data/nova/instances/00f377e6-e21f-431a-aba9-31ce13fd974c/disk,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=80,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:c9:7f:48,bus=pci.0,addr=0x3 -add-fd set=1,fd=83 -chardev pty,id=charserial0,logfile=/dev/fdset/1,logappend=on -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on stack 9195 5888 0 12:23 pts/0 00:00:00 grep -F --color=auto qemu stack at devstack-masakari-louie:~$ stack at devstack-masakari-louie:~$ sudo kill -9 8113 ... masakari detects and restarts VM .... 
stack at devstack-masakari-louie:~$ nova list +--------------------------------------+-------------+--------+------------+-------------+--------------------------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------+--------+------------+-------------+--------------------------------------------------------+ | 00f377e6-e21f-431a-aba9-31ce13fd974c | vm-1-cirros | ACTIVE | - | Running | private=fd37:1afb:1393:0:f816:3eff:fec9:7f48, 10.0.0.7 | +--------------------------------------+-------------+--------+------------+-------------+--------------------------------------------------------+ stack at devstack-masakari-louie:~$ masakari notification-list +--------------------------------------+----------------------------+----------+--------------------------------------+------+ | notification_uuid | generated_time | status | source_host_uuid | type | +--------------------------------------+----------------------------+----------+--------------------------------------+------+ | 89679602-494f-4394-ac32-b24f5e9afd83 | 2018-01-30T12:23:48.000000 | finished | 51bc8b8b-324f-499a-9166-38c22b3842cd | VM | | 5b535d99-4e02-44a5-bd21-d131d14aaa36 | 2018-01-30T12:10:43.000000 | running | 51bc8b8b-324f-499a-9166-38c22b3842cd | VM | | b8c6c561-7a93-40a2-8d73-3783024865b4 | 2018-01-26T19:41:29.000000 | running | 51bc8b8b-324f-499a-9166-38c22b3842cd | VM | | ed6433c3-939d-4aa8-bf47-3ce8e8c78d45 | 2018-01-26T17:13:03.000000 | running | 51bc8b8b-324f-499a-9166-38c22b3842cd | VM | +--------------------------------------+----------------------------+----------+--------------------------------------+------+ stack at devstack-masakari-louie:~$ !ps ps -ef | fgrep qemu libvirt+ 9494 1 10 12:23 ? 00:00:10 /usr/bin/qemu-system-x86_64 -name guest=instance-00000004,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-5-instance-00000004/master-key.aes -machine pc-i440fx-artful,accel=tcg,usb=off,dump-guest-core=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 00f377e6-e21f-431a-aba9-31ce13fd974c -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=17.0.0,serial=dacbc8a8-47c5-4132-86e3-3c902df4cf15,uuid=00f377e6-e21f-431a-aba9-31ce13fd974c,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-5-instance-00000004/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/stack/data/nova/instances/00f377e6-e21f-431a-aba9-31ce13fd974c/disk,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=80,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:c9:7f:48,bus=pci.0,addr=0x3 -add-fd set=1,fd=83 -chardev pty,id=charserial0,logfile=/dev/fdset/1,logappend=on -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on stack 9628 5888 0 12:25 pts/0 00:00:00 grep -F --color=auto qemu stack at devstack-masakari-louie:~$ stack at devstack-masakari-louie:~$ YAY !!! Greg. 
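For anyone hitting the same two failures, the thread boils down to a
pair of configuration fixes. A sketch of the working settings as
reported above (section names follow the masakari / masakari-monitors
sample configs - double-check them against your installed version):

    # /etc/masakarimonitors/masakarimonitors.conf
    # fixes the HTTP 401 when instancemonitor sends notifications
    [api]
    project_name = admin
    username = admin

    # /etc/masakari/masakari.conf
    # fixes the EndpointNotFound during VM recovery; note Greg reports
    # this override did not take effect for him and he had to change the
    # default in masakari/conf/nova.py instead (see bug 1746229)
    [nova]
    nova_catalog_admin_info = compute:nova:publicURL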
From: Greg Waines
Reply-To: "openstack-dev at lists.openstack.org"
Date: Tuesday, January 30, 2018 at 6:50 AM
To: "openstack-dev at lists.openstack.org"
Subject: Re: [openstack-dev] [masakari] BUG in Masakari Installation and Procedure and/or Documentation

[snip - earlier messages in this thread, quoted in full above]

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Greg.Waines at windriver.com  Tue Jan 30 12:54:15 2018
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Tue, 30 Jan 2018 12:54:15 +0000
Subject: [openstack-dev] [magnum] [ironic] Why does magnum create instances with ports using 'fixed-ips' ?
In-Reply-To: <7D38D6DF-A5D6-4F29-8554-03B9D2FDCF77@windriver.com>
References: <7D38D6DF-A5D6-4F29-8554-03B9D2FDCF77@windriver.com>
Message-ID: <19ADF669-01A0-4BF3-805A-ADD7F3188980@windriver.com>

Any thoughts on this ?
Greg.

From: Greg Waines
Reply-To: "openstack-dev at lists.openstack.org"
Date: Friday, January 19, 2018 at 3:10 PM
To: "openstack-dev at lists.openstack.org"
Cc: "Nasir, Shoaib"
Subject: [openstack-dev] [magnum] [ironic] Why does magnum create instances with ports using 'fixed-ips' ?

Hey there,

We have just recently integrated MAGNUM into our OpenStack Distribution.

QUESTION: When MAGNUM is creating the 'instances' for the COE master and
minion nodes, WHY does it create the instances with ports using
'fixed-ips', instead of just letting the instance's port DHCP for its IP
address?
I am asking this question because:

* We have also integrated IRONIC into our OpenStack Distribution, and
  - we currently support the simple (somewhat non-multi-tenant)
    networking approach, i.e.
    - the ironic-provisioning-net TENANT NETWORK, used to network boot
      the IRONIC Instances, is owned by ADMIN but shared, so TENANTS can
      create IRONIC instances,
    - AND we do NOT support the functionality to have IRONIC update the
      adjacent switch configuration in order to move the IRONIC instance
      onto a different (TENANT-owned) TENANT NETWORK after the instance
      is created.
  - So it is SORT OF multi-tenant, in the sense that any TENANT can
    create an IRONIC instance; HOWEVER the IRONIC instances of all
    tenants end up on the same TENANT NETWORK.
* In this environment, when we use MAGNUM to create IRONIC COE Nodes,
  - it ONLY works if the ADMIN creates the MAGNUM Cluster,
  - it does NOT work if a TENANT creates the MAGNUM Cluster,
    - because a TENANT can NOT create an instance port with 'fixed-ips'
      on a TENANT NETWORK that it does not own.

appreciate any info on this,
Greg.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From simon.leinen at switch.ch  Tue Jan 30 12:54:19 2018
From: simon.leinen at switch.ch (Simon Leinen)
Date: Tue, 30 Jan 2018 13:54:19 +0100
Subject: [openstack-dev] [kolla] Policy regarding template customisation
In-Reply-To: <66061451-5446-b07a-3e12-f5f6f4ed592a@oracle.com> (Paul Bourke's message of "Tue, 30 Jan 2018 10:46:08 +0000")
References: <66061451-5446-b07a-3e12-f5f6f4ed592a@oracle.com>
Message-ID:

Paul Bourke writes:
> So I think everyone is in agreement that it should be option 2. I'm
> leaning towards this also, but I'm wondering how much of this makes
> things easier for us as developers rather than for operators.

> How committed to this are we in practice? For example, if we take
> nova.conf[0]: if we follow option 2, theoretically all the alternate
> hypervisor options (vmware/xen/nova-fake) etc. should come out and be
> left to override files. As should templated options such as
> metadata_workers, listen ports, etc. globals.yml could probably be
> half the size it currently is. But if we go this route, how many
> operators will stick with kolla? [...]

Operator here. I've been following this discussion. Background: we have
been using puppet-openstack combined with our own Puppet "integration
classes" for several years. All configuration parameters are neatly in
Hiera, so we're used to the "batteries-included" way that Paul describes
under (1). For various reasons, we are also looking at new ways of
provisioning our control plane, including Kolla.

In hindsight, and in my personal opinion, while our previous approach
(1) has somehow felt like the proper way to do things, it hasn't really
paid off for us as an operator, and I would happily try approach (2).

The perceived downside of (2) - or a perceived advantage of (1) - is
that in an ideal world, (1) isolates us from the arcane configuration
file details that the crazy devs of individual services come up with.
In practice, it turns out that (a) those files aren't rocket science,
(b) as an operator you need to understand them anyway, at the latest
when you need to debug stuff, and (c) the deployment tool can easily
become a bottleneck for deploying new features.

This is why I'm happy to embrace the current Kolla philosophy (2).
Sorry if I'm just repeating arguments that led to its adoption in the
first place - I wasn't there when that happened.
--
Simon.
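For readers new to the loop Simon is endorsing, the "bring your own
configs" workflow is short. A hypothetical minimal example (the
inventory name and the option shown are illustrative only):

    mkdir -p /etc/kolla/config
    cat > /etc/kolla/config/nova.conf <<EOF
    [DEFAULT]
    cpu_allocation_ratio = 4.0
    EOF
    kolla-ansible -i multinode reconfigure

Roughly speaking, kolla-ansible merges the fragment over its generated
nova.conf and redeploys the affected containers, so nothing here ever
has to appear in globals.yml.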
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From paul.bourke at oracle.com  Tue Jan 30 13:02:41 2018
From: paul.bourke at oracle.com (Paul Bourke)
Date: Tue, 30 Jan 2018 13:02:41 +0000
Subject: [openstack-dev] [kolla] Policy regarding template customisation
In-Reply-To:
References:
Message-ID:

On 30/01/18 12:54, Simon Leinen wrote:
> The perceived downside of (2) - or a perceived advantage of (1) - is
> that in an ideal world, (1) isolates us from the arcane configuration
> file details that the crazy devs of individual services come up with.
> In practice, it turns out that (a) those files aren't rocket science,
> (b) as an operator you need to understand them anyway,

Thanks very much for taking the time to provide input Simon, it's very
valuable. I think you sum it up well. Approach (1) is definitely easier
for newcomers who want to get up and running quickly without having to
read too deeply into the files they're customising; in reality, though,
anyone going beyond a demo environment will need to do that reading
anyway.

From moshele at mellanox.com  Tue Jan 30 13:11:02 2018
From: moshele at mellanox.com (Moshe Levi)
Date: Tue, 30 Jan 2018 13:11:02 +0000
Subject: [openstack-dev] [tripleo][opendaylight puppet] failed to deploy odl master with latest opendaylight puppet
Message-ID:

Hi all,

We are trying to test a solution for the ODL hardware offload in
TripleO. We have already merged the ODL support for OVS hardware
offload [1]. We are now trying to verify that everything works with
TripleO, but we hit a failure with jetty.xml.orig. For some reason it
gets configured as shown in [2], which causes OpenDaylight to fail to
load HTTP. It seems there is a problem in the opendaylight puppet
module that is misconfiguring jetty.xml.orig.

Any help would be appreciated.

[1] - https://git.opendaylight.org/gerrit/#/c/67704/
[2] - http://paste.openstack.org/show/657899/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mark at stackhpc.com  Tue Jan 30 13:16:37 2018
From: mark at stackhpc.com (Mark Goddard)
Date: Tue, 30 Jan 2018 13:16:37 +0000
Subject: [openstack-dev] [kolla] Policy regarding template customisation
In-Reply-To:
References:
Message-ID:

Sometimes there are features that require different containers to be
deployed, or different config files to be generated. These are things
that cannot be done simply by merging a fixed set of config files.
nova_compute_virt_type is an example of such a variable - various
non-config tasks depend upon it.

I guess the question is: for the supported values of kolla-ansible's
variables, should a minimal working deployment also be supported? Does
this logic inevitably lead to (1), or is it sustainable?

Mark

On 30 January 2018 at 12:54, Simon Leinen wrote:
> [snip - Simon's message is quoted in full above]

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brad at redhat.com  Tue Jan 30 13:16:52 2018
From: brad at redhat.com (Brad P. Crochet)
Date: Tue, 30 Jan 2018 08:16:52 -0500
Subject: [openstack-dev] [Release-job-failures][mistral][release][requirements] Pre-release of openstack/mistral-extra failed
In-Reply-To: <1517273885-sup-7518@lrrr.local>
References: <1517273885-sup-7518@lrrr.local>
Message-ID:

On Mon, Jan 29, 2018 at 8:02 PM, Doug Hellmann wrote:
> Excerpts from zuul's message of 2018-01-30 00:40:13 +0000:
>> Build failed.
>>
>> - release-openstack-python http://logs.openstack.org/53/533a5ee424ebccccf6937f03d3b1d9d5b52e8ecb/pre-release/release-openstack-python/44f2fd4/ : FAILURE in 7m 58s
>> - announce-release announce-release : SKIPPED
>> - propose-update-constraints propose-update-constraints : SKIPPED
>
> This release appears to have failed because tox.ini is set up to use the
> old style of constraints list management and mistral-extra appears in
> the constraints list.
>
> I don't know why the tox environment is being used to build the package;
> I thought we stopped doing that.
>
> One solution is to fix the tox.ini to put the constraints specification
> in the "deps" field. The patch [1] to oslo.config making a similar
> change should show you what is needed.
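For readers who haven't seen this class of failure before, a rough
sketch of the tox.ini change Doug is describing (URLs per the 2018
openstack/requirements layout - verify against the referenced
oslo.config patch before copying):

    # old style: constraints baked into install_command, which breaks
    # when the package being released is itself in the constraints list
    [testenv]
    install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}

    # new style: constraints expressed as a dep instead
    [testenv]
    install_command = pip install {opts} {packages}
    deps = -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
           -r{toxinidir}/test-requirements.txt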
> > Doug > > [1] https://review.openstack.org/#/c/524496/1/tox.ini > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hopefully https://review.openstack.org/539204 fixes it. Brad

From anicolae at lenovo.com Tue Jan 30 13:21:24 2018 From: anicolae at lenovo.com (Anda Nicolae) Date: Tue, 30 Jan 2018 13:21:24 +0000 Subject: [openstack-dev] RHOSP 10 failed overcloud deployment Message-ID:

Hello, As stated in my previous mail on this list, I am trying to deploy OpenStack 10 using OpenStack Platform Director 10. I am using a bare-metal server with Red Hat 7.4, on which I have created 3 VMs: the 1st VM is the undercloud node, the 2nd VM is the overcloud controller node and the 3rd VM is the overcloud compute node. The bare-metal server I am using is also my KVM hypervisor for the overcloud. I managed to provision my overcloud nodes and now I am stuck at the overcloud deployment step. The command I am running is: openstack --debug overcloud deploy --templates ~/templates --control-scale 1 --compute-scale 1 --control-flavor control --compute-flavor compute -e ~/templates/environments/network-isolation.yaml -e ~/templates/environments/network-environment.yaml --ntp-server pool.ntp.org --neutron-network-type vxlan --neutron-tunnel-types vxlan.

I connected via ssh with the heat-admin user on my controller and compute nodes. I've run the following command to gather logs: sudo journalctl -u os-collect-config

I think the problem is on my controller node, because I've noticed the following messages in the output of the above command: os-collect-config[2996]: Source [ec2] Unavailable. os-collect-config[2996]: /var/lib/os-collect-config/local-data not found. Skipping os-collect-config[2996]: No local metadata found (['/var/lib/os-collect-config/local-data'] These messages repeat several times in the output of the above command.

On my undercloud VM, I've noticed that the overcloud deployment remains stuck when running the wait_for_stack_ready function from /usr/lib/python2.7/site-packages/tripleoclient/utils.py. I also intend to add some logs in /usr/lib/python2.7/site-packages/os_collect_config/collect.py to see what causes the error message: Source [ec2] Unavailable

I think I have an error in my templates, but I haven't figured out which yet. Do you happen to know what might have caused this? Thanks, Anda

From emilien at redhat.com Tue Jan 30 13:57:45 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 30 Jan 2018 05:57:45 -0800 Subject: [openstack-dev] [tripleo] Gate resets - do not recheck or approve any patch now Message-ID:

Please do not recheck or approve any patch now, we're having some gate issues; the team is working on it on #tripleo. Thanks, -- Emilien Macchi

From amotoki at gmail.com Tue Jan 30 13:57:52 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 30 Jan 2018 22:57:52 +0900 Subject: [openstack-dev] bug deputy report (week of Jan 22) Message-ID:

Hi neutrinos, Sorry for the slightly late bug deputy report for last week. I think several interesting bugs were reported.
----

[Needs attention]

https://bugs.launchpad.net/neutron/+bug/1745642 SG hybrid iptables driver and FWaaS OVS driver create overlapping conntrack zones

MTU topics on ovs-agent https://bugs.launchpad.net/neutron/+bug/1744101 vxlan interfaces doesn't get MTU https://bugs.launchpad.net/neutron/+bug/1745150 neutron doesn't set MTU on ovs I am not sure whether it depends on the deployment or not. We need to evaluate these carefully; the priority hasn't been decided.

https://bugs.launchpad.net/neutron/+bug/1746000 dnsmasq does not fallback on SERVFAIL It is still in Undecided status. The solution of dropping the strict-order option sounds reasonable to me, but I haven't understood why dnsmasq's strict-order option is required.

https://bugs.launchpad.net/neutron/+bug/1745468 Conntrack entry removal can take a long time on large deployments Brian is working on this, but it is worth attention for the release.

https://bugs.launchpad.net/neutron/+bug/1745386 Update FloatingIP to set QoS policy on it fails It is medium priority, but it is worth fixing as this is one of the new features in Queens. The fix is already proposed.

https://bugs.launchpad.net/neutron/+bug/1745443 cannot restrict /var/lib/neutron permissions I am not sure we can allow the 'dnsmasq' user to access /var/lib/neutron/dhcp instead of /var/lib/neutron. More input on SELinux(?) would be appreciated.

[tricky bugs] https://bugs.launchpad.net/neutron/+bug/1745412 test_l3_agent_scheduler intermittent failures when running locally

[Open question] Neutron haproxy logs are not being collected https://bugs.launchpad.net/devstack/+bug/1744359 -> Is there anything remaining? The merged fix looks complete to me.

[Closed bugs] Many fullstack tests failing because of some error in L3 agent https://bugs.launchpad.net/neutron/+bug/1745013 -> Nice finding, Slawek. It was an eventlet bug. http://lists.openstack.org/pipermail/openstack-dev/2018-January/126580.html

From thierry at openstack.org Tue Jan 30 14:11:24 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 30 Jan 2018 15:11:24 +0100 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> Message-ID:

Thierry Carrez wrote: > Here is the proposed pre-allocated track schedule for the Dublin PTG: > > https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true

Following feedback I made small adjustments to the Kuryr and OpenStack-Charms allocations. The track schedule is about to be published on the event website, so now is your last chance to signal critical issues with it! -- Thierry Carrez (ttx)

From johfulto at redhat.com Tue Jan 30 14:27:21 2018 From: johfulto at redhat.com (John Fulton) Date: Tue, 30 Jan 2018 09:27:21 -0500 Subject: [openstack-dev] [tripleo] CI'ing ceph-ansible against TripleO scenarios In-Reply-To: <74a2641a-a2af-771b-3e17-8ccadfd06e2e@redhat.com> References: <20180126004931.GA8048@localhost.localdomain> <74a2641a-a2af-771b-3e17-8ccadfd06e2e@redhat.com> Message-ID:

On Fri, Jan 26, 2018 at 7:29 AM, Giulio Fidente wrote: > On 01/26/2018 01:49 AM, Paul Belanger wrote: >> On Thu, Jan 25, 2018 at 04:22:56PM -0800, Emilien Macchi wrote: >>> Are there any plans to run TripleO CI jobs in ceph-ansible? >>> I know the project is on github but thanks to zuulv3 we can now easily >>> configure ceph-ansible to run CI jobs in OpenStack Infra.
>>> It would be really great to investigate that in the near future so we avoid >>> eventual regressions. >>> Sebastien, Giulio, John, thoughts? >>> -- >>> Emilien Macchi >> >> Just a note, we haven't actually agreed to enable CI for github projects just >> yet. While it is something zuul can do now, I believe we still need to decide >> when / how to enable it. >> >> We are doing some initial testing with ansible/ansible however. > > but we like being on the front line! :D > > we discussed this same topic with Sebastien and John a few weeks back > and agreed on having some gate job for ceph-ansible CI'ing against TripleO! > > how do we start? I think the candidate branch on ceph-ansible to gate is > "beta-3.1" but there will be more ... I am just not sure we're stable > enough to gate master yet ... but we might do it non-voting, it's up for > debate > > on TripleO side we'd be looking at running scenarios 001 and 004 ... > maybe initially 004 only is good enough as it covers (at least for ceph) > most of what is in 001 as well > > can we continue on IRC? :D > > and thanks Emilien and Paul for starting the thread and helping

+1 We talked about this today and there's agreement from Seb to get a job in place for this. I think we could start with the following plan: - A simple extra job in ceph-ansible's CI on github which runs ceph-ansible with the same params that TripleO uses - ceph-ansible triggering zuul: -- I'm hoping I can get a test in place in a test github area with the help of pabelanger (do we have a living example on github of this?) -- Then I can pursue making a real ceph-ansible CI job from the living example Thoughts? John

From kendall at openstack.org Tue Jan 30 14:35:51 2018 From: kendall at openstack.org (Kendall Waters) Date: Tue, 30 Jan 2018 08:35:51 -0600 Subject: [openstack-dev] PTG Dublin - Price Increase this Thursday Message-ID: <7FA0A5F3-82D4-48FE-A407-6F85AF4E57C7@openstack.org>

Hi everyone, We are four weeks out from the Dublin Project Teams Gathering (February 26 - March 2nd), and we are expecting the event to sell out! You have two more days to book your ticket at the normal price. We'll switch to the last-minute price (USD $200) on Thursday, February 1st at 12 noon CT (18:00 UTC). So go and grab your ticket before the price increases! [1] Cheers, Kendall [1] https://rockyptg.eventbrite.com

From jankihc91 at gmail.com Tue Jan 30 14:57:34 2018 From: jankihc91 at gmail.com (Janki Chhatbar) Date: Tue, 30 Jan 2018 20:27:34 +0530 Subject: [openstack-dev] [tripleo][opendaylight puppet] failed to deploy odl master with latest opendaylight puppet In-Reply-To: References: Message-ID:

Hi Tim, Can this be because of a version bump? Moshe is using a custom-built ODL RPM based on master.

---------- Forwarded message ---------- From: Moshe Levi Date: Tue, Jan 30, 2018 at 6:41 PM Subject: [openstack-dev] [tripleo][opendaylight puppet] failed to deploy odl master with latest opendaylight puppet To: Janki Chhatbar , "OpenStack Development Mailing List (not for usage questions)" , "integration-dev at lists.opendaylight.org" < integration-dev at lists.opendaylight.org> Cc: Sulaiman Radwan , Hasan Qunoo < hasanq at mellanox.com>

Hi all, We are trying to test the solution for ODL HW offload in TripleO. We have already merged the ODL support for OVS hardware offload [1]. We are trying to verify that everything is working with TripleO, but we get a failure with jetty.xml.orig.
For some reason it gets configured like [2], causing OpenDaylight to fail to load HTTP. It seems there is a problem with the OpenDaylight puppet module that is misconfiguring jetty.xml.orig. Any help would be appreciated.

[1] - https://git.opendaylight.org/gerrit/#/c/67704/ [2] - http://paste.openstack.org/show/657899/ -- Thanking you Janki Chhatbar OpenStack | Docker | SDN simplyexplainedblog.wordpress.com

From gord at live.ca Tue Jan 30 15:01:44 2018 From: gord at live.ca (gordon chung) Date: Tue, 30 Jan 2018 15:01:44 +0000 Subject: [openstack-dev] [nova][ceilometer] versioned notifications coverage Message-ID:

hi, we've had an open item to consume versioned notifications in ceilometer. the question that remains is: do all unversioned notifications have a versioned equivalent, or are there still some items missing? the blocker for us is that we can't consume both, as then we'd end up duplicating data, but we also can't consume versioned notifications exclusively or we'll miss data. apologies if this is captured somewhere, just figured this is easier. cheers, -- gord

From lbragstad at gmail.com Tue Jan 30 15:01:49 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 30 Jan 2018 09:01:49 -0600 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 22 January 2018 In-Reply-To: References: Message-ID:

On 01/27/2018 01:16 PM, Colleen Murphy wrote: > # Keystone Team Update - Week of 22 January 2018 > > ## News > > ### Feature freeze > > This week was feature freeze and client freeze. While we approved > everything we cared about for this release on time, some CI issues > (some unexpected and some predictable) delayed these features being > merged. The release team has extended the freeze deadline to Monday, > which should (hopefully) give us enough time for the last few changes > to land before we release RC1. > > ### RC bugs > > We've started compiling a list of potential release-critical bugs[1]. > Please continue to report bugs as you find them in the RC, and also > please focus your attention on fixing these bugs and reviewing > bugfixes.

Looks like everything that is a possible RC bug is targeted accordingly. Here is a LP link in case that's easier for some people to track [0]. [0] https://goo.gl/A5Wz4Z

> > [1] https://etherpad.openstack.org/p/keystone-queens-bug-list > > ### API Discovery > > We had some interesting discussions this week about experimental APIs > and API discovery[2][3]. This was partly in the context of our new > "unified limits" API, which is step 1 in providing a cross-project > service where quotas for projects could be set and retrieved by other > OpenStack services. We're marking this API as "experimental" for the > time being while we shake out some of the cross-project usage patterns > we'll need to support, but this poses a discoverability problem. We > already expose a "home document" which lists all of our API routes and > their statuses, e.g. whether they're tagged as "experimental". While > this seems like a really useful feature for API consumers as well as a > great way to expose experimental features without committing to > stability, it seems like the JSON-home standard never quite made it > off the ground, so it's not a standard we can rely on API consumers > supporting.
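As a hedged aside, consuming that home document is already only a few lines of client code. Note that the 'status' hint key in this sketch is an assumption based on the json-home draft rather than a verified keystone contract, so adjust it to whatever your keystone actually returns:

    # Hedged sketch: fetch keystone's JSON-home document and list the
    # routes whose hints mark them experimental.
    import requests

    def experimental_routes(keystone_url='http://localhost:5000/v3'):
        resp = requests.get(
            keystone_url, headers={'Accept': 'application/json-home'})
        resp.raise_for_status()
        for rel, res in resp.json().get('resources', {}).items():
            # "status" in hints is assumed from the json-home draft.
            if res.get('hints', {}).get('status') == 'experimental':
                yield rel, res.get('href') or res.get('href-template')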
> However, we could certainly build off of what we already have to enhance our API discoverability > > [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-01-24.log.html#t2018-01-24T22:27:50 > [3] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-01-25.log.html#t2018-01-25T14:43:46 > [4] https://mnot.github.io/I-D/json-home/ > > ### GSoC Projects > > OpenStack is applying to participate in the Google Summer of Code > project[5]. We've started compiling a list of potential projects that > a GSoC intern could work on[6]. Please help us add to the list! And if > you're interested in being a mentor, please step up! We'll likely > discuss this more at the next keystone meeting. > > [5] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-01-25.log.html#t2018-01-25T14:38:28 > [6] https://etherpad.openstack.org/p/keystone-internship-ideas > > ## Recently Merged Changes > > Search query: https://goo.gl/hdD9Kw > > We merged 49 changes this week, though we approved quite a few that > are still making their way through the gate, including changes that > are part of our main feature objectives. > > ## Changes that need Attention > > Search query: https://goo.gl/h9knRA > > There are 36 changes that are passing CI, not in merge conflict, have > no negative reviews and aren't proposed by bots. Expect to see a lot > more as we bugstomp over the next two weeks. > > ## Milestone Outlook > > https://releases.openstack.org/queens/schedule.html > > This week marked feature freeze and client freeze, but due to a number > of CI problems the release team has extended the feature freeze till > Monday and the client freeze until Tuesday[7]. This just means the > approved changes that we still have moving through CI should hopefully > have time to finish and be merged. > > [7] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126621.html > > ## Shout-outs > > Thanks to the whole team for working so hard this week! > > ## Help with this newsletter > > Help contribute to this newsletter by editing the etherpad: > https://etherpad.openstack.org/p/keystone-team-newsletter > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From steve.mclellan at hpe.com Tue Jan 30 15:03:59 2018 From: steve.mclellan at hpe.com (McLellan, Steven) Date: Tue, 30 Jan 2018 15:03:59 +0000 Subject: [openstack-dev] [release][searchlight] problem with release job configurations In-Reply-To: <1517271277-sup-4518@lrrr.local> References: <1517271277-sup-4518@lrrr.local> Message-ID: <27DB2B6B-294D-4B13-9333-345AFCE134FC@hpe.com>

Hi Doug, Apologies, I was travelling all day yesterday. I've put up https://review.openstack.org/539231 to change the project config and made the release review (https://review.openstack.org/538321) depend on it. Thanks for the detailed information! Steve

On 1/29/18, 6:21 PM, "Doug Hellmann" wrote: searchlight-ui has a configuration issue that the release team cannot fix by ourselves.
We need input from the searchlight team about how to resolve it. As you'll see from [2] the release validation logic is categorizing searchlight-ui as a horizon-plugin. It is then rejecting the release request [1] because, according to the settings in project-config, the repository is configured to use publish-to-pypi instead of the expected publish-to-pypi-horizon. The difference between the two jobs is the latter installs horizon before trying to build the package. Many horizon plugins apparently needed this. We don't know if searchlight does. There are 2 possible ways to fix the issue: 1. Set release-type to "python-pypi" in [1] to tell the validation code that publish-to-pypi is the expected job. 2. Change the release job for the repository in project-config. Please let us know which fix is correct by either updating [1] with the release-type or a Depends-On link to the change in project-config to use the correct release job. Doug [1] https://review.openstack.org/#/c/538321/ [2] http://logs.openstack.org/21/538321/1/check/openstack-tox-validate/3afbe28/tox/validate-request-results.log __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Tue Jan 30 15:10:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 30 Jan 2018 10:10:29 -0500 Subject: [openstack-dev] [Release-job-failures][mistral][release][requirements] Pre-release of openstack/mistral-extra failed In-Reply-To: References: <1517273885-sup-7518@lrrr.local> Message-ID: <1517325018-sup-9841@lrrr.local> Excerpts from Brad P. Crochet's message of 2018-01-30 08:16:52 -0500: > On Mon, Jan 29, 2018 at 8:02 PM, Doug Hellmann wrote: > > Excerpts from zuul's message of 2018-01-30 00:40:13 +0000: > >> Build failed. > >> > >> - release-openstack-python http://logs.openstack.org/53/533a5ee424ebccccf6937f03d3b1d9d5b52e8ecb/pre-release/release-openstack-python/44f2fd4/ : FAILURE in 7m 58s > >> - announce-release announce-release : SKIPPED > >> - propose-update-constraints propose-update-constraints : SKIPPED > >> > > > > This release appears to have failed because tox.ini is set up to use the > > old style of constraints list management and mistral-extra appears in > > the constraints list. > > > > I don't know why the tox environment is being used to build the package; > > I thought we stopped doing that. > > > > One solution is to fix the tox.ini to put the constraints specification > > in the "deps" field. The patch [1] to oslo.config making a similar > > change should show you what is needed. > > > > Doug > > > > [1] https://review.openstack.org/#/c/524496/1/tox.ini > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > Hopefully https://review.openstack.org/539204 fixes it. > > Brad > Thanks! 
From doug at doughellmann.com Tue Jan 30 15:11:20 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 30 Jan 2018 10:11:20 -0500 Subject: [openstack-dev] [release][searchlight] problem with release job configurations In-Reply-To: <27DB2B6B-294D-4B13-9333-345AFCE134FC@hpe.com> References: <1517271277-sup-4518@lrrr.local> <27DB2B6B-294D-4B13-9333-345AFCE134FC@hpe.com> Message-ID: <1517325068-sup-9352@lrrr.local> Excerpts from McLellan, Steven's message of 2018-01-30 15:03:59 +0000: > Hi Doug, > > Apologies, I was travelling all day yesterday. I've put up https://review.openstack.org/539231 to change the project config and made the release review (https://review.openstack.org/538321) depend on it. > > Thanks for the detailed information! > > Steve Thanks for hopping on the fix so quickly, Steve. > > On 1/29/18, 6:21 PM, "Doug Hellmann" wrote: > > Both searchlight-ui has a configuration issue that the release team > cannot fix by ourselves. We need input from the searchlight team about > how to resolve it. > > As you'll see from [2] the release validation logic is categorizing > searchlight-ui as a horizon-plugin. It is then rejecting the release > request [1] because, according to the settings in project-config, > the repository is configured to use publish-to-pypi instead of the > expected publish-to-pypi-horizon. > > The difference between the two jobs is the latter installs horizon > before trying to build the package. Many horizon plugins apparently > needed this. We don't know if searchlight does. > > There are 2 possible ways to fix the issue: > > 1. Set release-type to "python-pypi" in [1] to tell the validation code > that publish-to-pypi is the expected job. > 2. Change the release job for the repository in project-config. > > Please let us know which fix is correct by either updating [1] with the > release-type or a Depends-On link to the change in project-config to use > the correct release job. > > Doug > > > [1] https://review.openstack.org/#/c/538321/ > [2] http://logs.openstack.org/21/538321/1/check/openstack-tox-validate/3afbe28/tox/validate-request-results.log > From mriedemos at gmail.com Tue Jan 30 15:15:11 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 30 Jan 2018 09:15:11 -0600 Subject: [openstack-dev] [nova][osc] How to deal with add/remove fixed/floating CLIs after novaclient 10.0.0? Message-ID: <6d1f4bb0-f7c9-cc32-9339-55701eee0b7c@gmail.com> The 10.0.0 release of python-novaclient dropped some deprecated CLIs and python API bindings for the server actions to add/remove fixed and floating IPs: https://docs.openstack.org/releasenotes/python-novaclient/queens.html#id2 python-openstackclient was using some of those python API bindings from novaclient which now no longer work: https://bugs.launchpad.net/python-openstackclient/+bug/1745795 I've at least identified where the broken code is: https://review.openstack.org/#/c/538539/ The question I'm struggling with is how to resolve this in OSC. We can't just remove the CLIs without a deprecation period. I thought about doing something similar to the proxy orchestration that the compute API does for these actions, but that gets a bit complicated in OSC if we are going to make the CLI support both nova-network and neutron. 
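(A hedged sketch of the direct-REST-call option weighed below: an OSC command could talk to the compute endpoint through a keystoneauth1 Adapter, capping the microversion at 2.43 so the deprecated proxy action still exists. The function name and flow here are illustrative only, not OSC's actual implementation; "sess" is assumed to be an already-authenticated keystoneauth1 Session.)

    # Hedged sketch only: call the deprecated addFloatingIp server action
    # directly, pinned to compute microversion 2.43 (the proxy APIs were
    # removed in 2.44). Error handling is omitted.
    from keystoneauth1 import adapter

    def add_floating_ip(sess, server_id, address):
        compute = adapter.Adapter(session=sess, service_type='compute')
        return compute.post(
            '/servers/%s/action' % server_id,
            json={'addFloatingIp': {'address': address}},
            # This header form is valid for compute microversions >= 2.27.
            headers={'OpenStack-API-Version': 'compute 2.43'})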
Furthermore, if we did still support nova-network in those CLIs, the novaclient python API bindings are gone, so OSC would have to do something different - likely make its own REST API calls to the compute API at a low enough microversion where those APIs still exist (<=2.43): https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id39

So if we had to make a straight-up REST API call to the compute endpoint anyway for the CLIs to still work, it seems easiest to just do that for now in both the nova-network and neutron cases. The OSC CLI would make a request to the compute API at a microversion <= 2.43 and the compute service does the proxy work (albeit deprecated). In the OSC CLI, we could emit a deprecation warning if you're using these with nova-network, since that is deprecated and the plan is to drop support for nova-network altogether in Rocky. Users of OSC could still be using newer versions of the client with older clouds that still run nova-network, but the deprecation timer would at least be set.

Are there other options here? Or some other precedent for a situation like this? -- Thanks, Matt

From mriedemos at gmail.com Tue Jan 30 15:19:28 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 30 Jan 2018 09:19:28 -0600 Subject: [openstack-dev] [nova][ceilometer] versioned notifications coverage In-Reply-To: References: Message-ID:

On 1/30/2018 9:01 AM, gordon chung wrote: > hi, > > we've had an open item to consume versioned notifications in ceilometer. > the question that remains is: do all unversioned notifications have a > versioned equivalent, or are there still some items missing? > > the blocker for us is that we can't consume both, as then we'd end up > duplicating data, but we also can't consume versioned notifications > exclusively or we'll miss data. > > apologies if this is captured somewhere, just figured this is easier. > > cheers, >

Gibi's burndown chart is here to see what's remaining: http://burndown.peermore.com/nova-notification/ And this is the list of what's already done: https://docs.openstack.org/nova/latest/reference/notifications.html#versioned-notification-samples -- Thanks, Matt

From colleen at gazlene.net Tue Jan 30 15:33:35 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Tue, 30 Jan 2018 16:33:35 +0100 Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG? Message-ID:

At the last PTG we had some time on Monday and Tuesday for cross-project discussions related to baremetal and VM management. We don't currently have that on the schedule for this PTG. There is still some free time available that we can ask for[1]. Should we try to schedule some time for this?

From a keystone perspective, some things we'd like to talk about with the BM/VM teams are:

- Unified limits[2]: we now have a basic REST API for registering limits in keystone. Next steps are building out libraries that can consume this API and calculate quota usage and limit allocation, and developing models for quotas in project hierarchies. Input from other projects is essential here. - RBAC: we've introduced "system scope"[3] to fix the admin-ness problem, and we'd like to guide other projects through the migration.
- Application credentials[4]: the main part of this work is largely done; next steps are implementing better access control for it, which is largely just a keystone team problem, but we could also use this time for feedback on the implementation so far

There are likely some non-keystone-related things that might be at home in a dedicated BM/VM room too. Do we want to have a dedicated day or two for these projects? Or perhaps not dedicated days, but planned-in-advance meeting time? Or should we wait and schedule it ad-hoc if we feel like we need it?

Colleen

[1] https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true [2] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html [3] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html [4] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html

From mgagne at calavera.ca Tue Jan 30 15:37:35 2018 From: mgagne at calavera.ca (Mathieu Gagné) Date: Tue, 30 Jan 2018 10:37:35 -0500 Subject: [openstack-dev] [nova][ceilometer] versioned notifications coverage In-Reply-To: References: Message-ID:

On Tue, Jan 30, 2018 at 10:19 AM, Matt Riedemann wrote: > > Gibi's burndown chart is here to see what's remaining: > > http://burndown.peermore.com/nova-notification/ > > And this is the list of what's already done: > > https://docs.openstack.org/nova/latest/reference/notifications.html#versioned-notification-samples >

So I just discovered this documentation. It's awesome! Great job to everyone involved! Now I wish we had the same in all projects. =) -- Mathieu

From prometheanfire at gentoo.org Tue Jan 30 15:50:54 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 30 Jan 2018 09:50:54 -0600 Subject: [openstack-dev] [requirement][cyborg]FFE - pyspdk requirement dependency In-Reply-To: <64ec8f36-dc4a-e5b1-484f-61938ac68001@openstack.org> References: <53EADDD3-8A86-445F-A5D9-F5401ABB5309@163.com> <64ec8f36-dc4a-e5b1-484f-61938ac68001@openstack.org> Message-ID: <20180130155054.bxfafzebvlnvgtzi@gentoo.org>

On 18-01-30 11:58:17, Thierry Carrez wrote: > We We wrote: > > The pyspdk is an important tool library [1] which supports the Cyborg SPDK > > driver [2] to manage the backend SPDK-based app, so we need to upload > > pyspdk to PyPI [3] and then append a 'pyspdk>=0.0.1' item to > > 'OpenStack/Cyborg/requirements.txt', so that the SPDK driver can be built > > correctly when zuul runs. However, it's not what we thought it would be: > > if we want to add the new requirement, we need support from > > upstream OpenStack/requirements [4] to append the 'pyspdk>=0.0.1' item. > > Before we talk FFE, pyspdk looks a bit far away from being something > OpenStack code can depend on. In particular: > > - it's not clearly licensed under a supported license (no LICENSE file > in the source code) > - Missing metadata entries in setup.cfg mean we are missing a lot of > context information about this library > > Those need to be fixed before we can even consider adding this library > to global requirements... >

I agree. Also, there's no mention of python3 or testing in the repo. The licence looks like GPLv3, which is fine for non-openstack code. There are also no commits since the initial commit. Unless I'm looking at the wrong repo, I'm going to say no to this FFE.
https://github.com/lschw/pyspdf -- Matthew Thode (prometheanfire)

From prometheanfire at gentoo.org Tue Jan 30 15:53:17 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 30 Jan 2018 09:53:17 -0600 Subject: [openstack-dev] [requirements][Blazar] FFE - add python-blazarclient in global-requirements In-Reply-To: References: Message-ID: <20180130155317.jdemeldbqjuhv27e@gentoo.org>

On 18-01-30 17:03:58, Masahito MUROI wrote: > Hi requirements team, > > This is a FFE request for adding python-blazarclient to > global-requirements.txt. The Blazar team had release problems updating the > blazarclient on PyPI. > > Luckily, the problems are fixed and the client was published on PyPI this > morning. > > 1. https://review.openstack.org/#/c/539126/ >

LGTM, +2 -- Matthew Thode (prometheanfire)

From mriedemos at gmail.com Tue Jan 30 16:00:30 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 30 Jan 2018 10:00:30 -0600 Subject: [openstack-dev] [tc] Technical Committee Status update, January 26th In-Reply-To: <11a04045-2cbd-006c-04f9-a9da33018d6f@openstack.org> References: <11a04045-2cbd-006c-04f9-a9da33018d6f@openstack.org> Message-ID:

On 1/26/2018 10:26 AM, Thierry Carrez wrote: > == Rocky goals == > > We are in the final steps of selecting a set of community goals for > Rocky. We need wide community input on which goals are doable and make > the most sense! Please see the list of proposed goals and associated > champions: > > * Storyboard Migration [3] (diablo_rojo) > * Remove mox [4] (chandankumar) > * Ensure pagination links [5] (mordred) > * Add Cold upgrades capabilities [6] (masayuki) > * Enable mutable configuration [7] (gcb) > > [3]https://review.openstack.org/513875 > [4]https://review.openstack.org/532361 > [5]https://review.openstack.org/532627 > [6]https://review.openstack.org/#/c/533544/ > [7]https://review.openstack.org/534605 > > NB: mriedem suggested on the ML that we wait until the PTG in Dublin to > make the final call. It gives more time to carefully consider the goals, > but delays the start of the work and makes planning pre-PTG a bit more > difficult.

I just threw "Use keystoneauth1 Adapter for consistent inter-service configuration" into the etherpad: https://etherpad.openstack.org/p/community-goals Eric didn't think he'd have the bandwidth to champion this goal right now, but if someone wanted to pick it up I think it would be pretty straightforward. -- Thanks, Matt

From jim at jimrollenhagen.com Tue Jan 30 16:03:14 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 30 Jan 2018 11:03:14 -0500 Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG? In-Reply-To: References: Message-ID:

On Tue, Jan 30, 2018 at 10:33 AM, Colleen Murphy wrote: > At the last PTG we had some time on Monday and Tuesday for > cross-project discussions related to baremetal and VM management. We > don't currently have that on the schedule for this PTG. There is still > some free time available that we can ask for[1]. Should we try to > schedule some time for this? >

I'd attend for the topics you list below, FWIW.
> > From a keystone perspective, some things we'd like to talk about with > > the BM/VM teams are: > > > > - Unified limits[2]: we now have a basic REST API for registering > > limits in keystone. Next steps are building out libraries that can > > consume this API and calculate quota usage and limit allocation, and > > developing models for quotas in project hierarchies. Input from other > > projects is essential here. > > - RBAC: we've introduced "system scope"[3] to fix the admin-ness > > problem, and we'd like to guide other projects through the migration. > > - Application credentials[4]: the main part of this work is largely > > done; next steps are implementing better access control for it, which > > is largely just a keystone team problem, but we could also use this > > time for feedback on the implementation so far > > > > There are likely some non-keystone-related things that might be at home > > in a dedicated BM/VM room too. Do we want to have a dedicated day or > > two for these projects? Or perhaps not dedicated days, but > > planned-in-advance meeting time? Or should we wait and schedule it > > ad-hoc if we feel like we need it? > >

There's always plenty to discuss between nova and ironic, but we usually just schedule those topics somewhat ad-hoc. Never opposed to some dedicated time if folks will show up, though. :) // jim

From gord at live.ca Tue Jan 30 16:25:09 2018 From: gord at live.ca (gordon chung) Date: Tue, 30 Jan 2018 16:25:09 +0000 Subject: [openstack-dev] [nova][ceilometer] versioned notifications coverage In-Reply-To: References: Message-ID:

On 2018-01-30 10:19 AM, Matt Riedemann wrote: > Gibi's burndown chart is here to see what's remaining: > > http://burndown.peermore.com/nova-notification/

this answer far exceeded my expectations :) thanks! -- gord

From mriedemos at gmail.com Tue Jan 30 17:17:52 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 30 Jan 2018 11:17:52 -0600 Subject: [openstack-dev] [nova][ceilometer] versioned notifications coverage In-Reply-To: References: Message-ID: <33d09c5a-ecf9-3933-6551-ef8cff8d5ae0@gmail.com>

On 1/30/2018 9:37 AM, Mathieu Gagné wrote: > So I just discovered this documentation. It's awesome! Great job to > everyone involved! > Now I wish we had the same in all projects. =)

Maybe it's worth putting something into the community goals etherpad: https://etherpad.openstack.org/p/community-goals At this point, I'd say it's probably premature to push across all projects since we don't have another service consuming nova's versioned notifications yet. Searchlight was working on it, but then I think that got stalled. Maybe after the Telemetry team starts looking at it to see how things work for them from a consumer / client standpoint. -- Thanks, Matt

From harlowja at fastmail.com Tue Jan 30 17:34:05 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Tue, 30 Jan 2018 09:34:05 -0800 Subject: [openstack-dev] [kolla] Policy regarding template customisation In-Reply-To: References: Message-ID: <5A70AC8D.4050407@fastmail.com>

I'm ok with #2, Though I would like to show an alternative that we have been experimenting with that avoids the whole need for a globals.yml and such files in the first place (and feels more naturally in line with how ansible works IMHO). So short explanation first; we have this yaml format that describes all of our clouds and their settings and such (and which servers belong in which cloud and so on and so forth).
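(For readers who haven't used the mechanism this relies on, described just below: an ansible dynamic inventory source is simply an executable that prints inventory JSON when invoked with --list. A minimal hedged sketch of such a shim follows; the URL, endpoint, and data layout are entirely invented for illustration, not Joshua's actual code.)

    #!/usr/bin/env python
    # Hedged sketch of a dynamic inventory shim: fetch pre-rendered
    # inventory JSON from a (hypothetical) REST endpoint and hand it
    # to ansible unchanged.
    import json
    import sys

    import requests

    INVENTORY_URL = 'http://clouds.example.com/clouds/cloud1/inventory'

    def main():
        if len(sys.argv) > 1 and sys.argv[1] == '--list':
            # Expected shape: {"group": {"hosts": [...], "vars": {...}},
            #                  "_meta": {"hostvars": {...}}}
            inventory = requests.get(INVENTORY_URL).json()
        else:
            # --host <name> may return an empty dict when _meta/hostvars
            # is already included in the --list output.
            inventory = {}
        json.dump(inventory, sys.stdout)

    if __name__ == '__main__':
        main()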
We have then set up a REST server (a small gunicorn-based one) that renders/serves this format into other formats.

One of those other formats is one that is compatible with ansible's concept of dynamic inventory [1], and that is the one we are trying to send into kolla-ansible to get it to configure all the things (via typical mechanisms such as hostvars and groupvars).

An example of this rendering: https://gist.github.com/harlowja/9d7b57571a2290c315fc9a4bf2957dac (this is dynamically generated from the other format, which is git version controlled...).

The goal here is that we can just render all the needed variables and such for kolla-ansible (on a per-host basis if we have to) and avoid the need for having a special globals.yml (per-cloud/environment) and per-host special files in the first place.

Was this kind of approach ever thought of? Perhaps I can go into more detail if it seems like an approach others may want to follow....

[1]: http://docs.ansible.com/ansible/latest/intro_dynamic_inventory.html

Paul Bourke wrote: > Hi all, > > I'd like to revisit our policy of not templating everything in > kolla-ansible's template files. This is a policy that was set in place > very early on in kolla-ansible's development, but I'm concerned we > haven't been very consistent with it. This leads to confusion for > contributors and operators - "should I template this and submit a patch, > or do I need to start using my own config files?". > > The docs[0] are currently clear: > > "The Kolla upstream community does not want to place key/value pairs in > the Ansible playbook configuration options that are not essential to > obtaining a functional deployment." > > In practice though our templates contain many options that are not > necessary, and plenty of patches have merged that, while very useful to > operators, are not necessary to an 'out of the box' deployment. > > So I'd like us to revisit the questions: > > 1) Is kolla-ansible attempting to be a 'batteries included' tool, which > caters to operators via key/value config options? > > 2) Or, is it to be a solid reference implementation, where any degree of > customisation implies a clear 'bring your own configs' type policy. > > If 1), then we should potentially: > > * Update our docs to remove the referenced paragraph > * Look at reorganising files like globals.yml into something more > maintainable. > > If 2), > > * We should make it clear to reviewers that patches templating options > that are non-essential should not be accepted. > * Encourage patches to strip down existing config files to an absolute > minimum. > * Make this policy more clear in docs / templates to avoid frustration > on the part of operators. > > Thoughts?
> > Thanks, > -Paul > > [0] > https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From steve.mclellan at hpe.com Tue Jan 30 17:39:08 2018 From: steve.mclellan at hpe.com (McLellan, Steven) Date: Tue, 30 Jan 2018 17:39:08 +0000 Subject: [openstack-dev] [nova][ceilometer] versioned notifications coverage In-Reply-To: <33d09c5a-ecf9-3933-6551-ef8cff8d5ae0@gmail.com> References: <33d09c5a-ecf9-3933-6551-ef8cff8d5ae0@gmail.com> Message-ID: <41CF0A53-2DF0-40E6-A078-ECBF40FF679F@hpe.com> On 1/30/18, 11:17 AM, "Matt Riedemann" wrote: On 1/30/2018 9:37 AM, Mathieu Gagné wrote: > So I just discovered this documentation. It's awesome! Great job to > all the ones involved! > Now I wish we had the same in all projects. =) Maybe it's worth putting something into the community goals etherpad: https://etherpad.openstack.org/p/community-goals At this point, I'd say it's probably premature to push across all projects since we don't have another service consuming nova's versioned notifications yet. Searchlight was working on it, but then I think that got stalled. Maybe after the Telemetry team starts looking at it to see how things work for them from a consumer / client standpoint. -- It did stall in searchlight (partly through lack of interest and partly because it was tracking a moving target), but it was/is very close and in general they work fine. We didn't get to the point of supporting lots of different versions, though, and I was a bit worried about branching hell if ever different versions introduced non-additive changes. Patch is https://review.openstack.org/#/c/453352/ if anyone's interested (it's a little complicated because it supports versioned and unversioned). Steve From mriedemos at gmail.com Tue Jan 30 17:44:34 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 30 Jan 2018 11:44:34 -0600 Subject: [openstack-dev] [nova][ceilometer] versioned notifications coverage In-Reply-To: <33d09c5a-ecf9-3933-6551-ef8cff8d5ae0@gmail.com> References: <33d09c5a-ecf9-3933-6551-ef8cff8d5ae0@gmail.com> Message-ID: <8123ec77-9f1d-af21-2fdf-7012c8b03a8f@gmail.com> On 1/30/2018 11:17 AM, Matt Riedemann wrote: > Maybe it's worth putting something into the community goals etherpad: > > https://etherpad.openstack.org/p/community-goals I added the "Convert from legacy to versioned notifications" entry to keep the idea for the future. -- Thanks, Matt From simple_hlw at 163.com Tue Jan 30 17:54:04 2018 From: simple_hlw at 163.com (We We) Date: Wed, 31 Jan 2018 01:54:04 +0800 Subject: [openstack-dev] [requirement][cyborg]FFE - pyspdk requirement dependency In-Reply-To: <53EADDD3-8A86-445F-A5D9-F5401ABB5309@163.com> References: <53EADDD3-8A86-445F-A5D9-F5401ABB5309@163.com> Message-ID: <25D76CB6-7EDE-491E-ADAB-6FD4B5B56DAC@163.com> > Hi, > I have modified and resubmitted pyspdk to the pypi. Please check it. 
> Thx, > Helloway

> On 30 January 2018, at 12:52, We We > wrote: > > Hi, > > The pyspdk is an important tool library [1] which supports the Cyborg SPDK driver [2] to manage the backend SPDK-based app, so we need to upload pyspdk to PyPI [3] and then append a 'pyspdk>=0.0.1' item to 'OpenStack/Cyborg/requirements.txt', so that the SPDK driver can be built correctly when zuul runs. However, it's not what we thought it would be: if we want to add the new requirement, we need support from upstream OpenStack/requirements [4] to append the 'pyspdk>=0.0.1' item. > > I'm sorry for proposing the request so late. Please help. > > > > [1] https://review.gerrithub.io/#/c/379741/ > [2] https://review.openstack.org/#/c/538164/11 > [3] https://pypi.python.org/pypi/pyspdk/0.0.1 > [4] https://github.com/openstack/requirements > > > Regards, > Helloway >

From cdent+os at anticdent.org Tue Jan 30 18:34:30 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 30 Jan 2018 18:34:30 +0000 (GMT) Subject: [openstack-dev] [tc] [all] TC Report 18-05 Message-ID:

Linkified: https://anticdent.org/tc-report-18-05.html

Your author has been rather ill, so this week's TC Report will be a bit abridged and mostly links. I'll try to return with more robust commentary next week.

## RDO Test Days

dmsimard showed up in `#openstack-tc` channel to [provide some details](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-24.log.html#t2018-01-24T16:32:11) on forthcoming RDO Test Days.

## More on the Goals

[Conversations](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-25.log.html#t2018-01-25T15:48:50) [continue](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-30.log.html#t2018-01-30T09:03:31) about choosing OpenStack goals. One issue raised is whether we have sufficient insight into how people are really using and deploying OpenStack to be able to prioritize goals.

## Project Inception

smcginnis [pointed out](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-29.log.html#t2018-01-29T17:25:25) a governance discussion in the CNCF about [project inception](https://github.com/cncf/toc/issues/85) that he thought might be of interest to the TC. Discussion ensued around how the OpenStack ecosystem differs from the CNCF and as a result the need, or lack thereof, for help to bootstrap projects is different.

## PTG Scheduling

The PTG is coming and with it [discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-30.log.html#t2018-01-30T09:22:44) of how best to make room for all the conversations that need to happen, including things that come up at the last minute. New this time will be a more formal structuring of [post-lunch presentations](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-30.log.html#t2018-01-30T09:38:17) to do theme-setting and information sharing.

## Board Meeting at the PTG

Monday there was [discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-29.log.html#t2018-01-29T16:06:18) about the OpenStack Foundation [Board Meeting](https://wiki.openstack.org/wiki/Governance/Foundation/26Feb2018BoardMeeting) overlapping with the first day of the PTG. This follows [discussion in email](http://lists.openstack.org/pipermail/foundation/2018-January/002558.html).
## Today's Board Meeting

I attended [today's Board Meeting](https://wiki.openstack.org/wiki/Governance/Foundation/30Jan2018BoardMeeting) but it seems that according to the [transparency policy](http://www.openstack.org/legal/transparency-policy/) I can't comment:

> No commenting on Board meeting contents and decisions until > Executive Director publishes a meeting summary

That doesn't sound like transparency to me, but I assume there must be reasons.

-- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent

From inc007 at gmail.com Tue Jan 30 18:38:13 2018 From: inc007 at gmail.com (Michał Jastrzębski) Date: Tue, 30 Jan 2018 10:38:13 -0800 Subject: [openstack-dev] [kolla] Policy regarding template customisation In-Reply-To: <5A70AC8D.4050407@fastmail.com> References: <5A70AC8D.4050407@fastmail.com> Message-ID:

On 30 January 2018 at 09:34, Joshua Harlow wrote: > I'm ok with #2, > > Though I would like to show an alternative that we have been experimenting > with that avoids the whole need for a globals.yml and such files in the > first place (and feels more naturally in line with how ansible works IMHO). > > So short explanation first; we have this yaml format that describes all of > our clouds and their settings and such (and which servers belong in which > cloud and so on and so forth). We have then set up a REST server (a small > gunicorn-based one) that renders/serves this format into other formats. > > One of those other formats is one that is compatible with ansible's concept > of dynamic inventory [1] and that is the one we are trying to send into > kolla-ansible to get it to configure all the things (via typical mechanisms > such as hostvars and groupvars). > > An example of this rendering: > > https://gist.github.com/harlowja/9d7b57571a2290c315fc9a4bf2957dac (this is > dynamically generated from the other format, which is git version > controlled...). > > The goal here is that we can just render all the needed variables and such > for kolla-ansible (on a per-host basis if we have to) and avoid the need for > having a special globals.yml (per-cloud/environment) and per-host special > files in the first place. > > Was this kind of approach ever thought of?

Well, that totally works :) I routinely use inventory to override parts of globals (different iface per node). You could have an [all:vars] section in inventory and set every variable usually set in globals there. However, I think the issue here is about files in /etc/kolla/config - i.e., config overrides. I think one potential solution would be to have some sort of ansible task that would translate ansible vars to ini format and lay down files in /etc/kolla/config, but I think that's beyond the scope of Kolla-Ansible.

> > Perhaps I can go into more detail if it seems like an approach others may want to > follow.... > > [1]: http://docs.ansible.com/ansible/latest/intro_dynamic_inventory.html > > > Paul Bourke wrote: >> >> Hi all, >> >> I'd like to revisit our policy of not templating everything in >> kolla-ansible's template files. This is a policy that was set in place >> very early on in kolla-ansible's development, but I'm concerned we >> haven't been very consistent with it. This leads to confusion for >> contributors and operators - "should I template this and submit a patch, >> or do I need to start using my own config files?".
>> The docs[0] are currently clear: >> >> "The Kolla upstream community does not want to place key/value pairs in >> the Ansible playbook configuration options that are not essential to >> obtaining a functional deployment." >> >> In practice though our templates contain many options that are not >> necessary, and plenty of patches have merged that, while very useful to >> operators, are not necessary to an 'out of the box' deployment. >> >> So I'd like us to revisit the questions: >> >> 1) Is kolla-ansible attempting to be a 'batteries included' tool, which >> caters to operators via key/value config options? >> >> 2) Or, is it to be a solid reference implementation, where any degree of >> customisation implies a clear 'bring your own configs' type policy. >> >> If 1), then we should potentially: >> >> * Update our docs to remove the referenced paragraph >> * Look at reorganising files like globals.yml into something more >> maintainable. >> >> If 2), >> >> * We should make it clear to reviewers that patches templating options >> that are non-essential should not be accepted. >> * Encourage patches to strip down existing config files to an absolute >> minimum. >> * Make this policy more clear in docs / templates to avoid frustration >> on the part of operators. >> >> Thoughts? >> >> Thanks, >> -Paul >> >> [0] >> https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From pshchelokovskyy at mirantis.com Tue Jan 30 18:39:11 2018 From: pshchelokovskyy at mirantis.com (Pavlo Shchelokovskyy) Date: Tue, 30 Jan 2018 20:39:11 +0200 Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG? In-Reply-To: References: Message-ID:

+1 to Jim, I'm specifically interested in app creds and RBAC, as I'd like to find a way to pass some API access credentials to the ironic deploy ramdisk, and it seems one of those could help. Let's discuss :) Cheers,

On Tue, Jan 30, 2018 at 6:03 PM, Jim Rollenhagen wrote: > On Tue, Jan 30, 2018 at 10:33 AM, Colleen Murphy > wrote: >> At the last PTG we had some time on Monday and Tuesday for >> cross-project discussions related to baremetal and VM management. We >> don't currently have that on the schedule for this PTG. There is still >> some free time available that we can ask for[1]. Should we try to >> schedule some time for this? >> > I'd attend for the topics you list below, FWIW. > >> >> From a keystone perspective, some things we'd like to talk about with >> the BM/VM teams are: >> >> - Unified limits[2]: we now have a basic REST API for registering >> limits in keystone. Next steps are building out libraries that can >> consume this API and calculate quota usage and limit allocation, and >> developing models for quotas in project hierarchies. Input from other >> projects is essential here.
>> - RBAC: we've introduced "system scope"[3] to fix the admin-ness >> problem, and we'd like to guide other projects through the migration. >> - Application credentials[4]: the main part of this work is largely >> done; next steps are implementing better access control for it, which >> is largely just a keystone team problem, but we could also use this >> time for feedback on the implementation so far >> >> There are likely some non-keystone-related things that might be at home >> in a dedicated BM/VM room too. Do we want to have a dedicated day or >> two for these projects? Or perhaps not dedicated days, but >> planned-in-advance meeting time? Or should we wait and schedule it >> ad-hoc if we feel like we need it? >> > There's always plenty to discuss between nova and ironic, but we usually > just schedule those topics somewhat ad-hoc. Never opposed to some > dedicated time if folks will show up, though. :) > > // jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >

-- Dr. Pavlo Shchelokovskyy Senior Software Engineer Mirantis Inc www.mirantis.com

From doug at doughellmann.com Tue Jan 30 18:43:24 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 30 Jan 2018 13:43:24 -0500 Subject: [openstack-dev] [requirement][cyborg]FFE - pyspdk requirement dependency In-Reply-To: <25D76CB6-7EDE-491E-ADAB-6FD4B5B56DAC@163.com> References: <53EADDD3-8A86-445F-A5D9-F5401ABB5309@163.com> <25D76CB6-7EDE-491E-ADAB-6FD4B5B56DAC@163.com> Message-ID: <1517337778-sup-8267@lrrr.local>

Excerpts from We We's message of 2018-01-31 01:54:04 +0800: > > Hi, > > I have modified and resubmitted pyspdk to PyPI. Please check it. > > Thx, > > Helloway

Is there a public source repository for the library somewhere?

> > On 30 January 2018, at 12:52, We We > wrote: > > > > Hi, > > The pyspdk is an important tool library [1] which supports the Cyborg SPDK driver [2] to manage the backend SPDK-based app, so we need to upload pyspdk to PyPI [3] and then append a 'pyspdk>=0.0.1' item to 'OpenStack/Cyborg/requirements.txt', so that the SPDK driver can be built correctly when zuul runs. However, it's not what we thought it would be: if we want to add the new requirement, we need support from upstream OpenStack/requirements [4] to append the 'pyspdk>=0.0.1' item. > > > > I'm sorry for proposing the request so late. Please help.
> > > > > > [1] https://review.gerrithub.io/#/c/379741/ > > [2] https://review.openstack.org/#/c/538164/11 > > [3] https://pypi.python.org/pypi/pyspdk/0.0.1 > > [4] https://github.com/openstack/requirements > > > > > > Regards, > > Helloway > > From gr at ham.ie Tue Jan 30 18:53:40 2018 From: gr at ham.ie (Graham Hayes) Date: Tue, 30 Jan 2018 18:53:40 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-05 In-Reply-To: References: Message-ID: <696c009f-005e-8aa7-7363-696fccac49fd@ham.ie> On 30/01/18 18:34, Chris Dent wrote: > > ## Today's Board Meeting > > I attended [today's Board > Meeting](https://wiki.openstack.org/wiki/Governance/Foundation/30Jan2018BoardMeeting) > > but it seems that according to the [transparency > policy](http://www.openstack.org/legal/transparency-policy/) I can't > comment: > >>  No commenting on Board meeting contents and decisions until >>  Executive Director publishes a meeting summary > > That doesn't sound like transparency to me, but I assume there must be > reasons. I was under the assumption that this only applied to board members, but I am open to correction. Can someone on legal-discuss clarify? > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From gr at ham.ie Tue Jan 30 19:00:54 2018 From: gr at ham.ie (Graham Hayes) Date: Tue, 30 Jan 2018 19:00:54 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-05 In-Reply-To: References: Message-ID: <76dfd081-973e-7877-36d0-411b66822738@ham.ie> On 30/01/18 18:34, Chris Dent wrote: > > Linkified: https://anticdent.org/tc-report-18-05.html > > Your author has been rather ill, so this week's TC Report will be a > bit abridged and mostly links. I'll try to return with more robust > commentary next week. > > ## RDO Test Days > > dmsimard showed up in `#openstack-tc` channel to [provide some > details](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-24.log.html#t2018-01-24T16:32:11) > > on forthcoming RDO Test Days. > > ## More on the Goals > > [Conversations](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-25.log.html#t2018-01-25T15:48:50) > > [continue](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-30.log.html#t2018-01-30T09:03:31) > > about choosing OpenStack goals. One issue raised is whether we have > sufficient insight into how people are really using and deploying > OpenStack to be able to prioritize goals. > > ## Project Inception > > smcginnis [pointed > out](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-29.log.html#t2018-01-29T17:25:25) > > a governance discussion in the CNCF about [project > inception](https://github.com/cncf/toc/issues/85) that he thought > might be of interest to the TC. Discussion ensued around how the > OpenStack ecosystem differs from the CNCF and as a result the need, or > lack thereof, for help to bootstrap projects is different. 
> ## PTG Scheduling
>
> The PTG is coming and with it
> [discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-30.log.html#t2018-01-30T09:22:44)
> of how best to make room for all the conversations that need to
> happen, including things that come up at the last minute. New this
> time will be a more formal structuring of [post-lunch
> presentations](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-30.log.html#t2018-01-30T09:38:17)
> to do theme-setting and information sharing.
>
> ## Board Meeting at the PTG
>
> Monday there was
> [discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-29.log.html#t2018-01-29T16:06:18)
> about the OpenStack Foundation [Board
> Meeting](https://wiki.openstack.org/wiki/Governance/Foundation/26Feb2018BoardMeeting)
> overlapping with the first day of the PTG. This follows [discussion in
> email](http://lists.openstack.org/pipermail/foundation/2018-January/002558.html).

There was also a suggestion [1] that the community / TC put an item on the F2F meeting agenda, with an associated etherpad [2]:

1 - http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-29.log.html#t2018-01-29T16:34:43
2 - https://etherpad.openstack.org/p/tc-message-to-board-dublin-2018

> ## Today's Board Meeting
>
> I attended [today's Board
> Meeting](https://wiki.openstack.org/wiki/Governance/Foundation/30Jan2018BoardMeeting)
> but it seems that according to the [transparency
> policy](http://www.openstack.org/legal/transparency-policy/) I can't
> comment:
>
>>  No commenting on Board meeting contents and decisions until
>>  Executive Director publishes a meeting summary
>
> That doesn't sound like transparency to me, but I assume there must be
> reasons.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: OpenPGP digital signature
URL: 

From chris.friesen at windriver.com  Tue Jan 30 19:04:06 2018
From: chris.friesen at windriver.com (Chris Friesen)
Date: Tue, 30 Jan 2018 13:04:06 -0600
Subject: [openstack-dev] [nova][osc] How to deal with add/remove fixed/floating CLIs after novaclient 10.0.0?
In-Reply-To: <6d1f4bb0-f7c9-cc32-9339-55701eee0b7c@gmail.com>
References: <6d1f4bb0-f7c9-cc32-9339-55701eee0b7c@gmail.com>
Message-ID: <5A70C1A6.7030905@windriver.com>

On 01/30/2018 09:15 AM, Matt Riedemann wrote:
> The 10.0.0 release of python-novaclient dropped some deprecated CLIs and python
> API bindings for the server actions to add/remove fixed and floating IPs:
>
> https://docs.openstack.org/releasenotes/python-novaclient/queens.html#id2
>
> python-openstackclient was using some of those python API bindings from
> novaclient which now no longer work:
>
> https://bugs.launchpad.net/python-openstackclient/+bug/1745795

Is there a plan going forward to ensure that python-novaclient and OSC are on the same page as far as deprecating CLIs and API bindings?
Chris

From mordred at inaugust.com  Tue Jan 30 19:06:51 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Tue, 30 Jan 2018 13:06:51 -0600
Subject: [openstack-dev] [legal-discuss] [tc] [all] TC Report 18-05
In-Reply-To: 
References: <696c009f-005e-8aa7-7363-696fccac49fd@ham.ie>
Message-ID: <73d762de-b7b0-df39-4bba-2be025157d19@inaugust.com>

On 01/30/2018 01:05 PM, Monty Taylor wrote:
> On 01/30/2018 12:53 PM, Graham Hayes wrote:
>> On 30/01/18 18:34, Chris Dent wrote:
>>
>>> ## Today's Board Meeting
>>>
>>> I attended [today's Board
>>> Meeting](https://wiki.openstack.org/wiki/Governance/Foundation/30Jan2018BoardMeeting)
>>> but it seems that according to the [transparency
>>> policy](http://www.openstack.org/legal/transparency-policy/) I can't
>>> comment:
>>>
>>>>   No commenting on Board meeting contents and decisions until
>>>>   Executive Director publishes a meeting summary
>>>
>>> That doesn't sound like transparency to me, but I assume there must be
>>> reasons.
>>
>> I was under the assumption that this only applied to board members, but
>> I am open to correction.
>>
>> Can someone on legal-discuss clarify?
>
> That is correct. As officers information published from Board members
> could be construed as "official" communication. The approach we've taken
> on this topic is that Board members refrain from publishing their
> thoughts/feedback/take/opinion/summary of a meeting until after Jonathan
> has published an 'official' summary of the meeting, at which point,
> since there is an official document we're free to comment as we desire.
>
> There is nothing preventing anyone who is not a board member from
> publishing a summary or thoughts or live-tweeting the whole thing.
>
> markmc has historically written excellent summaries and has posted them
> as soon as Jonathan's have gone out.

Gah. Didn't reply originally to openstack-dev.

Monty

From eumel at arcor.de  Tue Jan 30 19:07:53 2018
From: eumel at arcor.de (Frank Kloeker)
Date: Tue, 30 Jan 2018 20:07:53 +0100
Subject: [openstack-dev] [I18n][PTL] PTL nomination for I18n
Message-ID: <489844e70f170a35b658e860671ada5d@arcor.de>

This is my announcement for re-candidacy as I18n PTL in Rocky Cycle.

The time from the last cycle passed very fast. I had to manage all the things that a PTL expects. But we documented everything very well and I always had the full support of the team. I asked the team, and they will continue to support me, which is why I take the chance again.
This is the point to say thank you to all: we have achieved many things, and we are a great community!

Now it's time to finish things:

1. Zanata upgrade. We are in the middle of the upgrade process. The dev server is successfully upgraded and the new Zanata version fits all our requirements to automate things more and more.
Now we are in the hot release phase and when it's over, the live upgrade can start.

2. Translation check site. A little bit out of scope in Queens release because of lack of resources. We'll try this again in Rocky.

3. Acquire more people to the team. That will be the main part of my work as PTL in Rocky. We've won 3 new language teams in the last cycle, and OpenStack can now serve users in Indian, Turkish and Esperanto. There is even more potential for strengthening existing teams or creating new ones.
For this we have great OpenStack events in Europe this year, at least the Fall Summit in Berlin. We plan workshops and presentations.

The work of the translation team is also becoming more colorful.
We have project documentation translation in the order books, translation user survey and white papers for working groups. We are well prepared, but we also look to the future, for example how AI-programming can support us in the translation work. If the plan suits you, I look forward to your vote. Frank Email: eumel at arcor.de IRC: eumel8 Twitter: eumel_8 OpenStack Profile: https://www.openstack.org/community/members/profile/45058/frank-kloeker From harlowja at fastmail.com Tue Jan 30 19:20:16 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Tue, 30 Jan 2018 11:20:16 -0800 Subject: [openstack-dev] [kolla] Policy regarding template customisation In-Reply-To: References: <5A70AC8D.4050407@fastmail.com> Message-ID: <5A70C570.7000103@fastmail.com> Yup, I am hoping to avoid all of these kinds of customizations if possible... But if we have to we'll probably have to make something like that. Or we'll just have to render out files for each host and serve it from the same REST endpoint, ya da ya da... -Josh Michał Jastrzębski wrote: > However I think issue > here is about files in /etc/kolla/config - so config overrides. > > I think one potential solution would be to have some sort of ansible > task that would translate ansible vars to ini format and lay down > files in /etc/kolla/config, but I think that's beyond scope of > Kolla-Ansible. From mriedemos at gmail.com Tue Jan 30 19:46:49 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 30 Jan 2018 13:46:49 -0600 Subject: [openstack-dev] [nova][osc] How to deal with add/remove fixed/floating CLIs after novaclient 10.0.0? In-Reply-To: <5A70C1A6.7030905@windriver.com> References: <6d1f4bb0-f7c9-cc32-9339-55701eee0b7c@gmail.com> <5A70C1A6.7030905@windriver.com> Message-ID: On 1/30/2018 1:04 PM, Chris Friesen wrote: > Is there a plan going forward to ensure that python-novaclient and OSC > are on the same page as far as deprecating CLIs and API bindings? There is no official plan, no. The 10.0.0 novaclient release came out late. If there was more lead time, or if I would have thought of it, we could have done an audit on what other things within OpenStack were using the deprecated CLIs/python API bindings and cleaned those up first. But that didn't happen. At this point, novaclient 10.0.0 probably won't get into upper-constraints for Queens: https://review.openstack.org/#/c/538070/ And I'm OK with that, nothing in Queens requires >=10.0.0. We can clean things up in Rocky. -- Thanks, Matt From cdent+os at anticdent.org Tue Jan 30 19:48:34 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 30 Jan 2018 19:48:34 +0000 (GMT) Subject: [openstack-dev] [legal-discuss] [tc] [all] TC Report 18-05 In-Reply-To: <73d762de-b7b0-df39-4bba-2be025157d19@inaugust.com> References: <696c009f-005e-8aa7-7363-696fccac49fd@ham.ie> <73d762de-b7b0-df39-4bba-2be025157d19@inaugust.com> Message-ID: On Tue, 30 Jan 2018, Monty Taylor wrote: > On 01/30/2018 01:05 PM, Monty Taylor wrote: >> That is correct. As officers information published from Board members could >> be construed as "official" communication. The approach we've taken on this >> topic is that Board members refrain from publishing their >> thoughts/feedback/take/opinion/summary of a meeting until after Jonathan >> has published an 'official' summary of the meeting, at which point, since >> there is an official document we're free to comment as we desire. 
Thanks, I assumed/hoped it was something like that but the blurb on the wiki page[1] was ambiguous and I thought it better to point out the ambiguity and see where that took us. Somewhere good. So that's nice. [1] https://wiki.openstack.org/wiki/Governance/Foundation/30Jan2018BoardMeeting -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From ben at swartzlander.org Tue Jan 30 19:56:49 2018 From: ben at swartzlander.org (Ben Swartzlander) Date: Tue, 30 Jan 2018 14:56:49 -0500 Subject: [openstack-dev] [manila][ptl] Stepping down as Manila PTL Message-ID: After leading the Manila project for 5 years, it's time for me to step down. I feel incredibly proud of the project and the team that's worked to bring Manila from an idea at the Folsom design summit to the successful project is it today. Manila has reached a point of stability where I feel like it doesn't need me to spend all my time pushing it forward, and I can change my role to contributor and let someone else lead. I'm thankful for all the support the project has received from contributors and from the larger OpenStack community. -Ben Swartzlander From alexandre.van-kempen at inria.fr Tue Jan 30 20:31:37 2018 From: alexandre.van-kempen at inria.fr (avankemp) Date: Tue, 30 Jan 2018 21:31:37 +0100 Subject: [openstack-dev] [FEMDC] Wed. 31 Jan - IRC Meeting 15:00 UTC Message-ID: <63A644FC-5712-40CA-967D-B7DC07A235F9@inria.fr> Dear all, A gentle reminder for our tomorrow meeting at 15:00 UTC A draft of the agenda is available at line 141, you are very welcome to add any item. https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2018 Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Jan 30 20:43:13 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 30 Jan 2018 21:43:13 +0100 Subject: [openstack-dev] [os-upstream-institute][ptg] Schedule Upstream Institute working sessions for the PTG Message-ID: <8B712F52-F349-459A-A425-DE793A0A9B67@gmail.com> Hi Training Team, Good news, we are planning to have working sessions at the PTG! :) The challenging part is that all of us are involved in several activities which makes it difficult to schedule these. We are looking into co-locating some parts of the discussions with related sessions, like First Contact SIG and Contributor Portal discussions. On top of these we need to get rolling on re-designing the training including the flow and the exercises, which requires its own set of work sessions. In order to get most of the team in the same room I created a Doodle poll[1] capturing mornings and afternoons to find the best options. Please mark ALL the time intervals that might work for you! We will also try to make it as a virtual working session with remote participation so please vote even if you cannot make it to the PTG in person. Please let me know if you have any questions. Thanks, Ildikó (IRC: ildikov) [1] https://doodle.com/poll/y4cmhe2ur2vqkhae From sean.mcginnis at gmx.com Tue Jan 30 22:37:05 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 30 Jan 2018 16:37:05 -0600 Subject: [openstack-dev] [heat] [Release-job-failures] Tag of openstack/python-heatclient failed Message-ID: <20180130223705.GA12806@sm-xps> We had a release job failure for python-heatclient. See below for details. 
This appears to be from the fact that this repo is configured with the release-notes-job [1], and it does have a couple of notes, but the release note infrastructure has not been set up for the repo. This is safe to ignore - there just won't be any published release notes for the project. But either the release note set up needs to be completed in the repo, or the release-notes-jobs template needs to be removed for the repo so we don't get errors going forward. If set up is completed, the next release will get the release notes published. [1] https://github.com/openstack-infra/project-config/blob/2c4c10e7f2c4b8fbef77b1859864c59f4909f0f2/zuul.d/projects.yaml#L13592 ----- Forwarded message from zuul at openstack.org ----- Date: Tue, 30 Jan 2018 22:29:48 +0000 From: zuul at openstack.org To: release-job-failures at lists.openstack.org Subject: [Release-job-failures] Tag of openstack/python-heatclient failed Reply-To: openstack-dev at lists.openstack.org Build failed. - publish-openstack-releasenotes http://logs.openstack.org/1b/1b86b91e2bc0e6dc3556ee7153e59e1a6ec6e655/tag/publish-openstack-releasenotes/606c36a/ : FAILURE in 4m 22s _______________________________________________ Release-job-failures mailing list Release-job-failures at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures ----- End forwarded message ----- From sean.mcginnis at gmx.com Tue Jan 30 22:40:39 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 30 Jan 2018 16:40:39 -0600 Subject: [openstack-dev] [blazar] [Release-job-failures] Release of openstack/blazar-nova failed Message-ID: <20180130224038.GB12806@sm-xps> There was a failure with releasing blazar-nova. It would appear this has a dependency on nova, but nova is not in the required-projects. This will need to be corrected before we can complete the blazar-nova release. http://logs.openstack.org/ed/ed4afab27ae0a030a775059936da80f7f01eb2f6/release/release-openstack-python/821c969/job-output.txt.gz#_2018-01-30_22_06_50_103665 ----- Forwarded message from zuul at openstack.org ----- Date: Tue, 30 Jan 2018 22:11:42 +0000 From: zuul at openstack.org To: release-job-failures at lists.openstack.org Subject: [Release-job-failures] Release of openstack/blazar-nova failed Reply-To: openstack-dev at lists.openstack.org Build failed. - release-openstack-python http://logs.openstack.org/ed/ed4afab27ae0a030a775059936da80f7f01eb2f6/release/release-openstack-python/821c969/ : FAILURE in 3m 45s - announce-release announce-release : SKIPPED - propose-update-constraints propose-update-constraints : SKIPPED _______________________________________________ Release-job-failures mailing list Release-job-failures at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures ----- End forwarded message ----- From sean.mcginnis at gmx.com Tue Jan 30 22:45:06 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 30 Jan 2018 16:45:06 -0600 Subject: [openstack-dev] [heat] [Release-job-failures] Tag of openstack/heat-agents failed Message-ID: <20180130224505.GC12806@sm-xps> We have another failure. Looks to be the same as the last one, caused by the release notes job being defined for the repo but the release note setup not being complete. 
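(Completing that setup amounts to adding reno's standard scaffolding. A rough sketch — the file names follow reno's conventions, details vary per project:

    $ reno new fix-release-notes-setup
    # creates releasenotes/notes/fix-release-notes-setup-<suffix>.yaml

    # releasenotes/notes/fix-release-notes-setup-<suffix>.yaml
    ---
    fixes:
      - |
        An example note; reno gathers these YAML files per branch.

plus a releasenotes/source/ Sphinx tree whose conf.py lists 'reno.sphinxext' in its extensions and whose pages pull the notes in with the '.. release-notes::' directive. Once that tree exists, publish-openstack-releasenotes has something to build.)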
----- Forwarded message from zuul at openstack.org ----- Date: Tue, 30 Jan 2018 22:43:14 +0000 From: zuul at openstack.org To: release-job-failures at lists.openstack.org Subject: [Release-job-failures] Tag of openstack/heat-agents failed Reply-To: openstack-dev at lists.openstack.org Build failed. - publish-openstack-releasenotes http://logs.openstack.org/26/267694b7d3537942ab270bc7e91df58a7b7a1073/tag/publish-openstack-releasenotes/72ac779/ : FAILURE in 5m 06s _______________________________________________ Release-job-failures mailing list Release-job-failures at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures ----- End forwarded message ----- From dirk at dmllr.de Tue Jan 30 22:47:17 2018 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Tue, 30 Jan 2018 23:47:17 +0100 Subject: [openstack-dev] [requirements][Blazar] FFE - add python-blazarclient in global-requirements In-Reply-To: <20180130155317.jdemeldbqjuhv27e@gentoo.org> References: <20180130155317.jdemeldbqjuhv27e@gentoo.org> Message-ID: 2018-01-30 16:53 GMT+01:00 Matthew Thode : > LGTM, +2 +2 From melwittt at gmail.com Tue Jan 30 23:59:00 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 30 Jan 2018 15:59:00 -0800 Subject: [openstack-dev] Race in FixedIP.associate_pool In-Reply-To: References: Message-ID: <72BDE5F0-112C-4C1F-87ED-69C508716C68@gmail.com> > On Jan 29, 2018, at 16:45, Arun SAG wrote: > > If anyone is running into db race while running database in > master-slave mode with async replication, The bug has been identified > and getting fixed here > https://bugs.launchpad.net/oslo.db/+bug/1746116 Thanks for your persistence in tracking down the problem and raising the bug. If you get a chance, do please test the proposed patch in your environment to help ensure there aren’t any loose ends left. Once the fix is merged, I think we should propose backports to stable/pike and stable/ocata, do .z releases for them, and bump oslo.db in upper-constraints.txt in the openstack/requirements repo for the stable/pike and stable/ocata branches. That way, operators running from stable can get the fix by upgrading their oslo.db packages to those .z releases. -melanie From emilien at redhat.com Wed Jan 31 01:09:49 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 30 Jan 2018 17:09:49 -0800 Subject: [openstack-dev] [tripleo] The Weekly Owl - 7th Edition Message-ID: Note: this is the seventh edition of a weekly update of what happens in TripleO. The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126509.html +---------------------------------+ | General announcements | +---------------------------------+ +--> Queens milestone 3 was released! We fixed more bugs than any other milestone in the past! +--> Next release will be Queens RC1, target is in ~1 month. +--> Focus is on finishing the blueprints that have granted FFE; High/Critical bugs; CI stabilization; Rocky planning. +------------------------------+ | Continuous Integration | +------------------------------+ +--> TripleO CI squad is proud to announce a new dashboard to track CI metrics: https://review.rdoproject.org/grafana/dashboard/db/tripleo-ci +--> Rover is John and ruck is Wes. Please let them know any new CI issue. +--> Master promotion is 6 days, Pike is 24 days and Ocata is 2 days. 
+--> Gate is currently in bad shape (a lot of resets), we're working on it. Please help us by staying tuned in case we ask to not 'recheck'. +--> We agreed that once OVB jobs running in RDO cloud become more stable, they'll vote in check (as third party). +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP +-------------+ | Upgrades | +-------------+ +--> New format for weekly meetings, checkout http://lists.openstack.org/pipermail/openstack-dev/2018-January/126611.html ! +--> Reviews are still needed on FFU, Queens upgrade workflow and undercloud backup. +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status and https://etherpad.openstack.org/p/tripleo-upgrade-squad-meeting +---------------+ | Containers | +---------------+ +--> Containerized undercloud: good discussions and progress on https://review.openstack.org/#/c/517444/ +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +--------------+ | Integration | +--------------+ +--> The squad is planning Rocky and also talking about testing ceph-ansible with tripleo! +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> CI jobs were optimized for stable branches +--> Team is planning work in Rocky +--> Good progress on Roles management +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> The squad needs review, please check the etherpad. +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> Routed networks support still ongoing. +--> Octavia work is mostly done, fixing a small issues for Queens. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-------------+ | Owl facts | +-------------+ The Minahassa Masked Owl is a medium-sized owl with short rounded wings and no ear-tufts. It is also known as the Minahassa Barn Owl or the Sulawesi Golden Owl. Voice: A screech similar to other Barn and Masked Owls. (RHIIIK RHIIK - does it sound real?) (source: https://www.owlpages.com/owls/species.php?s=150) Stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Wed Jan 31 04:17:29 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 31 Jan 2018 15:17:29 +1100 Subject: [openstack-dev] [nova][ironic] Tagging newton EOL Message-ID: <20180131041729.GC23143@thor.bakeyournoodle.com> Hi All, When we tagged newton EOL in October there were in-flight reviews for nova and ironic that needed to land before we could EOL them. That work completed but I dropped the ball. So can we tag those last 2 repos? As in October a member of the infra team needs to do this *or* I can be added to Project Bootstrappers[1] for long enough to do this. # EOL repos belonging to ironic eol_branch.sh -- stable/newton newton-eol openstack/ironic # EOL repos belonging to nova eol_branch.sh -- stable/newton newton-eol openstack/nova Yours Tony. [1] https://review.openstack.org/#/admin/groups/26,members -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From rosmaita.fossdev at gmail.com  Wed Jan 31 04:39:02 2018
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Tue, 30 Jan 2018 23:39:02 -0500
Subject: [openstack-dev] [glance] Q-3 milestone released
Message-ID: 

Glance 16.0.0.0b3 was released today. The tag was created at this commit:

http://git.openstack.org/cgit/openstack/glance/commit/?id=631add1ba849eac67ecb36ea4c90b57a300a96fa

which is dated Thu, 18 Jan 2018 12:09:11 +0000

As you'll notice, that was about 2 weeks ago. Which prompts the question: why did we wait so long to release it?

Q: Why that commit?
A: It's the last commit when the glance functional gate jobs were still working.

Q: Why didn't we release it 2 weeks ago?
A: The Q-3 release marks the feature freeze. Any code committed after that point should be either a release-critical bug or something with a Feature Freeze Exception. So you really don't want to release Q-3 early.

Q: What's the deal with the glance functional gate jobs?
A: There are two patches fixing them, but they have to go through the 'integrated' queue before they can merge, and we've had some bad luck over the past few days ... they've made it to the top of the queue to be processed when something bad happened and they had to start over. They're being processed again right now (keep your fingers crossed).

Q: What about my commits that show up after 631add1ba849eac67ecb36ea4c90b57a300a96fa?
A: I will be sending out FFEs or marking the bugs for RC1, as appropriate.

Glance cores, please do not merge any patches until after https://review.openstack.org/#/c/536630/ has merged.

thanks,
brian

From apolloliuhx at gmail.com  Wed Jan 31 05:14:45 2018
From: apolloliuhx at gmail.com (Hanxi Liu)
Date: Wed, 31 Jan 2018 13:14:45 +0800
Subject: [openstack-dev] [nova][ceilometer] versioned notifications coverage
In-Reply-To: <33d09c5a-ecf9-3933-6551-ef8cff8d5ae0@gmail.com>
References: <33d09c5a-ecf9-3933-6551-ef8cff8d5ae0@gmail.com>
Message-ID: 

On Wed, Jan 31, 2018 at 1:17 AM, Matt Riedemann wrote:
> On 1/30/2018 9:37 AM, Mathieu Gagné wrote:
>> So I just discovered this documentation. It's awesome! Great job to
>> all the ones involved!
>> Now I wish we had the same in all projects. =)
>
> Maybe it's worth putting something into the community goals etherpad:
>
> https://etherpad.openstack.org/p/community-goals
>
> At this point, I'd say it's probably premature to push across all projects
> since we don't have another service consuming nova's versioned
> notifications yet. Searchlight was working on it, but then I think that got
> stalled. Maybe after the Telemetry team starts looking at it to see how
> things work for them from a consumer / client standpoint.

It's great that Ceilometer would be the first consumer of versioned notifications. Can I take it as a trend that versioned notifications will replace unversioned notifications, even though versioned notifications may have only one consumer for a long time?

Ceilometer consumes unversioned notifications for now. So should we replace unversioned with versioned one by one, following the link[1], since we can't consume both without duplicating data? A wholesale transformation may risk losing data.
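(For readers who have not consumed versioned notifications before, a minimal listener might look like the sketch below — assuming nova's default 'versioned_notifications' topic and the nova_object payload envelope; the real wiring inside Ceilometer would differ:

    import oslo_messaging
    from oslo_config import cfg

    class Endpoint(object):
        # oslo.messaging calls info() for each INFO-level notification.
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # Versioned payloads are wrapped in a nova_object envelope.
            data = payload.get('nova_object.data', {})
            print(event_type, payload.get('nova_object.version'),
                  data.get('uuid'))

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='versioned_notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [Endpoint()], executor='threading')
    listener.start()
    listener.wait()

The envelope carries the schema version, which is what would let a consumer detect payload changes instead of guessing at dict keys.)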
[1] http://burndown.peermore.com/nova-notification/

--
Cheers,
Hanxi Liu (IRC: lhx_)

__________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pratapagoutham at gmail.com  Wed Jan 31 06:49:40 2018
From: pratapagoutham at gmail.com (Goutham Pratapa)
Date: Wed, 31 Jan 2018 12:19:40 +0530
Subject: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator
Message-ID: 

*Kingbird (The Multi-Region Orchestrator):*

We are proud to announce that Kingbird is not only a centralized quota and resource manager but also a multi-region orchestrator.

*Use-cases covered:*
- An admin can synchronize and periodically balance quotas across regions, and can have a global view of the quotas of all tenants across regions.
- A user can sync a resource or a group of resources from one region to another in a single go; a user can sync multiple key-pairs, images, and flavors from one region to another (a flavor can be synced only by an admin).
- Complete tempest test-coverage for all the scenarios/services rendered by Kingbird.
- A Horizon plugin so that users can access/view global limits.

*Our Road-map:*
-- Automation scripts for Kingbird in Ansible, Salt and Puppet.
-- Add SSL support to Kingbird
-- Resource management in Kingbird-dashboard.
-- Kingbird in a Docker container
-- Add Kingbird into Kolla.

We are looking for *contributors and ideas* which can enhance Kingbird and make Kingbird a one-stop solution for all multi-region problems.

*Stable Branches:*
*Kingbird-server: https://github.com/openstack/kingbird/tree/stable/queens *
*Python-Kingbird-client (0.2.1): https://github.com/openstack/python-kingbirdclient/tree/0.2.1 *

I would like to thank all the people who have helped us in achieving this milestone and guided us all throughout this journey :)

Thanks
Goutham Pratapa
PTL
OpenStack-Kingbird.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zzxwill at gmail.com  Wed Jan 31 07:05:03 2018
From: zzxwill at gmail.com (Will Zhou)
Date: Wed, 31 Jan 2018 07:05:03 +0000
Subject: [openstack-dev] [I18n][PTL] PTL nomination for I18n
In-Reply-To: <489844e70f170a35b658e860671ada5d@arcor.de>
References: <489844e70f170a35b658e860671ada5d@arcor.de>
Message-ID: 

+1

On Wednesday, 31 January 2018 at 3:08 AM, Frank Kloeker wrote:

> This is my announcement for re-candidacy as I18n PTL in Rocky Cycle.
>
> The time from the last cycle passed very fast. I had to manage all the
> things that a PTL expects. But we documented everything very well and I
> always had the full support of the team. I asked the team, and they will
> continue to support me, which is why I take the chance again.
> This is the point to say thank you to all: we have achieved many
> things, and we are a great community!
>
> Now it's time to finish things:
>
> 1. Zanata upgrade. We are in the middle of the upgrade process. The dev
> server is successfully upgraded and the new Zanata version fits all our
> requirements to automate things more and more.
> Now we are in the hot release phase and when it's over, the live
> upgrade can start.
>
> 2. Translation check site. A little bit out of scope in Queens release
> because of lack of resources. We'll try this again in Rocky.
>
> 3. Acquire more people to the team.
That will be the main part of my work
> as PTL in Rocky. We've won 3 new language teams in the last cycle, and
> OpenStack can now serve users in Indian, Turkish and Esperanto. There is
> even more potential for strengthening existing teams or creating new ones.
> For this we have great OpenStack events in Europe this year, at least
> the Fall Summit in Berlin. We plan workshops and presentations.
>
> The work of the translation team is also becoming more colorful. We have
> project documentation translation in the order books, translation user
> survey and white papers for working groups.
>
> We are well prepared, but we also look to the future, for example how
> AI-programming can support us in the translation work.
>
> If the plan suits you, I look forward to your vote.
>
> Frank
>
> Email: eumel at arcor.de
> IRC: eumel8
> Twitter: eumel_8
>
> OpenStack Profile:
> https://www.openstack.org/community/members/profile/45058/frank-kloeker
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
---------------------------------------------
周正喜
Mobile: 13701280947
WeChat: 472174291
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From liu.xuefeng1 at zte.com.cn  Wed Jan 31 07:58:21 2018
From: liu.xuefeng1 at zte.com.cn (liu.xuefeng1 at zte.com.cn)
Date: Wed, 31 Jan 2018 15:58:21 +0800 (CST)
Subject: [openstack-dev] [Senlin] [PTL] PTL nomination for Senlin
Message-ID: <201801311558218822444@zte.com.cn>

Hi all,

I'd like to announce my candidacy for the PTL role of the Senlin project for the Rocky cycle.

I began to contribute to the Senlin project in Mitaka and joined the team as a core reviewer in 2016.10. It is my pleasure to work with the great team to make this project better and better, and we will keep moving and look forward to pushing Senlin to the next level.

As a clustering service, we can already handle some resource types like nova server, heat stack, NFV VDU etc. from past cycles. We also have done a lot of great work in the Queens cycle, for example we finished the k8s-on-Senlin feature's demo[1][2][3][4]. And there is still much work to do in the future.

As PTL in the Rocky cycle, I'd like to focus on the following tasks:

* Promote the k8s-on-Senlin feature implementation and make it usable in NFV
  For example:
  - Add ability to do actions on cluster creation/deletion.
  - Add more network interfaces in drivers.
  - Add a kubernetes master profile, using kubeadm to set up one master node.
  - Add a kubernetes node profile, auto-retrieving kubernetes data from the
    master cluster.
* Improve the health policy to support more useful auto-healing scenarios
* Improve the LoadBalance policy when using the Octavia service driver
* Improve runtime data processing inside the Senlin server
* Better support for EDGE-Computing unattended operation use cases[5]
* A stronger team to take the Senlin project to its next level.

Again, it is my pleasure to work with such a great team.

Thanks
XueFeng Liu

[1] https://review.openstack.org/#/c/515321/
[2] https://v.qq.com/x/page/i05125sfonh.html
[3] https://v.qq.com/x/page/t0512vo6tw1.html
[4] https://v.qq.com/x/page/y0512ehqiiq.html
[5] https://www.openstack.org/videos/boston-2017/integration-of-enterprise-monitoring-product-senlin-and-mistral-for-auto-healing
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhipengh512 at gmail.com  Wed Jan 31 08:18:21 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Wed, 31 Jan 2018 16:18:21 +0800
Subject: [openstack-dev] [acceleration]Cyborg Weekly Team Meeting 2018.01.31
Message-ID: 

Hi Team,

Team meeting starting UTC1500 as usual at #openstack-cyborg. We will try to make hell freeze over today :)

--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From saverio.proto at switch.ch  Wed Jan 31 08:51:33 2018
From: saverio.proto at switch.ch (Saverio Proto)
Date: Wed, 31 Jan 2018 09:51:33 +0100
Subject: [openstack-dev] [Openstack-operators] LTS pragmatic example
In-Reply-To: <332af66b-320f-bda4-495f-870dd9e10349@gmail.com>
References: <20171114154658.mpnwfsn7uzmis2l4@redhat.com> <1510675891-sup-8327@lrrr.local> <4ef3b8ff-5374-f440-5595-79e1d33ce3bb@switch.ch> <332af66b-320f-bda4-495f-870dd9e10349@gmail.com>
Message-ID: <54c9afed-129b-914c-32f4-451dbdf41279@switch.ch>

Hello all,

I am again proposing a change due to operations experience. I am proposing a clean and simple cherry-pick to Ocata.

"it depends" works pretty badly as a policy for accepting patches.

Now I really don't understand what is the issue with the Stable Policy and this patch:

https://review.openstack.org/#/c/539439/

This is a UX problem. Horizon is giving the wrong information to the user.

I got this answer:
Ocata is the second phase of stable branches [1]. Only critical bugfixes and security patches are acceptable. I don't think it belongs to the category.

But merging a patch that changes a log file in Nova back to Newton was OKAY a few weeks ago.

I will not be able to be in person at the PTG, but please talk about this. People just give up upstreaming stuff like this.

thank you

Saverio

On 15.11.17 03:37, Matt Riedemann wrote:
> On 11/14/2017 10:58 AM, Davanum Srinivas wrote:
>> Let's focus our energy on the etherpad please
>>
>> https://etherpad.openstack.org/p/LTS-proposal
>>
>> On Wed, Nov 15, 2017 at 3:35 AM, Davanum Srinivas
>> wrote:
>>> Saverio,
>>>
>>> Please see this :
>>> https://docs.openstack.org/project-team-guide/stable-branches.html for
>>> current policies.
>>>
>>> On Wed, Nov 15, 2017 at 3:33 AM, Saverio Proto
>>> wrote:
>>>>> Which stable policy does that patch violate?  It's clearly a bug
>>>>> because the wrong information is being logged.  I suppose it goes
>>>>> against the string freeze rule? Except that we've stopped translating
>>>>> log messages so maybe we don't need to worry about that in this case,
>>>>> since it isn't an exception.
>>>>
>>> >>> On Wed, Nov 15, 2017 at 3:33 AM, Saverio Proto >>> wrote: >>>>> Which stable policy does that patch violate?  It's clearly a bug >>>>> because the wrong information is being logged.  I suppose it goes >>>>> against the string freeze rule? Except that we've stopped translating >>>>> log messages so maybe we don't need to worry about that in this case, >>>>> since it isn't an exception. >>>> >>>> Well, I also would like to understand more about stable policy >>>> violations. >>>> When I proposed such patches in the past for the release N-2 I have >>>> always got the answer: it is not a security issue so it will not be >>>> merged. >>>> >>>> This is a good example of how things have been working so far: >>>> >>>> https://review.openstack.org/#/q/677eb1c4160c08cfce2900495741f0ea15f566fa >>>> >>>> >>>> This cinder patch was merged in master. It was then merged in Mitaka. >>>> But it was not merged in Liberty just because "only security fixes" >>>> were >>>> allowed at that point. >>>> >>>> You can read that in the comments: >>>> https://review.openstack.org/#/c/306610/ >>>> >>>> Is this kind of things going to change after the discussion in Sydney ? >>>> The discussion is not enough ? what we need to get done then ? >>>> >>>> thank you >>>> >>>> Saverio >>>> >>>> >>>> __________________________________________________________________________ >>>> >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> -- >>> Davanum Srinivas :: https://twitter.com/dims >> >> >> > > Heh, I'm reading this thread after approving all of those patches. > > The answer as to whether it's appropriate or not, is "it depends". > Depends on the patch, depends on the age of the branch, etc. > > In this case, newton is in phase 3 so normally it's only security or > critical fixes allowed, but in this case it's so trivial and so > obviously wrong that I was OK with approving it just to get it in before > we end of life the branch. > > So, it depends. And because it depends, that's also why we don't > automate the backport of every fix made on master. Because guess what, > we also backport "fixes" that introduce regressions, and when you do > that to n-1 (Pike at this point) then you still have a lot of time to > detect that and fix it upstream, but regressing things on the oldest > branch leaves very little time to (1) have it detected and (2) get it > fixed before end of life. > -- SWITCH Saverio Proto, Peta Solutions Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 15, direct +41 44 268 1573 saverio.proto at switch.ch, http://www.switch.ch http://www.switch.ch/stories From thierry at openstack.org Wed Jan 31 09:15:12 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 31 Jan 2018 10:15:12 +0100 Subject: [openstack-dev] [requirement][cyborg]FFE - pyspdk requirement dependency In-Reply-To: <1517337778-sup-8267@lrrr.local> References: <53EADDD3-8A86-445F-A5D9-F5401ABB5309@163.com> <25D76CB6-7EDE-491E-ADAB-6FD4B5B56DAC@163.com> <1517337778-sup-8267@lrrr.local> Message-ID: Doug Hellmann wrote: > Excerpts from We We's message of 2018-01-31 01:54:04 +0800: >>> Hi, >> >>> I have modified and resubmitted pyspdk to the pypi. Please check it. >> >>> Thx, >> >>> Helloway > > Is there a public source repository for the library somewhere? 
Looks like it lives at: https://github.com/hellowaywewe/py-spdk Since the primary objections are not really due to the FFE state but more due to the nature of the library, this should probably first be proposed as a change to openstack/requirements and discussed there... When it's ready but blocked by FF we can return to a ML thread to discuss it... -- Thierry Carrez (ttx) From thierry at openstack.org Wed Jan 31 09:21:36 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 31 Jan 2018 10:21:36 +0100 Subject: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator In-Reply-To: References: Message-ID: <66989544-5248-8787-42d6-5c41dd9753ec@openstack.org> Goutham Pratapa wrote: > *Kingbird (The Multi Region orchestrator):* > > We are proud to announce kingbird is not only a centralized quota and > resource-manager but also a  Multi-region Orchestrator. > [...] > Thanks > Goutham Pratapa > PTL > OpenStack-Kingbird. Quick clarification: Kingbird is not (yet) an official OpenStack component, so it should probably not call itself OpenStack Kingbird. Regards, -- Thierry Carrez (ttx) From pratapagoutham at gmail.com Wed Jan 31 09:27:53 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Wed, 31 Jan 2018 14:57:53 +0530 Subject: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator In-Reply-To: <66989544-5248-8787-42d6-5c41dd9753ec@openstack.org> References: <66989544-5248-8787-42d6-5c41dd9753ec@openstack.org> Message-ID: OK, sorry that was a mistake :) Thank you, Thierry :) On Wed, Jan 31, 2018 at 2:51 PM, Thierry Carrez wrote: > Goutham Pratapa wrote: > > *Kingbird (The Multi Region orchestrator):* > > > > We are proud to announce kingbird is not only a centralized quota and > > resource-manager but also a Multi-region Orchestrator. > > [...] > > Thanks > > Goutham Pratapa > > PTL > > OpenStack-Kingbird. > > Quick clarification: Kingbird is not (yet) an official OpenStack > component, so it should probably not call itself OpenStack Kingbird. > > Regards, > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... URL: From muroi.masahito at lab.ntt.co.jp Wed Jan 31 09:30:45 2018 From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI) Date: Wed, 31 Jan 2018 18:30:45 +0900 Subject: [openstack-dev] [blazar] [Release-job-failures] Release of openstack/blazar-nova failed In-Reply-To: <20180130224038.GB12806@sm-xps> References: <20180130224038.GB12806@sm-xps> Message-ID: <20cdeae5-6df7-16d1-4478-32637450de31@lab.ntt.co.jp> Hi Sean, Thanks. I push the patch to fix the issue. Do I need an additional patch that tags blazar-nova 1.0.1 as I did to blazarclient? https://review.openstack.org/#/c/539433/ best regards, Masahito On 2018/01/31 7:40, Sean McGinnis wrote: > There was a failure with releasing blazar-nova. It would appear this has a > dependency on nova, but nova is not in the required-projects. This will need to > be corrected before we can complete the blazar-nova release. 
> > http://logs.openstack.org/ed/ed4afab27ae0a030a775059936da80f7f01eb2f6/release/release-openstack-python/821c969/job-output.txt.gz#_2018-01-30_22_06_50_103665 > > ----- Forwarded message from zuul at openstack.org ----- > > Date: Tue, 30 Jan 2018 22:11:42 +0000 > From: zuul at openstack.org > To: release-job-failures at lists.openstack.org > Subject: [Release-job-failures] Release of openstack/blazar-nova failed > Reply-To: openstack-dev at lists.openstack.org > > Build failed. > > - release-openstack-python http://logs.openstack.org/ed/ed4afab27ae0a030a775059936da80f7f01eb2f6/release/release-openstack-python/821c969/ : FAILURE in 3m 45s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > > ----- End forwarded message ----- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thierry at openstack.org Wed Jan 31 09:32:41 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 31 Jan 2018 10:32:41 +0100 Subject: [openstack-dev] [Openstack-operators] LTS pragmatic example In-Reply-To: <54c9afed-129b-914c-32f4-451dbdf41279@switch.ch> References: <20171114154658.mpnwfsn7uzmis2l4@redhat.com> <1510675891-sup-8327@lrrr.local> <4ef3b8ff-5374-f440-5595-79e1d33ce3bb@switch.ch> <332af66b-320f-bda4-495f-870dd9e10349@gmail.com> <54c9afed-129b-914c-32f4-451dbdf41279@switch.ch> Message-ID: Saverio Proto wrote: > Hello all, > > I am again proposing a change due to operations experience. I am > proposing a clean and simple cherry-pick to Ocata. > > "it depends" works pretty bad as policy for accepting patches. > > Now I really dont understand what is the issue with the Stable Policy > and this patch: > > https://review.openstack.org/#/c/539439/ > > This is a UX problem. Horizon is giving the wrong information to the user. > > I got this answer: > Ocata is the second phase of stable branches [1]. Only critical bugfixes > and security patches are acceptable. I don't think it belongs to the > category. Every deployer has different expectations for "stable", which is why we have a stable branch policy. Our current policy is a trade-off between being a source of important fixes ("bugfix" branch), and changing less and less as time moves on ("stable" branch). The idea behind it being, if people lived with a minor issue for so long, the behavior change creates more "instability" than keeping the minor bug in. The rules have been followed in this case -- it's a UI bug, but changing the behavior now would create unwanted churn on a "stable" branch for little gain. You seem to be interested in a policy shift toward more of a "bugfix" branch where any fix should be allowed to land, and where branch age should not be a factor. It would be interesting to assess if that is a general view. I know that distros in general are happy with more of a "stable" approach. > But merging a patch that changes a log file in Nova back to Newton was > OKAY few weeks ago. Could you provide a link to that one ? 
-- Thierry Carrez (ttx) From thierry at openstack.org Wed Jan 31 09:35:09 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 31 Jan 2018 10:35:09 +0100 Subject: [openstack-dev] [blazar] [Release-job-failures] Release of openstack/blazar-nova failed In-Reply-To: <20cdeae5-6df7-16d1-4478-32637450de31@lab.ntt.co.jp> References: <20180130224038.GB12806@sm-xps> <20cdeae5-6df7-16d1-4478-32637450de31@lab.ntt.co.jp> Message-ID: <6e64b952-1116-5481-99b8-0f5be6c0c906@openstack.org> Masahito MUROI wrote: > Thanks. I push the patch to fix the issue.  Do I need an additional > patch that tags blazar-nova 1.0.1 as I did to blazarclient? I'd say yes. The simplest way for us to properly publish it would be a new version tag once everything is in place. Thanks! -- Thierry Carrez (ttx) From amotoki at gmail.com Wed Jan 31 09:39:30 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 31 Jan 2018 18:39:30 +0900 Subject: [openstack-dev] [horizon] FFE Request for Queens In-Reply-To: <483d507b-4c81-1058-f498-03dc9b2495be@ericsson.com> References: <483d507b-4c81-1058-f498-03dc9b2495be@ericsson.com> Message-ID: +1 for FFE. I can support it. We need a final ack from our PTL. Akihiro 2018-01-30 5:13 GMT+09:00 Lajos Katona : > Hi, > > I would like to ask for FFE on the neutron-trunk-ui blueprint to let the > admin panel for trunks be accepted for Queens. > > Based on discussion on IRC > (http://eavesdrop.openstack.org/irclogs/%23openstack-horizon/%23openstack-horizon.2018-01-29.log.html#t2018-01-29T14:36:58 > ) the remaining part of the blueprint neutron-trunk-ui > (https://blueprints.launchpad.net/horizon/+spec/neutron-trunk-ui) should be > handled separately: > > The admin panel (https://review.openstack.org/516657) should be part of the > Queens release, as now that is not dependent on the ngDetails patches. With > this the blueprint should be set to implemented. > The links (https://review.openstack.org/524619) for the ports details (trunk > parent and subports) from the trunk panel should be handled in a bug report: > > https://bugs.launchpad.net/horizon/+bug/1746082 > > Regards > Lajos Katona > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dtantsur at redhat.com Wed Jan 31 10:43:30 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 31 Jan 2018 11:43:30 +0100 Subject: [openstack-dev] [nova][ironic] Tagging newton EOL In-Reply-To: <20180131041729.GC23143@thor.bakeyournoodle.com> References: <20180131041729.GC23143@thor.bakeyournoodle.com> Message-ID: Hi! Ironic is ready for newton EOL. On 01/31/2018 05:17 AM, Tony Breeds wrote: > Hi All, > When we tagged newton EOL in October there were in-flight reviews > for nova and ironic that needed to land before we could EOL them. That > work completed but I dropped the ball. So can we tag those last 2 > repos? > > As in October a member of the infra team needs to do this *or* I can > be added to Project Bootstrappers[1] for long enough to do this. > > # EOL repos belonging to ironic > eol_branch.sh -- stable/newton newton-eol openstack/ironic > # EOL repos belonging to nova > eol_branch.sh -- stable/newton newton-eol openstack/nova > > Yours Tony. 
> > [1] https://review.openstack.org/#/admin/groups/26,members > From honjo.rikimaru at po.ntt-tx.co.jp Wed Jan 31 11:03:46 2018 From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo) Date: Wed, 31 Jan 2018 20:03:46 +0900 Subject: [openstack-dev] [oslo][oslo.log]Re: Error will be occurred if watch_log_file option is true In-Reply-To: <1515514211-sup-4244@lrrr.local> References: <1515074711-sup-5593@lrrr.local> <165d1214-d0af-b634-6a29-c3e3afe52797@po.ntt-tx.co.jp> <1515514211-sup-4244@lrrr.local> Message-ID: Hello, Sorry for the very late reply... On 2018/01/10 1:11, Doug Hellmann wrote: > Excerpts from Rikimaru Honjo's message of 2018-01-09 18:11:09 +0900: >> Hello, >> >> On 2018/01/04 23:12, Doug Hellmann wrote: >>> Excerpts from Rikimaru Honjo's message of 2018-01-04 18:22:26 +0900: >>>> Hello, >>>> >>>> The below bug was reported in Masakari's Launchpad. >>>> I think that this bug was caused by oslo.log. >>>> (And, the root cause is a bug of pyinotify using by oslo.log. The detail is >>>> written in the bug report.) >>>> >>>> * masakari-api failed to launch due to setting of watch_log_file and log_file >>>> https://bugs.launchpad.net/masakari/+bug/1740111 >>>> >>>> There is a possibility that this bug will affects all openstack components using oslo.log. >>>> (But, the processes working with uwsgi[1] wasn't affected when I tried to reproduce. >>>> I haven't solved the reason of this yet...) >>>> >>>> Could you help us? >>>> And, what should we do...? >>>> >>>> [1] >>>> e.g. nova-api, cinder-api, keystone... >>>> >>>> Best regards, >>> >>> The bug is in pyinotify. According to the git repo [1] that project >>> was last updated in June of 2015. I recommend we move off of >>> pyinotify entirely, since it appears to be unmaintained. >>> >>> If there is another library to do the same thing we should switch >>> to it (there seem to be lots of options [2]). If there is no viable >>> replacement or fork, we should deprecate that log watching feature >>> (and anything else for which we use pyinotify) and remove it ASAP. >>> >>> We'll need a volunteer to do the evaluation and update oslo.log. >>> >>> Doug >>> >>> [1] https://github.com/seb-m/pyinotify >>> [2] https://pypi.python.org/pypi?%3Aaction=search&term=inotify&submit=search >> Thank you for replying. >> >> I haven't deeply researched, but inotify looks good. >> Because "weight" of inotify is the largest, and following text is described. >> >> https://pypi.python.org/pypi/inotify/0.2.9 >>> This project is unrelated to the *PyInotify* project that existed prior to this one (this project began in 2015). That project is defunct and no longer available. >> PyInotify is defunct and no longer available... >> > > The inotify package seems like a good candidate to replace pyinotify. > > Have you looked at how hard it would be to change oslo.log? If so, does > using the newer library eliminate the bug you had? I am researching it now. (But, I think it is not easy.) I'll create a patch if inotify can eliminate the bug. 
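(For a sense of scale, the core of such a patch could be quite small. A rough sketch of the watch loop on top of the inotify package — hypothetical code under the assumption of its adapters API, not the actual oslo.log patch:

    import inotify.adapters

    def watch_log_file(logpath, reopen):
        # Re-open the log whenever the file is moved or deleted,
        # e.g. by external log rotation.
        watcher = inotify.adapters.Inotify()
        watcher.add_watch(logpath)
        for _header, type_names, _path, _name in watcher.event_gen(
                yield_nones=False):
            if 'IN_MOVE_SELF' in type_names or 'IN_DELETE_SELF' in type_names:
                reopen()

The interesting part of the evaluation is less the loop itself than whether the library behaves under eventlet monkey-patching, which is where pyinotify broke.)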
> Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp From tobias at citynetwork.se Wed Jan 31 11:37:20 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Wed, 31 Jan 2018 12:37:20 +0100 Subject: [openstack-dev] [publiccloud-wg] Reminder for todays meeting Message-ID: <506bf34d-12b6-8c2c-05f6-2ba0195e04ee@citynetwork.se> Hi all, Time again for a meeting for the Public Cloud WG - today at 1400 UTC in #openstack-meeting-3 Agenda and etherpad at: https://etherpad.openstack.org/p/publiccloud-wg See you later! Tobias Rydberg -- Tobias Rydberg Senior Developer Mobile: +46 733 312780 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From saverio.proto at switch.ch Wed Jan 31 11:54:07 2018 From: saverio.proto at switch.ch (Saverio Proto) Date: Wed, 31 Jan 2018 12:54:07 +0100 Subject: [openstack-dev] [Openstack-operators] LTS pragmatic example In-Reply-To: References: <20171114154658.mpnwfsn7uzmis2l4@redhat.com> <1510675891-sup-8327@lrrr.local> <4ef3b8ff-5374-f440-5595-79e1d33ce3bb@switch.ch> <332af66b-320f-bda4-495f-870dd9e10349@gmail.com> <54c9afed-129b-914c-32f4-451dbdf41279@switch.ch> Message-ID: > You seem to be interested in a policy shift toward more of a "bugfix" > branch where any fix should be allowed to land, and where branch age > should not be a factor. It would be interesting to assess if that is a > general view. I know that distros in general are happy with more of a > "stable" approach. Hello Thierry, you are right. A bugfix branch is very important for us. I cannot keep a UX bug in production, when I have a user that opens a support ticket about this. I have to avoid the second user opening the second ticket for the very same problem. If merging to a common bugfix branch is not possible, I will have to carry local patches. Please be aware that most Openstack Ubuntu packages do carry local patches in the debian/patches/ folder. I am pretty sure that this patch could land without problems in the Ubuntu packages for Newton/Horizon. > >> But merging a patch that changes a log file in Nova back to Newton was >> OKAY few weeks ago. > Could you provide a link to that one ? sure, here it is: https://review.openstack.org/#/q/If525313c63c4553abe8bea6f2bfaf75431ed18ea Thank you Saverio -- SWITCH Saverio Proto, Peta Solutions Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 15, direct +41 44 268 1573 saverio.proto at switch.ch, http://www.switch.ch http://www.switch.ch/stories From lajos.katona at ericsson.com Wed Jan 31 12:39:35 2018 From: lajos.katona at ericsson.com (Lajos Katona) Date: Wed, 31 Jan 2018 13:39:35 +0100 Subject: [openstack-dev] [horizon] FFE Request for Queens In-Reply-To: References: <483d507b-4c81-1058-f498-03dc9b2495be@ericsson.com> Message-ID: Thanks Akihiro I will bring it up on today's meeting. Regards Lajos On 2018-01-31 10:39, Akihiro Motoki wrote: > +1 for FFE. I can support it. > > We need a final ack from our PTL. 
>
> Akihiro
>
> 2018-01-30 5:13 GMT+09:00 Lajos Katona :
>> Hi,
>>
>> I would like to ask for an FFE on the neutron-trunk-ui blueprint, to let
>> the admin panel for trunks be accepted for Queens.
>>
>> Based on the discussion on IRC
>> (http://eavesdrop.openstack.org/irclogs/%23openstack-horizon/%23openstack-horizon.2018-01-29.log.html#t2018-01-29T14:36:58
>> ), the remaining parts of the blueprint neutron-trunk-ui
>> (https://blueprints.launchpad.net/horizon/+spec/neutron-trunk-ui) should be
>> handled separately:
>>
>> The admin panel (https://review.openstack.org/516657) should be part of the
>> Queens release, as it no longer depends on the ngDetails patches. With
>> this, the blueprint should be set to implemented.
>> The links (https://review.openstack.org/524619) from the trunk panel to the
>> port details (trunk parent and subports) should be handled in a bug report:
>>
>> https://bugs.launchpad.net/horizon/+bug/1746082
>>
>> Regards
>> Lajos Katona
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From sean.mcginnis at gmx.com  Wed Jan 31 12:39:48 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 31 Jan 2018 06:39:48 -0600
Subject: [openstack-dev] [masakari] [Release-job-failures] Tag of openstack/masakari failed
Message-ID: <20180131123947.GA5521@sm-xps>

Masakari had a release job fail for publishing the release notes. It appears
to be due to the release notes not having been updated to follow the latest
requirements after the changes that were made to the zuul job.

There is an outstanding patch in openstack/masakari that should fix this:

https://review.openstack.org/#/c/520887/

Thanks,
Sean

----- Forwarded message from zuul at openstack.org -----

Date: Wed, 31 Jan 2018 09:56:58 +0000
From: zuul at openstack.org
To: release-job-failures at lists.openstack.org
Subject: [Release-job-failures] Tag of openstack/masakari failed
Reply-To: openstack-dev at lists.openstack.org

Build failed.

- publish-openstack-releasenotes http://logs.openstack.org/6b/6b10645d92e7560efc088f7f09991d332af7096f/tag/publish-openstack-releasenotes/de4278d/ : FAILURE in 3m 46s

_______________________________________________
Release-job-failures mailing list
Release-job-failures at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures

----- End forwarded message -----

From waboring at hemna.com  Wed Jan 31 12:59:27 2018
From: waboring at hemna.com (Walter Boring)
Date: Wed, 31 Jan 2018 07:59:27 -0500
Subject: [openstack-dev] [nova][cinder] Questions about truncked disk serial number
In-Reply-To: 
References: 
Message-ID: 

First off, the IDs you are showing there are Cinder UUIDs, which identify the
volumes in the Cinder DB and are used for Cinder-based actions. The IDs that
are seen and used by the system for discovery, and passed to qemu, are the
disk SCSI IDs, which are embedded in the volumes themselves. os-brick returns
the SCSI ID to nova for use in attaching, and it is not limited to 20
characters.
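To make the guest-visible naming concrete, here is a rough sketch of how the by-id link name appears to be derived, based on the virtio_blk header cited in the thread below (the guest kernel reads at most 20 bytes of the serial, and udev prefixes the result with "virtio-"); the UUIDs here are made up for illustration:

    def by_id_link(volume_uuid):
        # VIRTIO_BLK_ID_BYTES limits the serial seen by the guest to 20
        # bytes, so udev names the link after a truncated string.
        return '/dev/disk/by-id/virtio-' + volume_uuid[:20]

    # Two distinct (hypothetical) volume UUIDs that share a 20-character
    # prefix collide on a single link name:
    print(by_id_link('aaaaaaaa-bbbb-cccc-dddd-000000000001'))
    print(by_id_link('aaaaaaaa-bbbb-cccc-dddd-000000000002'))
    # Both print: /dev/disk/by-id/virtio-aaaaaaaa-bbbb-cccc-d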
On Tue, Jan 16, 2018 at 4:19 AM, Yikun Jiang wrote:

> Some detailed steps below:
> 1. First, we have two volumes whose UUIDs share the same prefix.
> [image: inline image 1]
>
> volume(yikun2) is attached to server(test)
>
> 2. In the guest OS (CentOS 7), take a look at by-path and by-id:
> [image: inline image 2]
> We found that both the by-path and by-id links for vdb were generated
> successfully.
>
> 3. Attach volume(yikun2_1) to server(test).
> [image: inline image 4]
>
> 4. In the guest OS (CentOS 7), take a look at by-path and by-id again:
>
> [image: inline image 6]
>
> The by-path soft link was generated successfully, but the by-id link
> failed to be generated.
> *That is, in this case, a user looking up the device by by-id will either
> fail to find it or will find the wrong device.*
>
> One such case happens in Kubernetes device discovery; for more
> information, see:
> https://github.com/kubernetes/kubernetes/blob/53a8ac753bf468eaf6bcb5a07e34a0a67480df43/pkg/cloudprovider/providers/openstack/openstack_volumes.go#L463
>
> So I think by-id is NOT a good way to find the device. What is the best
> practice, then? Let's hear other ideas.
>
> Regards,
> Yikun
>
> ----------------------------------------
> Jiang Yikun(Kero)
> Mail: yikunkero at gmail.com
>
> 2018-01-16 14:36 GMT+08:00 Zhenyu Zheng :
>
>> Oops, forgot the references:
>> [1] https://github.com/torvalds/linux/blob/1cc15701cd89b0ce695bbc5cff3a2bf3e2efd25f/include/uapi/linux/virtio_blk.h#L54
>> [2] https://github.com/torvalds/linux/blob/1cc15701cd89b0ce695bbc5cff3a2bf3e2efd25f/drivers/block/virtio_blk.c#L363
>>
>> On Tue, Jan 16, 2018 at 2:35 PM, Zhenyu Zheng
>> wrote:
>>
>>> Hi,
>>>
>>> I ran into a problem like this recently:
>>>
>>> When attaching a volume to an instance, the disk is described in the
>>> XML as:
>>>
>>> [Inline image 1]
>>> where the serial number here is the volume UUID in Cinder. Meanwhile,
>>> inside the VM, in /dev/disk/by-id, there is a link for /vdb with the
>>> name "virtio-" plus the truncated serial number:
>>>
>>> [Inline image 2]
>>>
>>> and according to https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/ch16s03.html
>>> it seems this link is used to mount the volume.
>>>
>>> The truncation seems to happen here [1][2], at 20 characters.
>>>
>>> *My question here is:* if two volumes have identical first 20
>>> characters in their UUIDs, it seems that the later-attached one will
>>> overwrite the first one's link:
>>> [Inline image 3]
>>> (the above screenshot is from a volume-backed instance; virtio-15exxxxx
>>> pointed to vda before, though the by-path links look correct)
>>>
>>> It is rare for two UUIDs to have identical first 20 characters, but it
>>> is possible, so what was the consideration behind truncating the volume
>>> UUID to 20 characters instead of using the full 32?
>>> >>> BR, >>> >>> Kevin Zheng >>> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 10798 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9638 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 21095 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 19561 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 13550 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 5228 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 46374 bytes Desc: not available URL: From ekuvaja at redhat.com Wed Jan 31 13:03:05 2018 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Wed, 31 Jan 2018 13:03:05 +0000 Subject: [openstack-dev] [Glance][PTL] Nomination for Glance PTL for Rocky cycle Message-ID: Hi all, After lengthy discussions and careful thinking I have thrown my name into the hat for Glance PTL. You can find my thoughts from the candidacy patch https://review.openstack.org/#/c/539196/ I'd like to take the opportunity to thank most recently Brian but also our other former PTLs from Mark to Flavio and everyone in between that I've had pleasure to work with for running the show and showing the way. You guys have been integral part of me growing to the point where I dare to do this. Nominations are still open and having an election is healthy so if any of you are still thinking of nominating yourselves, there's time until Fri. Don't miss the deadline or you will get me by default ;) Best, Erno jokke Kuvaja From rosmaita.fossdev at gmail.com Wed Jan 31 13:14:49 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 31 Jan 2018 08:14:49 -0500 Subject: [openstack-dev] [glance] Q-3 milestone released In-Reply-To: References: Message-ID: On Tue, Jan 30, 2018 at 11:39 PM, Brian Rosmaita wrote: [snip] > Glance cores, please do not merge any patches until after > https://review.openstack.org/#/c/536630/ has merged. The patch merged at 08:15UTC today. Merge away (subject to common sense and the fact that we're in RC time). cheers, brian From dtantsur at redhat.com Wed Jan 31 13:16:22 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 31 Jan 2018 14:16:22 +0100 Subject: [openstack-dev] [ironic] ansible deploy, playbooks and containers? 
Message-ID: <54a90a04-2587-8498-3df1-04270e8386a6@redhat.com>

Hi all,

I'd like to discuss one idea that came to me while trying to use the ansible deploy in TripleO.

The ansible deploy interface is all about customization: we expect people to modify the playbooks. I have two concerns with that:

1. Nearly any addition requires a full copy of the playbooks, which will make operators miss any future updates to the shipped version (e.g. from packages).

2. We require operators to modify playbooks on the hard drive, in a location available to ironic-conductor. This is inconvenient when there are many conductors, and quite hairy with containers.

So, here is what came to mind:

1. Let us maybe define some hook points in our playbooks and allow operators to override only those? I'm not sure how that would look, so suggestions are welcome.

2. Let us maybe allow a swift or http(s) URL for the playbooks_path configuration? It would point to a tarball that ironic unpacks to a temporary location before executing the playbooks.

What do you think?

From mbooth at redhat.com  Wed Jan 31 13:30:47 2018
From: mbooth at redhat.com (Matthew Booth)
Date: Wed, 31 Jan 2018 13:30:47 +0000
Subject: [openstack-dev] [nova] Requesting eyes on fix for bug 1686703
Message-ID: 

Could I please have some eyes on this bugfix: https://review.openstack.org/#/c/462521/ ? I addressed an issue raised in August 2017, and it's had no negative feedback since. It would be good to get this one finished.

Thanks,

Matt
-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mark at stackhpc.com  Wed Jan 31 13:31:32 2018
From: mark at stackhpc.com (Mark Goddard)
Date: Wed, 31 Jan 2018 13:31:32 +0000
Subject: [openstack-dev] [ironic] ansible deploy, playbooks and containers?
In-Reply-To: <54a90a04-2587-8498-3df1-04270e8386a6@redhat.com>
References: <54a90a04-2587-8498-3df1-04270e8386a6@redhat.com>
Message-ID: 

I like the swift/HTTP proposal. In kayobe we've considered how to customise the deployment process, with hook points [1] looking like the right approach. Another possibility, once deploy steps [2] land, could be to break the ansible deployment up into multiple steps and allow each step to be overridden.

[1] https://github.com/stackhpc/kayobe/issues/52
[2] https://review.openstack.org/#/c/412523/

On 31 January 2018 at 13:16, Dmitry Tantsur wrote:

> Hi all,
>
> I'd like to discuss one idea that came to me while trying to use the
> ansible deploy in TripleO.
>
> The ansible deploy interface is all about customization: we expect
> people to modify the playbooks. I have two concerns with that:
>
> 1. Nearly any addition requires a full copy of the playbooks, which will
> make operators miss any future updates to the shipped version (e.g. from
> packages).
>
> 2. We require operators to modify playbooks on the hard drive, in a
> location available to ironic-conductor. This is inconvenient when there
> are many conductors, and quite hairy with containers.
>
> So, here is what came to mind:
>
> 1. Let us maybe define some hook points in our playbooks and allow
> operators to override only those? I'm not sure how that would look, so
> suggestions are welcome.
>
> 2. Let us maybe allow a swift or http(s) URL for the playbooks_path
> configuration? It would point to a tarball that ironic unpacks to a
> temporary location before executing the playbooks.
>
> What do you think?
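As a very rough sketch of what the tarball option could look like (purely illustrative: fetch_playbooks is a hypothetical name, swift-specific handling, caching and cleanup are omitted, and this is not actual ironic code):

    import tarfile
    import tempfile

    import requests

    def fetch_playbooks(url):
        # Download the playbooks tarball and unpack it into a temporary
        # directory that the conductor can pass to ansible-playbook.
        dest = tempfile.mkdtemp(prefix='ironic-playbooks-')
        response = requests.get(url, stream=True)
        response.raise_for_status()
        with tempfile.NamedTemporaryFile(suffix='.tar.gz') as tmp:
            for chunk in response.iter_content(chunk_size=65536):
                tmp.write(chunk)
            tmp.flush()
            with tarfile.open(tmp.name, 'r:gz') as tar:
                tar.extractall(dest)
        return dest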
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chkumar246 at gmail.com  Wed Jan 31 13:37:25 2018
From: chkumar246 at gmail.com (Chandan kumar)
Date: Wed, 31 Jan 2018 19:07:25 +0530
Subject: [openstack-dev] [qa][all] QA Office Hours on 1st Feb, 2018
Message-ID: 

Hello All,

A kind reminder that tomorrow at 9:00 UTC we'll start office hours for the QA team in the #openstack-qa channel. Please join us with any question or comment you may have related to the tempest plugin split community goal, tempest, and other QA tools. We'll triage bugs for QA projects from the past 7 days, and then extend the time frame if there is time left.

Thanks,

Chandan Kumar

From jean-daniel.bonnetot at corp.ovh.com  Wed Jan 31 13:46:21 2018
From: jean-daniel.bonnetot at corp.ovh.com (Jean-Daniel Bonnetot)
Date: Wed, 31 Jan 2018 13:46:21 +0000
Subject: [openstack-dev] [User-committee] [publiccloud-wg] Reminder for todays meeting
In-Reply-To: <506bf34d-12b6-8c2c-05f6-2ba0195e04ee@citynetwork.se>
References: <506bf34d-12b6-8c2c-05f6-2ba0195e04ee@citynetwork.se>
Message-ID: <74312438-3380-4728-8909-466CA8FFC8E5@corp.ovh.com>

Hi,

For me it's not the best time slot. I have a weekly meeting at that time.
Is it possible to move our weekly meeting 30 minutes later?
If it's not possible for this meeting, maybe it's doable for the next ones?

Jean-Daniel Bonnetot
ovh.com | @pilgrimstack

On 31 Jan 2018, at 12:37, Tobias Rydberg > wrote:

Hi all,

Time again for a meeting for the Public Cloud WG - today at 1400 UTC in #openstack-meeting-3

Agenda and etherpad at:
https://etherpad.openstack.org/p/publiccloud-wg

See you later!

Tobias Rydberg

-- 
Tobias Rydberg
Senior Developer
Mobile: +46 733 312780

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED

_______________________________________________
User-committee mailing list
User-committee at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sam47priya at gmail.com  Wed Jan 31 13:50:10 2018
From: sam47priya at gmail.com (Sam P)
Date: Wed, 31 Jan 2018 22:50:10 +0900
Subject: [openstack-dev] [masakari] [Release-job-failures] Tag of openstack/masakari failed
In-Reply-To: <20180131123947.GA5521@sm-xps>
References: <20180131123947.GA5521@sm-xps>
Message-ID: 

Hi Sean,

Thanks for the advice. The fix above has merged.

--- Regards,
Sampath

On Wed, Jan 31, 2018 at 9:39 PM, Sean McGinnis wrote:

> Masakari had a release job fail for publishing the release notes. It
> appears to be due to the release notes not having been updated to follow
> the latest requirements after the changes that were made to the zuul job.
>
> There is an outstanding patch in openstack/masakari that should fix this:
>
> https://review.openstack.org/#/c/520887/
>
> Thanks,
> Sean
>
> ----- Forwarded message from zuul at openstack.org -----
>
> Date: Wed, 31 Jan 2018 09:56:58 +0000
> From: zuul at openstack.org
> To: release-job-failures at lists.openstack.org
> Subject: [Release-job-failures] Tag of openstack/masakari failed
> Reply-To: openstack-dev at lists.openstack.org
>
> Build failed.
> > - publish-openstack-releasenotes http://logs.openstack.org/6b/ > 6b10645d92e7560efc088f7f09991d332af7096f/tag/publish- > openstack-releasenotes/de4278d/ : FAILURE in 3m 46s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > > ----- End forwarded message ----- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias at citynetwork.se Wed Jan 31 13:55:37 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Wed, 31 Jan 2018 14:55:37 +0100 Subject: [openstack-dev] [User-committee] [publiccloud-wg] Reminder for todays meeting In-Reply-To: <74312438-3380-4728-8909-466CA8FFC8E5@corp.ovh.com> References: <506bf34d-12b6-8c2c-05f6-2ba0195e04ee@citynetwork.se> <74312438-3380-4728-8909-466CA8FFC8E5@corp.ovh.com> Message-ID: <75d5792f-8b3a-c988-888b-fbd33039c651@citynetwork.se> Hi, It's maybe time for a new doodle about bi-weekly meeting ... potentially also about going to weekly meeting? Personally I'm fine with morning times as well, if that is better for more people. Tobias On 2018-01-31 14:46, Jean-Daniel Bonnetot wrote: > Hi, > > For me it's not the best time frame. A have a weekly meeting on that time. > Is it possible to move our weekly meeting 30min later ? > If it's not possible for this meeting, maybe it's doable for the next > ones? > > Jean-Daniel Bonnetot > ovh.com | @pilgrimstack > > > > > >> On 31 Jan 2018, at 12:37, Tobias Rydberg > > wrote: >> >> Hi all, >> >> Time again for a meeting for the Public Cloud WG - today at 1400 UTC >> in #openstack-meeting-3 >> >> Agenda and etherpad at: >> https://etherpad.openstack.org/p/publiccloud-wg >> >> >> See you later! >> >> Tobias Rydberg >> >> -- >> Tobias Rydberg >> Senior Developer >> Mobile: +46 733 312780 >> >> www.citynetwork.eu | www.citycloud.com >> >> >> INNOVATION THROUGH OPEN IT INFRASTRUCTURE >> ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED >> >> >> _______________________________________________ >> User-committee mailing list >> User-committee at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From Bhagyashri.Shewale at nttdata.com Wed Jan 31 13:59:43 2018 From: Bhagyashri.Shewale at nttdata.com (Shewale, Bhagyashri) Date: Wed, 31 Jan 2018 13:59:43 +0000 Subject: [openstack-dev] [glance] FFE request for --check feature Message-ID: Hi Glance Folks, I'm requesting an Feature Freeze Exception for the lite-spec http://specs.openstack.org/openstack/glance-specs/specs/untargeted/glance/lite-spec-db-sync-check.html which is implemented by https://review.openstack.org/#/c/455837/8/ Regards, Bhagyashri Shewale ______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From zhipengh512 at gmail.com Wed Jan 31 14:01:06 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 31 Jan 2018 22:01:06 +0800 Subject: [openstack-dev] [User-committee] [publiccloud-wg] Reminder for todays meeting In-Reply-To: <75d5792f-8b3a-c988-888b-fbd33039c651@citynetwork.se> References: <506bf34d-12b6-8c2c-05f6-2ba0195e04ee@citynetwork.se> <74312438-3380-4728-8909-466CA8FFC8E5@corp.ovh.com> <75d5792f-8b3a-c988-888b-fbd33039c651@citynetwork.se> Message-ID: shall we cancel the meeting for today then ? On Wed, Jan 31, 2018 at 9:55 PM, Tobias Rydberg wrote: > Hi, > > It's maybe time for a new doodle about bi-weekly meeting ... potentially > also about going to weekly meeting? > > Personally I'm fine with morning times as well, if that is better for more > people. > > Tobias > > On 2018-01-31 14:46, Jean-Daniel Bonnetot wrote: > > Hi, > > For me it's not the best time frame. A have a weekly meeting on that time. > Is it possible to move our weekly meeting 30min later ? > If it's not possible for this meeting, maybe it's doable for the next ones? > > Jean-Daniel Bonnetot > ovh.com | @pilgrimstack > > > > > > On 31 Jan 2018, at 12:37, Tobias Rydberg wrote: > > Hi all, > > Time again for a meeting for the Public Cloud WG - today at 1400 UTC in > #openstack-meeting-3 > > Agenda and etherpad at: https://etherpad.openstack.org/p/publiccloud-wg > > See you later! > > Tobias Rydberg > > -- > Tobias Rydberg > Senior Developer > Mobile: +46 733 312780 > > www.citynetwork.eu | www.citycloud.com > > INNOVATION THROUGH OPEN IT INFRASTRUCTURE > ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED > > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > > > > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. 
Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Wed Jan 31 14:21:47 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Wed, 31 Jan 2018 19:51:47 +0530 Subject: [openstack-dev] [Glance][PTL] Nomination for Glance PTL for Rocky cycle In-Reply-To: References: Message-ID: All the best Erno, Long may you reign ;) Abhishek On 31-Jan-2018 18:38, "Erno Kuvaja" wrote: > Hi all, > > After lengthy discussions and careful thinking I have thrown my name > into the hat for Glance PTL. You can find my thoughts from the > candidacy patch https://review.openstack.org/#/c/539196/ > > I'd like to take the opportunity to thank most recently Brian but also > our other former PTLs from Mark to Flavio and everyone in between that > I've had pleasure to work with for running the show and showing the > way. You guys have been integral part of me growing to the point where > I dare to do this. > > Nominations are still open and having an election is healthy so if any > of you are still thinking of nominating yourselves, there's time until > Fri. Don't miss the deadline or you will get me by default ;) > > Best, > Erno jokke Kuvaja > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Jan 31 14:24:40 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 31 Jan 2018 08:24:40 -0600 Subject: [openstack-dev] [keystone] [ptl] PTL candidacy for Rocky Message-ID: <9ba6eb92-6d75-4c93-fea4-7e5e1e4f5aa4@gmail.com> Hey folks, I'm writing to express my interest in continuing to serve as keystone's PTL for the Rocky release. Even though we're a few weeks away from an official Queens release, I felt we had an extremely productive cycle. The outcome of the Queens PTG in Denver was nothing short of ambitious. We finished up things that missed the boat for Pike, while shouldering some hefty new initiatives. 
To summarize: * We helped projects move their default policies into code and document them, which makes maintenance for operators and deployers much more manageable * We implemented full support for tagging project resources, which has been a long-time ask from operators * We landed an implementation for application credentials, making it easier to run apps that integrate with OpenStack - especially for deployments backing to LDAP * We implemented an experimental API for unified limits, which opens the doors for us to start integrating consistent quota enforcement across services * We introduced a new assignment target and token scope to help set administrative APIs apart from end user APIs * Lastly, our team did a good job stepping up to mentor new contributors The following are a few of my top priorities for Rocky: * Help projects leverage system scope to improve the usability and security of their APIs * Build on the in-code policy work to provide discoverable capability APIs * Put together a plan to make unified limits a stable API and help incorporate its usage into other projects * Continue fostering an environment where newcomers feel welcome and can make meaningful contributions In my nomination for the Pike cycle, I mentioned that I wanted to do what I could to solve the hard problems our community is faced with. It's extremely encouraging to see the needle move on several long-standing issues, especially with a systematic, team-oriented approach. It's been great to be a part of those efforts. Thanks for taking the time to read this and I hope to see you in Dublin, Lance -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From lpetrut at cloudbasesolutions.com Wed Jan 31 14:36:57 2018 From: lpetrut at cloudbasesolutions.com (Lucian Petrut) Date: Wed, 31 Jan 2018 14:36:57 +0000 Subject: [openstack-dev] [nova][cinder] Questions about truncked disk serial number In-Reply-To: References: Message-ID: <1517409417.32220.10.camel@cloudbasesolutions.com> Actually, when using the libvirt driver, the SCSI id returned by os-brick is not exposed to the guest. The reason is that Nova explicitly sets the volume id as "serial" id in the guest disk configuration. Qemu will expose this to the guest, but with a 20 character limit. For what is worth, Kubernetes as well as some guides rely on this behaviour. For example: nova volume-attach e03303e1-c20b-441c-b94a-724cb2469487 10780b60-ad70-479f-a612-14d03b1cc64d virsh dumpxml `nova show cirros | grep instance_name | cut -d "|" -f 3` instance-0000013d e03303e1-c20b-441c-b94a-724cb2469487 .... 10780b60-ad70-479f-a612-14d03b1cc64d
nova log: Jan 31 15:39:54 ubuntu nova-compute[46142]: DEBUG os_brick.initiator.connectors.iscsi [None req-d0c62440-133c-4e89-8798-20278ca50f00 admin admin] <== connect_volume: return (2578ms) {'path': u'/dev/sdb', 'scsi_wwn': u'360000000000000000e00000000010001', 'type': u'block'} {{(pid=46142) trace_logging_wrapper /usr/local/lib/python2.7/dist-packages/os_brick/utils.py:170}} Jan 31 15:39:54 ubuntu nova-compute[46142]: DEBUG nova.virt.libvirt.volume.iscsi [None req-d0c62440-133c-4e89-8798-20278ca50f00 admin admin] Attached iSCSI volume {'path': u'/dev/sdb', 'scsi_wwn': '360000000000000000e00000000010001', 'type': 'block'} {{(pid=46142) connect_volume /opt/stack/nova/nova/virt/libvirt/volume/iscsi.py:65}} Jan 31 15:39:54 ubuntu nova-compute[46142]: DEBUG nova.virt.libvirt.guest [None req-d0c62440-133c-4e89-8798-20278ca50f00 admin admin] attach device xml: Jan 31 15:39:54 ubuntu nova-compute[46142]: Jan 31 15:39:54 ubuntu nova-compute[46142]: Jan 31 15:39:54 ubuntu nova-compute[46142]: Jan 31 15:39:54 ubuntu nova-compute[46142]: 10780b60-ad70-479f-a612-14d03b1cc64d Jan 31 15:39:54 ubuntu nova-compute[46142]: Jan 31 15:39:54 ubuntu nova-compute[46142]: {{(pid=46142) attach_device /opt/stack/nova/nova/virt/libvirt/guest.py:302}} Regards, Lucian Petrut On Wed, 2018-01-31 at 07:59 -0500, Walter Boring wrote: First off, the id's you are showing there are Cinder uuid's to identify the volumes in the cinder DB and are used for cinder based actions. The Ids that are seen and used by the system for discovery and passing to qemu are the disk SCSI ids, which are embedded in the volume's themselves. os-brick returns the SCSI id to nova for use in attaching and it's not limited to the 20 characters. On Tue, Jan 16, 2018 at 4:19 AM, Yikun Jiang > wrote: Some detail steps as below: 1. First, We have 2 volumes with same part-uuid prefix. [内嵌图片 1] volume(yikun2) is attached to server(test) 2. In GuestOS(Cent OS 7), take a look at by path and by id: [内嵌图片 2] we found both by-path and by-id vdb links was generated successfully. 3. attach volume(yikun2_1) to server(test) [内嵌图片 4] 4. In GuestOS(Cent OS 7), take a look at by path and by id: [内嵌图片 6] by-path soft link was generated successfully, but by-id link was failed to generate. That is, in this case, if a user find the device by by-id, it would be failed to find it or find a wrong device. one of the user cases was happened on k8s device finding, more info you can see the ref as below: https://github.com/kubernetes/kubernetes/blob/53a8ac753bf468eaf6bcb5a07e34a0a67480df43/pkg/cloudprovider/providers/openstack/openstack_volumes.go#L463 So, I think by-id is NOT a good way to find the device, but what the best practice is? let's see other idea. Regards, Yikun ---------------------------------------- Jiang Yikun(Kero) Mail: yikunkero at gmail.com 2018-01-16 14:36 GMT+08:00 Zhenyu Zheng >: Ops, forgot references: [1] https://github.com/torvalds/linux/blob/1cc15701cd89b0ce695bbc5cff3a2bf3e2efd25f/include/uapi/linux/virtio_blk.h#L54 [2] https://github.com/torvalds/linux/blob/1cc15701cd89b0ce695bbc5cff3a2bf3e2efd25f/drivers/block/virtio_blk.c#L363 On Tue, Jan 16, 2018 at 2:35 PM, Zhenyu Zheng > wrote: Hi, I meet a problem like this recently: When attaching a volume to an instance, in the xml, the disk is described as: [Inline image 1] where the serial number here is the volume uuid in Cinder. 
While inside the vm: in /dev/disk/by-id, there is a link for /vdb with the name of "virtio"+truncated serial number: [Inline image 2] and according to https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/ch16s03.html it seems that we will use this mount the volume. The truncate seems to be happen in here [1][2] which is 20 digits. My question here is: if two volume have the identical first 20 digits in their uuids, it seems that the latter attached one will overwrite the first one's link: [Inline image 3] (the above graph is snapshot for an volume backed instance, the virtio-15exxxxx was point to vda before, the by-path seems correct though) It is rare to have the identical first 20 digits of two uuids, but possible, so what was the consideration of truncate only 20 digits of the volume uuid instead of use full 32? BR, Kevin Zheng __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 46374 bytes Desc: image.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 5228 bytes Desc: image.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 13550 bytes Desc: image.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 21095 bytes Desc: image.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9638 bytes Desc: image.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 19561 bytes Desc: image.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 10798 bytes Desc: image.png URL: From openstack at nemebean.com Wed Jan 31 15:09:31 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 31 Jan 2018 09:09:31 -0600 Subject: [openstack-dev] [tripleo] opendaylight OpenDaylightConnectionProtocol deprecation issue In-Reply-To: References: Message-ID: <19c3af79-c630-2089-c6d6-921f6d087a11@nemebean.com> On 01/29/2018 04:27 AM, Moshe Levi wrote: > Hi all, > > It seem that this commit [1] deprecated the > OpenDaylightConnectionProtocol, but it also remove it. > > This is causing the following issue when we deploy opendaylight non > containerized. 
See [2] > > One solution is to add back the OpenDaylightConnectionProtocol [3] the > other solution is to remove the OpenDaylightConnectionProtocol from the > deprecated parameter_groups [4]. Looks like the deprecation was done incorrectly. The parameter should have been left in place and referenced in the deprecated group. So I think the fix would just be to put the parameter definition back. > > [1] - > https://github.com/openstack/tripleo-heat-templates/commit/af4ce05dc5270b84864a382ddb2a1161d9082eab > > > [2] - http://paste.openstack.org/show/656702/ > > [3] - > https://github.com/openstack/tripleo-heat-templates/commit/af4ce05dc5270b84864a382ddb2a1161d9082eab#diff-21674daa44a327c016a80173efeb10e7L20 > > > [4] - > https://github.com/openstack/tripleo-heat-templates/commit/af4ce05dc5270b84864a382ddb2a1161d9082eab#diff-21674daa44a327c016a80173efeb10e7R112 > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sam47priya at gmail.com Wed Jan 31 15:19:16 2018 From: sam47priya at gmail.com (Sam P) Date: Thu, 1 Feb 2018 00:19:16 +0900 Subject: [openstack-dev] [release][masakari] Need help on masakari deliverables Message-ID: Hi release team, I did not realize at the time however, I noticed that masakari does not have any of it deliverables are set for Queens in [1]. Which is my fault and really sorry for that. We currently have 3 deliverables, masakari: repos: - openstack/masakari masakari-monitors: repos: - openstack/masakari-monitors python-masakariclient: repos: - openstack/python-masakariclient I would like to propose those patches to openstack/releases. "Now" is a bad timing for that? Your advice is greatly appreciated. [1] http://git.openstack.org/cgit/openstack/releases/tree/deliverables/queens --- Regards, Sampath -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Jan 31 15:30:58 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 31 Jan 2018 09:30:58 -0600 Subject: [openstack-dev] [release][masakari] Need help on masakari deliverables In-Reply-To: References: Message-ID: <20180131153057.GA14315@sm-xps> On Thu, Feb 01, 2018 at 12:19:16AM +0900, Sam P wrote: > Hi release team, > > I did not realize at the time however, I noticed that masakari does not > have any > of it deliverables are set for Queens in [1]. Which is my fault and really > sorry for that. > > We currently have 3 deliverables, > masakari: > repos: > - openstack/masakari > masakari-monitors: > repos: > - openstack/masakari-monitors > python-masakariclient: > repos: > - openstack/python-masakariclient > > I would like to propose those patches to openstack/releases. > "Now" is a bad timing for that? > Your advice is greatly appreciated. > > [1] > http://git.openstack.org/cgit/openstack/releases/tree/deliverables/queens > > --- Regards, > Sampath Hi Sampath, Since these have missed all milestones for this cycle, we do not want to add it now. Especially as we are past all freeze dates. I think at this point the best option is to release this as independent (under the deliverables/_independent directory in the openstack/releases repo) and we can get it on the normal cycle-with-milestones or cycle-with-intermediary sequence in Rocky. Make sense? Any questions, feel free to drop in #openstack-release. 
Or we can continue here of course. Thanks, Sean From sam47priya at gmail.com Wed Jan 31 15:48:12 2018 From: sam47priya at gmail.com (Sam P) Date: Thu, 1 Feb 2018 00:48:12 +0900 Subject: [openstack-dev] [release][masakari] Need help on masakari deliverables In-Reply-To: <20180131153057.GA14315@sm-xps> References: <20180131153057.GA14315@sm-xps> Message-ID: Hi Sean, Thanks. That make sense. I will proceed masakari release as independent for Queens. And will push those patches to Rocky. --- Regards, Sampath On Thu, Feb 1, 2018 at 12:30 AM, Sean McGinnis wrote: > On Thu, Feb 01, 2018 at 12:19:16AM +0900, Sam P wrote: > > Hi release team, > > > > I did not realize at the time however, I noticed that masakari does not > > have any > > of it deliverables are set for Queens in [1]. Which is my fault and > really > > sorry for that. > > > > We currently have 3 deliverables, > > masakari: > > repos: > > - openstack/masakari > > masakari-monitors: > > repos: > > - openstack/masakari-monitors > > python-masakariclient: > > repos: > > - openstack/python-masakariclient > > > > I would like to propose those patches to openstack/releases. > > "Now" is a bad timing for that? > > Your advice is greatly appreciated. > > > > [1] > > http://git.openstack.org/cgit/openstack/releases/tree/ > deliverables/queens > > > > --- Regards, > > Sampath > > > Hi Sampath, > > Since these have missed all milestones for this cycle, we do not want to > add it > now. Especially as we are past all freeze dates. > > I think at this point the best option is to release this as independent > (under > the deliverables/_independent directory in the openstack/releases repo) > and we > can get it on the normal cycle-with-milestones or cycle-with-intermediary > sequence in Rocky. > > Make sense? > > Any questions, feel free to drop in #openstack-release. Or we can continue > here > of course. > > Thanks, > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Wed Jan 31 15:54:17 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 31 Jan 2018 09:54:17 -0600 Subject: [openstack-dev] [sdk][ptl] PTL Candidacy for Rocky Message-ID: Hi everybody! I'd like to run for PTL of OpenStackSDK again This last cycle was pretty exciting. We merged the shade and openstacksdk projects into a single team. We shifted os-client-config to that team as well. We merged the code from shade and os-client-config into openstacksdk, and then renamed the team. It wasn't just about merging projects though. We got some rework done to base the Proxy classes on keystoneauth Adapters providing direct passthrough REST availability for services. We finished the Resource2/Proxy2 transition. We updated pagination to work for all of the OpenStack services - and in the process uncovered a potential cross-project goal. And we tied services in openstacksdk to services listed in the Service Types Authority. Moving forward, there's tons to do. First and foremost we need to finish integrating the shade code into the sdk codebase. The sdk layer and the shade layer are currently friendly but separate, and that doesn't make sense long term. 
To do this, we need to figure out a plan for rationalizing the return types - shade returns munch.Munch objects which are dicts that support object attribute access. The sdk returns Resource objects. There are also multiple places where the logic in the shade layer can and should move into the sdk's Proxy layer. Good examples of this are swift object uploads and downloads and glance image uploads. I'd like to move masakari and tricircle's out-of-tree SDK classes in tree. shade's caching and rate-limiting layer needs to be shifted to be able to apply to both levels, and the special caching for servers, ports and floating-ips needs to be replaced with the general system. For us to do that though, the general system needs to be improved to handle nodepool's batched rate-limited use case as well. We need to remove the guts of both shade and os-client-config in their repos and turn them into backwards compatibility shims. We need to work with the python-openstackclient team to finish getting the current sdk usage updated to the non-Profile-based flow, and to make sure we're providing what they need to start replacing uses of python-*client with uses of sdk. I know the folks with the shade team background are going to LOVE this one, but we need to migrate existing sdk tests that mock sdk objects to requests-mock. (We also missed a few shade tests that still mock out methods on OpenStackCloud that need to get transitioned) Finally - we need to get a 1.0 out this cycle. We're very close - the main sticking point now is the shade/os-client-config layer, and specifically cleaning up a few pieces of shade's API that weren't great but which we couldn't change due to API contracts. I'm sure there will be more things to do too. There always are. In any case, I'd love to keep helping to pushing these rocks uphill. Thanks! Monty From jaypipes at gmail.com Wed Jan 31 15:55:51 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 31 Jan 2018 10:55:51 -0500 Subject: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator In-Reply-To: References: Message-ID: <7c7191c1-6bb4-66e9-fbdf-699a9841a2bb@gmail.com> On 01/31/2018 01:49 AM, Goutham Pratapa wrote: > *Kingbird (The Multi Region orchestrator):* > > We are proud to announce kingbird is not only a centralized quota and > resource-manager but also a  Multi-region Orchestrator. > > *Use-cases covered: > > *- Admin can synchronize and periodically balance quotas across regions > and can have a global view of quotas of all the tenants across regions. > - A user can sync a resource or a group of resources from one region to > other in a single go What precisely do you mean by "resources" above? Also, by "syncing", do you mean "replicating"? The reason I ask is because in the case of, say, VM "resources", you can't "sync" a VM across regions. You can replicate its bootable image, but you can't "sync" a VM's state across multiple OpenStack deployments. >  A user can sync multiple key-pairs, images, and flavors from one > region to other, ( Flavor can be synced only by admin) > > - A user must have complete tempest test-coverage for all the > scenarios/services rendered by kingbird. > > - Horizon plugin so that user can access/view global limits. > > * Our Road-map:* > > -- Automation scripts for kingbird in >     -ansible, >     -salt >     -puppet. > -- Add SSL support to kingbird > -- Resource management in Kingbird-dashboard. I'm curious what you mean by "resource management". Could you elaborate a bit on this? 
Thanks, -jay > -- Kingbird in a docker > -- Add Kingbird into Kolla. > > We are looking out for*_contributors and ideas_* which can enhance > Kingbird and make kingbird a one-stop solution for all multi-region problems > > > > *_Stable Branches :_ > * > * > Kingbird-server: > https://github.com/openstack/kingbird/tree/stable/queens > > * > *Python-Kingbird-client (0.2.1): > https://github.com/openstack/python-kingbirdclient/tree/0.2.1 > > * > > I would like to Thank all the people who have helped us in achieving > this milestone and guided us all throughout this Journey :) > > Thanks > Goutham Pratapa > PTL > OpenStack-Kingbird. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Wed Jan 31 15:57:18 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 31 Jan 2018 09:57:18 -0600 Subject: [openstack-dev] [relmgt][ptl] Release Management PTL candidacy for Rocky Message-ID: <20180131155718.GA16613@sm-xps> Greetings! I would like to submit my name to continue as the release management PTL for the Rocky release. Since starting with Queens I have learned a lot about our release process and all the great automation tooling we have to support that. I've also had to learn a fair bit about zuul and ansible along the way. It has been an interesting and active release cycle, and I am excited to continue learning and applying those lessons to keep things moving and looking for any ways I can add to the already great body of work we have in our release automation. Thank you for your consideration. Sean McGinnis (smcginnis) From amotoki at gmail.com Wed Jan 31 16:25:59 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 1 Feb 2018 01:25:59 +0900 Subject: [openstack-dev] [Openstack-operators] LTS pragmatic example In-Reply-To: <54c9afed-129b-914c-32f4-451dbdf41279@switch.ch> References: <20171114154658.mpnwfsn7uzmis2l4@redhat.com> <1510675891-sup-8327@lrrr.local> <4ef3b8ff-5374-f440-5595-79e1d33ce3bb@switch.ch> <332af66b-320f-bda4-495f-870dd9e10349@gmail.com> <54c9afed-129b-914c-32f4-451dbdf41279@switch.ch> Message-ID: 2018-01-31 17:51 GMT+09:00 Saverio Proto : > Hello all, > > I am again proposing a change due to operations experience. I am > proposing a clean and simple cherry-pick to Ocata. > > "it depends" works pretty bad as policy for accepting patches. > > Now I really dont understand what is the issue with the Stable Policy > and this patch: > > https://review.openstack.org/#/c/539439/ > > This is a UX problem. Horizon is giving the wrong information to the user. > > I got this answer: > Ocata is the second phase of stable branches [1]. Only critical bugfixes > and security patches are acceptable. I don't think it belongs to the > category. > It is really understandable. I am the person who put -1 on the horizon backport raised here. In this specific case, the proposed backport does not import a new confusion and it will provide a correct error message for a specific case, so when I put -1 I struggled whether I put +2 or -1. It is half-and-half. I am okay to remove my -1. On the other hand, it is important to share some common criteria among the stable reviewers. different reviewers can apply different criteria. 
it is not productive to define a project specific policy which is a bit different from the common stable branch policy. I would like to see some updated stable policy in near future as output of LTS discussions. Akihiro > But merging a patch that changes a log file in Nova back to Newton was > OKAY few weeks ago. > > I will not be able to be in person at the PTG, but please talk about > this. People just give up upstreaming stuff like this. > > thank you > > Saverio > > > On 15.11.17 03:37, Matt Riedemann wrote: >> On 11/14/2017 10:58 AM, Davanum Srinivas wrote: >>> Let's focus our energy on the etherpad please >>> >>> https://etherpad.openstack.org/p/LTS-proposal >>> >>> On Wed, Nov 15, 2017 at 3:35 AM, Davanum Srinivas >>> wrote: >>>> Saverio, >>>> >>>> Please see this : >>>> https://docs.openstack.org/project-team-guide/stable-branches.html for >>>> current policies. >>>> >>>> On Wed, Nov 15, 2017 at 3:33 AM, Saverio Proto >>>> wrote: >>>>>> Which stable policy does that patch violate? It's clearly a bug >>>>>> because the wrong information is being logged. I suppose it goes >>>>>> against the string freeze rule? Except that we've stopped translating >>>>>> log messages so maybe we don't need to worry about that in this case, >>>>>> since it isn't an exception. >>>>> >>>>> Well, I also would like to understand more about stable policy >>>>> violations. >>>>> When I proposed such patches in the past for the release N-2 I have >>>>> always got the answer: it is not a security issue so it will not be >>>>> merged. >>>>> >>>>> This is a good example of how things have been working so far: >>>>> >>>>> https://review.openstack.org/#/q/677eb1c4160c08cfce2900495741f0ea15f566fa >>>>> >>>>> >>>>> This cinder patch was merged in master. It was then merged in Mitaka. >>>>> But it was not merged in Liberty just because "only security fixes" >>>>> were >>>>> allowed at that point. >>>>> >>>>> You can read that in the comments: >>>>> https://review.openstack.org/#/c/306610/ >>>>> >>>>> Is this kind of things going to change after the discussion in Sydney ? >>>>> The discussion is not enough ? what we need to get done then ? >>>>> >>>>> thank you >>>>> >>>>> Saverio >>>>> >>>>> >>>>> __________________________________________________________________________ >>>>> >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>>> >>>> -- >>>> Davanum Srinivas :: https://twitter.com/dims >>> >>> >>> >> >> Heh, I'm reading this thread after approving all of those patches. >> >> The answer as to whether it's appropriate or not, is "it depends". >> Depends on the patch, depends on the age of the branch, etc. >> >> In this case, newton is in phase 3 so normally it's only security or >> critical fixes allowed, but in this case it's so trivial and so >> obviously wrong that I was OK with approving it just to get it in before >> we end of life the branch. >> >> So, it depends. And because it depends, that's also why we don't >> automate the backport of every fix made on master. Because guess what, >> we also backport "fixes" that introduce regressions, and when you do >> that to n-1 (Pike at this point) then you still have a lot of time to >> detect that and fix it upstream, but regressing things on the oldest >> branch leaves very little time to (1) have it detected and (2) get it >> fixed before end of life. 
>> > > > -- > SWITCH > Saverio Proto, Peta Solutions > Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland > phone +41 44 268 15 15, direct +41 44 268 1573 > saverio.proto at switch.ch, http://www.switch.ch > > http://www.switch.ch/stories > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Wed Jan 31 16:32:52 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 31 Jan 2018 10:32:52 -0600 Subject: [openstack-dev] [nova] Requesting eyes on fix for bug 1686703 In-Reply-To: References: Message-ID: <3896f266-0e7f-a1b0-68f2-06896e5cae72@gmail.com> On 1/31/2018 7:30 AM, Matthew Booth wrote: > Could I please have some eyes on this bugfix: > https://review.openstack.org/#/c/462521/ . I addressed an issue raised > in August 2017, and it's had no negative feedback since. It would be > good to get this one finished. First, I'd like to avoid setting a precedent of asking for reviews in the ML. So please don't do this. Second, this is a latent issue, and we're less than two weeks to RC1, so I'd prefer that we hold this off until Rocky opens up in case it introduces any regressions so we at least have time to deal with those when we're not in stop-ship mode. -- Thanks, Matt From prometheanfire at gentoo.org Wed Jan 31 16:44:12 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 31 Jan 2018 10:44:12 -0600 Subject: [openstack-dev] [requirement][cyborg]FFE - pyspdk requirement dependency In-Reply-To: References: <53EADDD3-8A86-445F-A5D9-F5401ABB5309@163.com> <25D76CB6-7EDE-491E-ADAB-6FD4B5B56DAC@163.com> <1517337778-sup-8267@lrrr.local> Message-ID: <20180131164412.sdb55u6cbaeedput@gentoo.org> On 18-01-31 10:15:12, Thierry Carrez wrote: > Doug Hellmann wrote: > > Excerpts from We We's message of 2018-01-31 01:54:04 +0800: > >>> Hi, > >> > >>> I have modified and resubmitted pyspdk to the pypi. Please check it. > >> > >>> Thx, > >> > >>> Helloway > > > > Is there a public source repository for the library somewhere? > > Looks like it lives at: > https://github.com/hellowaywewe/py-spdk > > Since the primary objections are not really due to the FFE state but > more due to the nature of the library, this should probably first be > proposed as a change to openstack/requirements and discussed there... > > When it's ready but blocked by FF we can return to a ML thread to > discuss it... > Thanks for the link, and yes should probably just be discussed in a review. Missing a setup.py/cfg and license and testing all make it seem like a no-go though. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From lbragstad at gmail.com Wed Jan 31 16:52:19 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 31 Jan 2018 10:52:19 -0600 Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG? In-Reply-To: References: Message-ID: <02f5d51f-ce2d-5876-f315-847f2c819aa4@gmail.com> On 01/30/2018 09:33 AM, Colleen Murphy wrote: > At the last PTG we had some time on Monday and Tuesday for > cross-project discussions related to baremetal and VM management. We > don't currently have that on the schedule for this PTG. 
There is still > some free time available that we can ask for[1]. Should we try to > schedule some time for this? > > From a keystone perspective, some things we'd like to talk about with > the BM/VM teams are: > > - Unified limits[2]: we now have a basic REST API for registering > limits in keystone. Next steps are building out libraries that can > consume this API and calculate quota usage and limit allocation, and > developing models for quotas in project hierarchies. Input from other > projects is essential here. > - RBAC: we've introduced "system scope"[3] to fix the admin-ness > problem, and we'd like to guide other projects through the migration. > - Application credentials[4]: this main part of this work is largely > done, next steps are implementing better access control for it, which > is largely just a keystone team problem but we could also use this > time for feedback on the implementation so far So, I'm probably biased, but a huge +1 for me. I think the last baremetal/vm session in Denver was really productive and led to most of what we accomplished this release. Who else do we need to get involved in order to get this scheduled? Do we need some more projects to show up (e.g. cinder, nova, neutron)? Tacking on the RBAC stuff, it would be cool to sit down with others and talk about basic roles [0], since we have everything to make that possible. I suppose we could start collecting topics in an etherpad and elaborating on them there. [0] https://review.openstack.org/#/c/523973/ > There's likely some non-keystone-related things that might be at home > in a dedicated BM/VM room too. Do we want to have a dedicated day or > two for these projects? Or perhaps not dedicated days, but > planned-in-advance meeting time? Or should we wait and schedule it > ad-hoc if we feel like we need it? > > Colleen > > [1] https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true > [2] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html > [3] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html > [4] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From rosmaita.fossdev at gmail.com Wed Jan 31 17:07:30 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 31 Jan 2018 12:07:30 -0500 Subject: [openstack-dev] [docs] how to update published pike docs Message-ID: Hello Docs people, The glance install docs currently published on docs.o.o don't include a correction we merged a while back, and I get a few bug reports filed on this particular problem every week. (It's great that people are willing to file corrections, but the duplicates are really piling up!) 
Anyway, here's what's happening:

published docs:
https://docs.openstack.org/glance/pike/install/install-debian.html
(can't see the create db stuff -- you may need to look at the next
link first to see what I mean)

docs in github:
https://github.com/openstack/glance/blob/stable/pike/doc/source/install/install-debian.rst
(can see the create db stuff)

glance stable/pike in the repo:
http://git.openstack.org/cgit/openstack/glance?h=stable%2Fpike
You can see stable/pike head is at the fix.

What do we need to do to get docs.o.o to show the fixed docs? Is
it something we need to do on the glance side, or does it have to be
fixed somewhere else?

thanks,
brian

From zhang.lei.fly at gmail.com  Wed Jan 31 17:07:10 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Thu, 1 Feb 2018 01:07:10 +0800
Subject: [openstack-dev] [kolla][ptl] PTL Candidacy for Rocky
Message-ID: 

Hi everyone,

I am excited to announce my candidacy for the Rocky cycle as Kolla PTL.

Kolla is a fantastic project, which simplifies the lives of operators when
managing OpenStack. Now, Kolla can containerize and deliver almost all of
the OpenStack Big Tent projects. More and more OpenStack environments are
deployed by Kolla in the real world, and lots of developers and operators
have joined Kolla, too.

I have been involved in OpenStack since the Folsom cycle and I have been a
core reviewer on Kolla since the Liberty cycle. I have contributed lots of
features and solved lots of critical issues in the Kolla projects since
then[0][1]. During the Ocata, Pike and Queens cycles I also served as the
Kolla release liaison. I have also helped new developers to contribute to
the project and operators to solve issues.

For the Rocky cycle, I would like to focus on the following objectives:

* Focus on the needs of the Kolla team.
* Continue to encourage diversity in our community.
* Support check and diff mode in kolla-ansible.
* Implement zero downtime for OpenStack services.
* Continue to speed up the kolla-ansible deploy and reconfigure.
* Make CI easier to understand and debug.
* Push release-tagged images to the hub.docker.com site, which are more
  stable than branch tags.
* Deliver 1.0.0 of kolla-kubernetes.

Finally, I know it is important that PTL time is spent on non-technical
problem-solving such as mentoring potential core reviewers, tracking the
project's progress, interacting with other OpenStack projects and the many
other activities a PTL undertakes. I'd like to work this cycle on scaling
those activities across the core reviewer team.

Thank you for reading this and for considering this candidacy. As a
community I am certain we can make Kolla better and better.

Sincerely,
Jeffrey4l

[0] http://stackalytics.com/?release=all&module=kolla&metric=commits
[1] http://stackalytics.com/?release=all&module=kolla&metric=marks

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Wed Jan 31 17:11:26 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 31 Jan 2018 17:11:26 +0000
Subject: [openstack-dev] [requirement][cyborg]FFE - pyspdk requirement
	dependency
In-Reply-To: <20180131164412.sdb55u6cbaeedput@gentoo.org>
References: <53EADDD3-8A86-445F-A5D9-F5401ABB5309@163.com>
	<25D76CB6-7EDE-491E-ADAB-6FD4B5B56DAC@163.com>
	<1517337778-sup-8267@lrrr.local>
	<20180131164412.sdb55u6cbaeedput@gentoo.org>
Message-ID: <20180131171125.baum4gvnp4xdm2w5@yuggoth.org>

On 2018-01-31 10:44:12 -0600 (-0600), Matthew Thode wrote:
[...]
> Thanks for the link, and yes should probably just be discussed in a > review. Missing a setup.py/cfg and license and testing all make it seem > like a no-go though. The 0.0.2 sdist on PyPI does contain those, leading me to suspect that either the GH repo is non-canonical or its Python packaging files are maintained independent of revision control. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Wed Jan 31 17:15:27 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 31 Jan 2018 11:15:27 -0600 Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG? In-Reply-To: References: Message-ID: <40e91c82-e6c4-65bd-f9b0-3b827c7629e6@gmail.com> On 1/30/2018 9:33 AM, Colleen Murphy wrote: > At the last PTG we had some time on Monday and Tuesday for > cross-project discussions related to baremetal and VM management. We > don't currently have that on the schedule for this PTG. There is still > some free time available that we can ask for[1]. Should we try to > schedule some time for this? > > From a keystone perspective, some things we'd like to talk about with > the BM/VM teams are: > > - Unified limits[2]: we now have a basic REST API for registering > limits in keystone. Next steps are building out libraries that can > consume this API and calculate quota usage and limit allocation, and > developing models for quotas in project hierarchies. Input from other > projects is essential here. > - RBAC: we've introduced "system scope"[3] to fix the admin-ness > problem, and we'd like to guide other projects through the migration. > - Application credentials[4]: this main part of this work is largely > done, next steps are implementing better access control for it, which > is largely just a keystone team problem but we could also use this > time for feedback on the implementation so far > > There's likely some non-keystone-related things that might be at home > in a dedicated BM/VM room too. Do we want to have a dedicated day or > two for these projects? Or perhaps not dedicated days, but > planned-in-advance meeting time? Or should we wait and schedule it > ad-hoc if we feel like we need it? > > Colleen > > [1] https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true > [2] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html > [3] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html > [4] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > These all seem like good topics for big cross-project issues. I've never liked the "BM/VM" platform naming thing, it seems to imply that the only things one needs to care about for these discussions is if they work on or use nova and ironic, and that's generally not the case. So if you do have a session about this really cross-project platform-specific stuff, can we at least not call it "BM/VM"? 
Plus, "BM" always makes me think of something I'd rather not see in a room
with other people.

-- 

Thanks,

Matt

From lbragstad at gmail.com  Wed Jan 31 17:16:10 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Wed, 31 Jan 2018 11:16:10 -0600
Subject: [openstack-dev] [barbican] [glance] [ironic] [neutron] [tacker]
	[tc] policy in code goal
Message-ID: <2c1de597-36d2-aed5-9221-6b1adce0b691@gmail.com>

Hey folks,

The tracking tool for the policy-and-docs-in-code goal for Queens [0]
lists a couple of projects remaining for the goal [1]. I wanted to start a
discussion with said projects to see how we want to go about the work in
the future; we have a couple of options.

I can update the goal document to say the work is still
underway for those projects. We can also set aside time at the PTG to
finish up that work if people would like more help. This might be
something we can leverage the baremetal/vm room for if we get enough
interest [2].

I want to get the discussion rolling if there is something we need to
coordinate for the PTG. Thoughts?

Thanks,

Lance

[0] https://governance.openstack.org/tc/goals/queens/policy-in-code.html
[1] https://www.lbragstad.com/policy-burndown/
[2]
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From mriedemos at gmail.com  Wed Jan 31 17:18:09 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 31 Jan 2018 11:18:09 -0600
Subject: [openstack-dev] [nova][ironic] Tagging newton EOL
In-Reply-To: <20180131041729.GC23143@thor.bakeyournoodle.com>
References: <20180131041729.GC23143@thor.bakeyournoodle.com>
Message-ID: <1ce28191-1e8c-b6e4-5ab8-08238c160614@gmail.com>

On 1/30/2018 10:17 PM, Tony Breeds wrote:
> Hi All,
>     When we tagged newton EOL in October there were in-flight reviews
> for nova and ironic that needed to land before we could EOL them. That
> work completed but I dropped the ball. So can we tag those last 2
> repos?
>
> As in October a member of the infra team needs to do this *or* I can
> be added to Project Bootstrappers[1] for long enough to do this.
>
> # EOL repos belonging to ironic
> eol_branch.sh -- stable/newton newton-eol openstack/ironic
> # EOL repos belonging to nova
> eol_branch.sh -- stable/newton newton-eol openstack/nova
>
> Yours Tony.
>
> [1] https://review.openstack.org/#/admin/groups/26,members
>

Yes please for the love of God do this. I also fully realize that we've
had to wait this long because I kept breaking things, then fixing and
backporting them, and then realizing those fixes had broken something
else and required more backports, yay!

-- 

Thanks,

Matt

From dtantsur at redhat.com  Wed Jan 31 17:20:23 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Wed, 31 Jan 2018 18:20:23 +0100
Subject: [openstack-dev] [barbican] [glance] [ironic] [neutron] [tacker]
	[tc] policy in code goal
In-Reply-To: <2c1de597-36d2-aed5-9221-6b1adce0b691@gmail.com>
References: <2c1de597-36d2-aed5-9221-6b1adce0b691@gmail.com>
Message-ID: 

Hi!

On 01/31/2018 06:16 PM, Lance Bragstad wrote:
> Hey folks,
>
> The tracking tool for the policy-and-docs-in-code goal for Queens [0]
> lists a couple projects remaining for the goal [1].  I wanted to start a
> discussion with said projects to see how we want to go about the work in
> the future, we have a couple of options.

I was under the assumption that ironic had finished this goal.
I'll wait for pas-ha to weigh in, but I was not planning any activities for it. > > I can update the document the goal document saying the work is still > underway for those projects. We can also set aside time at the PTG to > finish up that work if people would like more help. This might be > something we can leverage the baremetal/vm room for if we get enough > interest [2]. Mmm, the scope of the bm/vm room is already unclear to me, this may add to the confusion. Maybe just a "Goals workroom"? > > I want to get the discussion rolling if there is something we need to > coordinate for the PTG. Thoughts? > > Thanks, > > Lance > > > [0] https://governance.openstack.org/tc/goals/queens/policy-in-code.html > [1] https://www.lbragstad.com/policy-burndown/ > [2] > http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fungi at yuggoth.org Wed Jan 31 17:22:19 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 31 Jan 2018 17:22:19 +0000 Subject: [openstack-dev] [docs] how to update published pike docs In-Reply-To: References: Message-ID: <20180131172219.arkkm6j35cm3n6vx@yuggoth.org> On 2018-01-31 12:07:30 -0500 (-0500), Brian Rosmaita wrote: [...] > What do we need to do to get the docs.o.o to show the fixed docs? Is > it something we need to do on the glance side, or does it have to be > fixed somewhere else? Looks like that commit merged toward the end of September, so identifying why the build failed (or never ran) will be tough. I've reenqueued the tip of your stable/pike branch into the post pipeline, but because that pipeline runs at a low priority it may still be a few hours before that completes (commit 06af2eb in the status display). Once it does, we should hopefully at least have logs detailing the problem though if all goes well you'll have properly updated documentation there instead. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dtantsur at redhat.com Wed Jan 31 17:22:52 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 31 Jan 2018 18:22:52 +0100 Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG? In-Reply-To: <40e91c82-e6c4-65bd-f9b0-3b827c7629e6@gmail.com> References: <40e91c82-e6c4-65bd-f9b0-3b827c7629e6@gmail.com> Message-ID: <0c0807a5-86e7-91cd-39cc-fc0129052d9c@redhat.com> On 01/31/2018 06:15 PM, Matt Riedemann wrote: > On 1/30/2018 9:33 AM, Colleen Murphy wrote: >> At the last PTG we had some time on Monday and Tuesday for >> cross-project discussions related to baremetal and VM management. We >> don't currently have that on the schedule for this PTG. There is still >> some free time available that we can ask for[1]. Should we try to >> schedule some time for this? >> >>  From a keystone perspective, some things we'd like to talk about with >> the BM/VM teams are: >> >> - Unified limits[2]: we now have a basic REST API for registering >> limits in keystone. Next steps are building out libraries that can >> consume this API and calculate quota usage and limit allocation, and >> developing models for quotas in project hierarchies. 
Input from other >> projects is essential here. >> - RBAC: we've introduced "system scope"[3] to fix the admin-ness >> problem, and we'd like to guide other projects through the migration. >> - Application credentials[4]: this main part of this work is largely >> done, next steps are implementing better access control for it, which >> is largely just a keystone team problem but we could also use this >> time for feedback on the implementation so far >> >> There's likely some non-keystone-related things that might be at home >> in a dedicated BM/VM room too. Do we want to have a dedicated day or >> two for these projects? Or perhaps not dedicated days, but >> planned-in-advance meeting time? Or should we wait and schedule it >> ad-hoc if we feel like we need it? >> >> Colleen >> >> [1] >> https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true >> >> [2] >> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html >> >> [3] >> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html >> >> [4] >> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > These all seem like good topics for big cross-project issues. > > I've never liked the "BM/VM" platform naming thing, it seems to imply that the > only things one needs to care about for these discussions is if they work on or > use nova and ironic, and that's generally not the case. ++ can we please rename it? I think people (myself included) will expect specifically something related to bare metal instances co-existing with virtual ones (e.g. scheduling or networking concerns). Which is also a great topic, but it does not seem to be present on the list. > > So if you do have a session about this really cross-project platform-specific > stuff, can we at least not call it "BM/VM"? Plus, "BM" always makes me think of > something I'd rather not see in a room with other people. > From lbragstad at gmail.com Wed Jan 31 17:23:46 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 31 Jan 2018 11:23:46 -0600 Subject: [openstack-dev] [barbican] [glance] [ironic] [neutron] [tacker] [tc] policy in code goal In-Reply-To: References: <2c1de597-36d2-aed5-9221-6b1adce0b691@gmail.com> Message-ID: <703a362c-fabe-0c82-00ad-ac03e90ccded@gmail.com> On 01/31/2018 11:20 AM, Dmitry Tantsur wrote: > Hi! > > On 01/31/2018 06:16 PM, Lance Bragstad wrote: >> Hey folks, >> >> The tracking tool for the policy-and-docs-in-code goal for Queens [0] >> lists a couple projects remaining for the goal [1].  I wanted to start a >> discussion with said projects to see how we want to go about the work in >> the future, we have a couple of options. > > I was under assumption that ironic has finished this goal. I'll wait > for pas-ha to weigh in, but I was not planning any activities for it. It looks like there is still an unmerged patch tagged with the policy-and-docs-in-code topic [0]. 
[0]
https://review.openstack.org/#/q/is:open+topic:policy-and-docs-in-code+project:openstack/ironic

>
>>
>> I can update the document the goal document saying the work is still
>> underway for those projects. We can also set aside time at the PTG to
>> finish up that work if people would like more help. This might be
>> something we can leverage the baremetal/vm room for if we get enough
>> interest [2].
>
> Mmm, the scope of the bm/vm room is already unclear to me, this may
> add to the confusion. Maybe just a "Goals workroom"?
>
>>
>> I want to get the discussion rolling if there is something we need to
>> coordinate for the PTG. Thoughts?
>>
>> Thanks,
>>
>> Lance
>>
>>
>> [0] https://governance.openstack.org/tc/goals/queens/policy-in-code.html
>> [1] https://www.lbragstad.com/policy-burndown/
>> [2]
>> http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html
>>
>>
>>
>>
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From aj at suse.com  Wed Jan 31 17:23:59 2018
From: aj at suse.com (Andreas Jaeger)
Date: Wed, 31 Jan 2018 18:23:59 +0100
Subject: [openstack-dev] [docs] [infra] how to update published pike docs
In-Reply-To: 
References: 
Message-ID: 

On 2018-01-31 18:07, Brian Rosmaita wrote:
> Hello Docs people,
>
> The glance install docs currently published on docs.o.o don't include
> a correction we merged a while back, and I get a few bug reports filed
> on this particular problem every week. (It's great that people are
> willing to file corrections, but the duplicates are really piling up!)
> Anyway, here's what's happening:
>
> published docs:
> https://docs.openstack.org/glance/pike/install/install-debian.html
> (can't see the create db stuff -- you may need to look at the next
> link first to see what I mean)
>
> docs in github:
> https://github.com/openstack/glance/blob/stable/pike/doc/source/install/install-debian.rst
> (can see the create db stuff)
>
> glance stable/pike in the repo:
> http://git.openstack.org/cgit/openstack/glance?h=stable%2Fpike
> You can see stable/pike head is at the fix.
>
> What do we need to do to get the docs.o.o to show the fixed docs? Is
> it something we need to do on the glance side, or does it have to be
> fixed somewhere else?

please check the post job for this, I assume it never ran or failed. If
it did not run, just push a new change out. If it failed, let's fix it
first.

Btw. to get logs, see
https://docs.openstack.org/infra/manual/developers.html#post-processing

And the docs team has nothing to do here, this is pure infra - so I
updated the tags.
If you have further questions, let's discuss on #openstack-infra, Andreas > thanks, > brian > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From gr at ham.ie Wed Jan 31 17:46:41 2018 From: gr at ham.ie (Graham Hayes) Date: Wed, 31 Jan 2018 17:46:41 +0000 Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG? In-Reply-To: <0c0807a5-86e7-91cd-39cc-fc0129052d9c@redhat.com> References: <40e91c82-e6c4-65bd-f9b0-3b827c7629e6@gmail.com> <0c0807a5-86e7-91cd-39cc-fc0129052d9c@redhat.com> Message-ID: <5f312ea5-005c-4707-6fbc-2de828a0620d@ham.ie> On 31/01/18 17:22, Dmitry Tantsur wrote: > On 01/31/2018 06:15 PM, Matt Riedemann wrote: >> On 1/30/2018 9:33 AM, Colleen Murphy wrote: >>> At the last PTG we had some time on Monday and Tuesday for >>> cross-project discussions related to baremetal and VM management. We >>> don't currently have that on the schedule for this PTG. There is still >>> some free time available that we can ask for[1]. Should we try to >>> schedule some time for this? >>> >>>  From a keystone perspective, some things we'd like to talk about with >>> the BM/VM teams are: >>> >>> - Unified limits[2]: we now have a basic REST API for registering >>> limits in keystone. Next steps are building out libraries that can >>> consume this API and calculate quota usage and limit allocation, and >>> developing models for quotas in project hierarchies. Input from other >>> projects is essential here. >>> - RBAC: we've introduced "system scope"[3] to fix the admin-ness >>> problem, and we'd like to guide other projects through the migration. >>> - Application credentials[4]: this main part of this work is largely >>> done, next steps are implementing better access control for it, which >>> is largely just a keystone team problem but we could also use this >>> time for feedback on the implementation so far >>> >>> There's likely some non-keystone-related things that might be at home >>> in a dedicated BM/VM room too. Do we want to have a dedicated day or >>> two for these projects? Or perhaps not dedicated days, but >>> planned-in-advance meeting time? Or should we wait and schedule it >>> ad-hoc if we feel like we need it? 
>>>
>>> Colleen
>>>
>>> [1]
>>> https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true
>>>
>>> [2]
>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html
>>>
>>> [3]
>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
>>>
>>> [4]
>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> These all seem like good topics for big cross-project issues.
>>
>> I've never liked the "BM/VM" platform naming thing, it seems to imply
>> that the only things one needs to care about for these discussions is
>> if they work on or use nova and ironic, and that's generally not the
>> case.
>
> ++ can we please rename it? I think people (myself included) will expect
> specifically something related to bare metal instances co-existing with
> virtual ones (e.g. scheduling or networking concerns). Which is also a
> great topic, but it does not seem to be present on the list.
>

Yeah - both of these topics apply to all projects. If we could do
scheduled time for both of these, and then separate time for Ironic /
Nova issues it would be good.

>>
>> So if you do have a session about this really cross-project
>> platform-specific stuff, can we at least not call it "BM/VM"? Plus,
>> "BM" always makes me think of something I'd rather not see in a room
>> with other people.
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: OpenPGP digital signature
URL: 

From dtantsur at redhat.com  Wed Jan 31 17:48:07 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Wed, 31 Jan 2018 18:48:07 +0100
Subject: [openstack-dev] [barbican] [glance] [ironic] [neutron] [tacker]
	[tc] policy in code goal
In-Reply-To: <703a362c-fabe-0c82-00ad-ac03e90ccded@gmail.com>
References: <2c1de597-36d2-aed5-9221-6b1adce0b691@gmail.com>
	<703a362c-fabe-0c82-00ad-ac03e90ccded@gmail.com>
Message-ID: 

On 01/31/2018 06:23 PM, Lance Bragstad wrote:
>
>
> On 01/31/2018 11:20 AM, Dmitry Tantsur wrote:
>> Hi!
>>
>> On 01/31/2018 06:16 PM, Lance Bragstad wrote:
>>> Hey folks,
>>>
>>> The tracking tool for the policy-and-docs-in-code goal for Queens [0]
>>> lists a couple projects remaining for the goal [1].  I wanted to start a
>>> discussion with said projects to see how we want to go about the work in
>>> the future, we have a couple of options.
>>
>> I was under assumption that ironic has finished this goal. I'll wait
>> for pas-ha to weigh in, but I was not planning any activities for it.
> It looks like there is still an unmerged patch tagged with the
> policy-and-docs-in-code topic [0].
>
> [0]
> https://review.openstack.org/#/q/is:open+topic:policy-and-docs-in-code+project:openstack/ironic

But is it required?
We've marked the goal as done already, and pas-ha is no longer working on it AFAIK. >> >>> >>> I can update the document the goal document saying the work is still >>> underway for those projects. We can also set aside time at the PTG to >>> finish up that work if people would like more help. This might be >>> something we can leverage the baremetal/vm room for if we get enough >>> interest [2]. >> >> Mmm, the scope of the bm/vm room is already unclear to me, this may >> add to the confusion. Maybe just a "Goals workroom"? >> >>> >>> I want to get the discussion rolling if there is something we need to >>> coordinate for the PTG. Thoughts? >>> >>> Thanks, >>> >>> Lance >>> >>> >>> [0] https://governance.openstack.org/tc/goals/queens/policy-in-code.html >>> [1] https://www.lbragstad.com/policy-burndown/ >>> [2] >>> http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html >>> >>> >>> >>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From johnsomor at gmail.com Wed Jan 31 17:50:21 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 31 Jan 2018 09:50:21 -0800 Subject: [openstack-dev] [neutron][lbaas][neutron-lbaas][octavia] Announcing the deprecation of neutron-lbaas and neutron-lbaas-dashboard Message-ID: Today we are announcing the start of the deprecation cycle for neutron-lbaas and neutron-lbaas-dashboard. As part of the neutron stadium evolution [1], neutron-lbaas was identified as a project that should spin out of neutron and become its own project. The specification detailing this process was approved [2] during the newton OpenStack release cycle. OpenStack load balancing no longer requires deep access into the neutron code base and database. All of the required networking capabilities are now available via stable APIs. This change de-couples the load balancing release versioning from the rest of the OpenStack deployment. Since Octavia uses stable APIs when interacting with other OpenStack services, you can run a different version of Octavia in relation to your OpenStack cloud deployment. Per OpenStack deprecation policy, both projects will continue to receive support and bug fixes during the deprecation cycle, but no new features will be added to either project. All future feature enhancements will now occur on the Octavia project(s) [3]. We are not announcing the end of the deprecation cycle at this time, but it will follow OpenStack policy of at least two release cycles prior to retirement. This means that the first release that these projects could be retired would be the “T” OpenStack release cycle. 
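To illustrate what the transition looks like for users (a rough sketch
only -- the subnet name here is a placeholder, and the Octavia
documentation below is the authoritative reference), creating a load
balancer moves from the deprecated neutron CLI to the openstack client
plugin provided by python-octaviaclient:

    # Deprecated (neutron-lbaas):
    neutron lbaas-loadbalancer-create --name lb1 private-subnet

    # Going forward (Octavia v2 API):
    openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet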
We have created a Frequently Asked Questions (FAQ) wiki page to help answer additional questions you may have about this process: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation For more information or if you have additional questions, please see the following resources: The FAQ: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation The Octavia documentation: https://docs.openstack.org/octavia/latest/ Reach out to us via IRC on the Freenode IRC network, channel #openstack-lbaas Weekly Meeting: 20:00 UTC on Wednesdays in #openstack-lbaas on the Freenode IRC network. Sending email to the OpenStack developer mailing list: openstack-dev [at] lists [dot] openstack [dot] org. Please prefix the subject with '[openstack-dev][Octavia]' Thank you for your support and patience during this transition, Michael Johnson Octavia PTL [1] http://specs.openstack.org/openstack/neutron-specs/specs/newton/neutron-stadium.html [2] http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html [3] https://governance.openstack.org/tc/reference/projects/octavia.html From pshchelokovskyy at mirantis.com Wed Jan 31 17:50:33 2018 From: pshchelokovskyy at mirantis.com (Pavlo Shchelokovskyy) Date: Wed, 31 Jan 2018 19:50:33 +0200 Subject: [openstack-dev] [barbican] [glance] [ironic] [neutron] [tacker] [tc] policy in code goal In-Reply-To: <703a362c-fabe-0c82-00ad-ac03e90ccded@gmail.com> References: <2c1de597-36d2-aed5-9221-6b1adce0b691@gmail.com> <703a362c-fabe-0c82-00ad-ac03e90ccded@gmail.com> Message-ID: Lance, that's a single patch renaming the sample policy file from .json to .yaml, so I do not think it is a real blocker. Besides we have another patch on review that deletes those files altogether (and which I like more and there was an ML thread resulting in a decision to indeed remove them). I'll ask the patch owner to abandon it. Cheers, On Wed, Jan 31, 2018 at 7:23 PM, Lance Bragstad wrote: > > > On 01/31/2018 11:20 AM, Dmitry Tantsur wrote: > > Hi! > > > > On 01/31/2018 06:16 PM, Lance Bragstad wrote: > >> Hey folks, > >> > >> The tracking tool for the policy-and-docs-in-code goal for Queens [0] > >> lists a couple projects remaining for the goal [1]. I wanted to start a > >> discussion with said projects to see how we want to go about the work in > >> the future, we have a couple of options. > > > > I was under assumption that ironic has finished this goal. I'll wait > > for pas-ha to weigh in, but I was not planning any activities for it. > It looks like there is still an unmerged patch tagged with the > policy-and-docs-in-code topic [0]. > > [0] > https://review.openstack.org/#/q/is:open+topic:policy-and- > docs-in-code+project:openstack/ironic > > > >> > >> I can update the document the goal document saying the work is still > >> underway for those projects. We can also set aside time at the PTG to > >> finish up that work if people would like more help. This might be > >> something we can leverage the baremetal/vm room for if we get enough > >> interest [2]. > > > > Mmm, the scope of the bm/vm room is already unclear to me, this may > > add to the confusion. Maybe just a "Goals workroom"? > > > >> > >> I want to get the discussion rolling if there is something we need to > >> coordinate for the PTG. Thoughts? 
>> Thanks,
>> Lance
>>
>>
>> [0] https://governance.openstack.org/tc/goals/queens/policy-in-
code.html
>> [1] https://www.lbragstad.com/policy-burndown/
>> [2]
>> http://lists.openstack.org/pipermail/openstack-dev/2018-
January/126743.html
>>
>>
>>
>>
>> ____________________________________________________________
______________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ____________________________________________________________
______________
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mgagne at calavera.ca  Wed Jan 31 17:54:19 2018
From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=)
Date: Wed, 31 Jan 2018 12:54:19 -0500
Subject: [openstack-dev] [barbican] [glance] [ironic] [neutron] [tacker]
	[tc] policy in code goal
In-Reply-To: <2c1de597-36d2-aed5-9221-6b1adce0b691@gmail.com>
References: <2c1de597-36d2-aed5-9221-6b1adce0b691@gmail.com>
Message-ID: 

On Wed, Jan 31, 2018 at 12:16 PM, Lance Bragstad  wrote:
> Hey folks,
>
> The tracking tool for the policy-and-docs-in-code goal for Queens [0]
> lists a couple projects remaining for the goal [1]. I wanted to start a
> discussion with said projects to see how we want to go about the work in
> the future, we have a couple of options.
>
> I can update the document the goal document saying the work is still
> underway for those projects. We can also set aside time at the PTG to
> finish up that work if people would like more help. This might be
> something we can leverage the baremetal/vm room for if we get enough
> interest [2].
>
> I want to get the discussion rolling if there is something we need to
> coordinate for the PTG. Thoughts?
>
> Thanks,
>
> Lance
>

As an operator, I can't wait for Glance and Neutron to complete this
goal. Policy in code is *very* useful when you do heavy work in
policy. Thanks to all working toward that goal.

--
Mathieu

From corvus at inaugust.com  Wed Jan 31 17:59:42 2018
From: corvus at inaugust.com (James E. Blair)
Date: Wed, 31 Jan 2018 09:59:42 -0800
Subject: [openstack-dev] [all][infra] Automatically generated Zuul changes
	(topic: zuulv3-projects)
Message-ID: <87shalgb8x.fsf@meyer.lemoncheese.net>

Hi,

Occasionally we will make changes to the Zuul configuration language.
Usually these changes will be backwards compatible, but whether they
are or not, we still want to move things forward. Because Zuul's
configuration is now spread across many repositories, it may take many
changes to do this.

I'm in the process of making one such change now. Zuul no longer
requires the project name in the "project:" stanza for in-repo
configuration. Removing it makes it easier to fork or rename a project.
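For example (illustrative only -- the repository and template names here
are placeholders, not taken from any real change), the proposed changes
have this general shape:

    # Before:
    - project:
        name: openstack/example-project
        templates:
          - publish-to-pypi

    # After:
    - project:
        templates:
          - publish-to-pypi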
I am using a script to create and upload these changes. Because changes
to Zuul's configuration use more resources, I, and the rest of the infra
team, are carefully monitoring this and pacing changes so as not to
overwhelm the system. This is a limitation we'd like to address in the
future, but have to live with for now.

So if you see such a change to your project (the topic will be
"zuulv3-projects"), please observe the following:

* Go ahead and approve it as soon as possible.

* Don't be strict about backported change ids. These changes are only
  to Zuul config files; the stable backport policy was not intended to
  apply to things like this.

* Don't create your own versions of these changes. My script will
  eventually upload changes to all affected project-branches. It's
  intentionally a slow process, and attempting to speed it up won't
  help. But if there's something wrong with the change I propose, feel
  free to push an update to correct it.

Thanks,

Jim

From lhinds at redhat.com  Wed Jan 31 18:03:33 2018
From: lhinds at redhat.com (Luke Hinds)
Date: Wed, 31 Jan 2018 18:03:33 +0000
Subject: [openstack-dev] [security] Security PTG Planning, x-project
	request for topics.
In-Reply-To: 
References: 
Message-ID: 

On Mon, Jan 29, 2018 at 2:29 PM, Adam Young  wrote:

> Bug 968696 and System Roles. Needs to be addressed across the Service
> catalog.
>

Thanks Adam, will add it to the list. I see it's been open since 2012!

>
> On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds  wrote:
>
>> Just a reminder as we have not had much uptake yet..
>>
>> Are there any projects (new and old) that would like to make use of the
>> security SIG, either for gaining another perspective on security challenges
>> / blueprints etc., or for help with gaining some cross-project
>> collaboration?
>>
>> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds  wrote:
>>
>>> Hello All,
>>>
>>> I am seeking topics for the PTG from all projects, as this will be where
>>> we try out our new form of being a SIG.
>>>
>>> For this PTG, we hope to facilitate more cross-project collaboration
>>> topics now that we are a SIG, so if your project has a security need /
>>> problem / proposal then please do use the security SIG room where a larger
>>> audience may be present to help solve problems and gain x-project consensus.
>>>
>>> Please see our PTG planning pad [0] where I encourage you to add to the
>>> topics.
>>>
>>> [0] https://etherpad.openstack.org/p/security-ptg-rocky
>>>
>>> --
>>> Luke Hinds
>>> Security Project PTL
>>>
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

-- 
Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat
e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From colleen at gazlene.net  Wed Jan 31 18:11:19 2018
From: colleen at gazlene.net (Colleen Murphy)
Date: Wed, 31 Jan 2018 19:11:19 +0100
Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM
	room at the PTG?
In-Reply-To: <5f312ea5-005c-4707-6fbc-2de828a0620d@ham.ie>
References: <40e91c82-e6c4-65bd-f9b0-3b827c7629e6@gmail.com>
	<0c0807a5-86e7-91cd-39cc-fc0129052d9c@redhat.com>
	<5f312ea5-005c-4707-6fbc-2de828a0620d@ham.ie>
Message-ID: 

On Wed, Jan 31, 2018 at 6:46 PM, Graham Hayes  wrote:
> On 31/01/18 17:22, Dmitry Tantsur wrote:
>> On 01/31/2018 06:15 PM, Matt Riedemann wrote:
>>> On 1/30/2018 9:33 AM, Colleen Murphy wrote:
[snip]
>>>
>>> These all seem like good topics for big cross-project issues.
>>>
>>> I've never liked the "BM/VM" platform naming thing, it seems to imply
>>> that the only things one needs to care about for these discussions is
>>> if they work on or use nova and ironic, and that's generally not the
>>> case.
>>
>> ++ can we please rename it? I think people (myself included) will expect
>> specifically something related to bare metal instances co-existing with
>> virtual ones (e.g. scheduling or networking concerns). Which is also a
>> great topic, but it does not seem to be present on the list.
>>
>
> Yeah - both of these topic apply to all projects. If we could do
> scheduled time for both of these, and then separate time for Ironic /
> Nova issues it would be good.
>
>>>
>>> So if you do have a session about this really cross-project
>>> platform-specific stuff, can we at least not call it "BM/VM"? Plus,
>>> "BM" always makes me think of something I'd rather not see in a room
>>> with other people.
>>>

++

Sorry, I didn't mean to be exclusive. These topics do apply to most
projects, and it did feel awkward writing that email with keystone goals
in mind when keystone is in neither category.

Colleen

From pratapagoutham at gmail.com  Wed Jan 31 18:17:19 2018
From: pratapagoutham at gmail.com (Goutham Pratapa)
Date: Wed, 31 Jan 2018 23:47:19 +0530
Subject: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator
In-Reply-To: <7c7191c1-6bb4-66e9-fbdf-699a9841a2bb@gmail.com>
References: <7c7191c1-6bb4-66e9-fbdf-699a9841a2bb@gmail.com>
Message-ID: 

Hi Jay,

Thanks for the questions. :)

What precisely do you mean by "resources" above??

Resources, as in the resources required to boot up a VM (keypair, image,
flavor).

Also, by "syncing", do you mean "replicating"? The reason I ask is
because in the case of, say, VM "resources", you can't "sync" a VM across
regions. You can replicate its bootable image, but you can't "sync" a
VM's state across multiple OpenStack deployments.

Yes, as you said, syncing here means replicating. And yes, we cannot sync
VMs across regions, but our idea is to sync/replicate all the parameters
required to boot a VM (viz. *image, keypair, flavor*) from the source
region where they originally exist to the target regions in a single go.

I'm curious what you mean by "resource management". Could you elaborate
a bit on this?

Resource management, as in managing the resources: say a user has a
glance image (*qcow2 or ami format*) or a flavor (*works only if admin*)
with some properties or a keypair present in one source region, and wants
the same image, or the same flavor with the same properties, or the same
keypair in another set of regions; the user would have to recreate them
in all the target regions. But with the help of kingbird you can do all
the operations in a single go.
--> If a user wants to sync a resource of type keypair, they can
replicate the keypair into multiple target regions in a single go
(similarly for glance images and flavors).
--> If a user wants different types of resources (keypair, image and
flavor) in a single go, then the user can give a yaml file as input and
kingbird replicates all the resources in all the target regions.

Thanks
Goutham.

On Wed, Jan 31, 2018 at 9:25 PM, Jay Pipes  wrote:

> On 01/31/2018 01:49 AM, Goutham Pratapa wrote:
>
>> *Kingbird (The Multi Region orchestrator):*
>>
>> We are proud to announce kingbird is not only a centralized quota and
>> resource-manager but also a Multi-region Orchestrator.
>>
>> *Use-cases covered:
>>
>> *- Admin can synchronize and periodically balance quotas across regions
>> and can have a global view of quotas of all the tenants across regions.
>> - A user can sync a resource or a group of resources from one region to
>> other in a single go
>>
>
> What precisely do you mean by "resources" above?
>
> Also, by "syncing", do you mean "replicating"? The reason I ask is because
> in the case of, say, VM "resources", you can't "sync" a VM across regions.
> You can replicate its bootable image, but you can't "sync" a VM's state
> across multiple OpenStack deployments.
>
> A user can sync multiple key-pairs, images, and flavors from one region
>> to other, ( Flavor can be synced only by admin)
>>
>> - A user must have complete tempest test-coverage for all the
>> scenarios/services rendered by kingbird.
>>
>> - Horizon plugin so that user can access/view global limits.
>>
>> * Our Road-map:*
>>
>> -- Automation scripts for kingbird in
>> -ansible,
>> -salt
>> -puppet.
>> -- Add SSL support to kingbird
>> -- Resource management in Kingbird-dashboard.
>>
>
> I'm curious what you mean by "resource management". Could you elaborate a
> bit on this?
>
> Thanks,
> -jay
>
> -- Kingbird in a docker
>> -- Add Kingbird into Kolla.
>>
>> We are looking out for*_contributors and ideas_* which can enhance
>> Kingbird and make kingbird a one-stop solution for all multi-region problems
>>
>>
>>
>> *_Stable Branches :_
>> *
>> *
>> Kingbird-server: https://github.com/openstack/kingbird/tree/stable/queens
>>
>> *
>> *Python-Kingbird-client (0.2.1): https://github.com/openstack/p
>> ython-kingbirdclient/tree/0.2.1 
>> *
>>
>> I would like to Thank all the people who have helped us in achieving this
>> milestone and guided us all throughout this Journey :)
>>
>> Thanks
>> Goutham Pratapa
>> PTL
>> OpenStack-Kingbird.
>>
>>
>>
>> ____________________________________________________________
>> ______________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.op
>> enstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>

-- 
Cheers !!!
Goutham Pratapa
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From armamig at gmail.com  Wed Jan 31 18:18:57 2018
From: armamig at gmail.com (Armando M.)
Date: Wed, 31 Jan 2018 10:18:57 -0800
Subject: [openstack-dev] [neutron][lbaas][neutron-lbaas][octavia]
	Announcing the deprecation of neutron-lbaas and neutron-lbaas-dashboard
In-Reply-To: 
References: 
Message-ID: 

On 31 January 2018 at 09:50, Michael Johnson  wrote:

> Today we are announcing the start of the deprecation cycle for
> neutron-lbaas and neutron-lbaas-dashboard. As part of the neutron
As part of the neutron > stadium evolution [1], neutron-lbaas was identified as a project that > should spin out of neutron and become its own project. The > specification detailing this process was approved [2] during the > newton OpenStack release cycle. > > OpenStack load balancing no longer requires deep access into the > neutron code base and database. All of the required networking > capabilities are now available via stable APIs. This change de-couples > the load balancing release versioning from the rest of the OpenStack > deployment. Since Octavia uses stable APIs when interacting with other > OpenStack services, you can run a different version of Octavia in > relation to your OpenStack cloud deployment. > > Per OpenStack deprecation policy, both projects will continue to > receive support and bug fixes during the deprecation cycle, but no new > features will be added to either project. All future feature > enhancements will now occur on the Octavia project(s) [3]. > > We are not announcing the end of the deprecation cycle at this time, > but it will follow OpenStack policy of at least two release cycles > prior to retirement. This means that the first release that these > projects could be retired would be the “T” OpenStack release cycle. > > We have created a Frequently Asked Questions (FAQ) wiki page to help > answer additional questions you may have about this process: > https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation > > For more information or if you have additional questions, please see > the following resources: > > The FAQ: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation > > The Octavia documentation: https://docs.openstack.org/octavia/latest/ > > Reach out to us via IRC on the Freenode IRC network, channel > #openstack-lbaas > > Weekly Meeting: 20:00 UTC on Wednesdays in #openstack-lbaas on the > Freenode IRC network. > > Sending email to the OpenStack developer mailing list: openstack-dev > [at] lists [dot] openstack [dot] org. Please prefix the subject with > '[openstack-dev][Octavia]' > > Thank you for your support and patience during this transition, > > Michael Johnson > Octavia PTL > What a milestone! Thanks for your leadership throughout this journey! Cheers, Armando > > [1] http://specs.openstack.org/openstack/neutron-specs/specs/ > newton/neutron-stadium.html > [2] http://specs.openstack.org/openstack/neutron-specs/specs/ > newton/kill-neutron-lbaas.html > [3] https://governance.openstack.org/tc/reference/projects/octavia.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Wed Jan 31 18:28:22 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 31 Jan 2018 19:28:22 +0100 Subject: [openstack-dev] [neutron][lbaas][neutron-lbaas][octavia] Announcing the deprecation of neutron-lbaas and neutron-lbaas-dashboard In-Reply-To: References: Message-ID: <08d47fce-1fb0-0fa3-9ca7-cea25da60e3c@suse.com> In that case, I suggest to remove translation jobs for these repositories, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
       HRB 21284 (AG Nürnberg)
   GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

From aj at suse.com  Wed Jan 31 18:47:13 2018
From: aj at suse.com (Andreas Jaeger)
Date: Wed, 31 Jan 2018 19:47:13 +0100
Subject: [openstack-dev] [docs] how to update published pike docs
In-Reply-To: <20180131172219.arkkm6j35cm3n6vx@yuggoth.org>
References: <20180131172219.arkkm6j35cm3n6vx@yuggoth.org>
Message-ID: 

On 2018-01-31 18:22, Jeremy Stanley wrote:
> On 2018-01-31 12:07:30 -0500 (-0500), Brian Rosmaita wrote:
> [...]
>> What do we need to do to get the docs.o.o to show the fixed docs? Is
>> it something we need to do on the glance side, or does it have to be
>> fixed somewhere else?
>
> Looks like that commit merged toward the end of September, so
> identifying why the build failed (or never ran) will be tough. I've
> reenqueued the tip of your stable/pike branch into the post
> pipeline, but because that pipeline runs at a low priority it may
> still be a few hours before that completes (commit 06af2eb in the
> status display). Once it does, we should hopefully at least have
> logs detailing the problem though if all goes well you'll have
> properly updated documentation there instead.

That fixed it - thanks,
Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
       HRB 21284 (AG Nürnberg)
   GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

From rosmaita.fossdev at gmail.com  Wed Jan 31 18:55:32 2018
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Wed, 31 Jan 2018 13:55:32 -0500
Subject: [openstack-dev] [docs] how to update published pike docs
In-Reply-To: 
References: <20180131172219.arkkm6j35cm3n6vx@yuggoth.org>
	
Message-ID: 

On Wed, Jan 31, 2018 at 1:47 PM, Andreas Jaeger  wrote:
> On 2018-01-31 18:22, Jeremy Stanley wrote:
>> On 2018-01-31 12:07:30 -0500 (-0500), Brian Rosmaita wrote:
>> [...]
>
> That fixed it - thanks,
>

Yes! Thanks for the quick work. All fixed now!

From lbragstad at gmail.com  Wed Jan 31 19:29:12 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Wed, 31 Jan 2018 13:29:12 -0600
Subject: [openstack-dev] [barbican] [glance] [ironic] [neutron] [tacker]
	[tc] policy in code goal
In-Reply-To: 
References: <2c1de597-36d2-aed5-9221-6b1adce0b691@gmail.com>
	<703a362c-fabe-0c82-00ad-ac03e90ccded@gmail.com>
Message-ID: <9a382c99-3638-96ec-07a0-59bb8ca0dce1@gmail.com>

On 01/31/2018 11:50 AM, Pavlo Shchelokovskyy wrote:
> Lance,
>
> that's a single patch renaming the sample policy file from .json to
> .yaml, so I do not think it is a real blocker.
> Besides we have another patch on review that deletes those files
> altogether (and which I like more and there was an ML thread resulting
> in a decision to indeed remove them).
>
> I'll ask the patch owner to abandon it.

Thanks for following up. I just wanted to make sure we weren't missing
something. Once that patch is abandoned, the list should automatically
update and ironic will be removed from that list.

>
> Cheers,
>
> On Wed, Jan 31, 2018 at 7:23 PM, Lance Bragstad 
> wrote:
>
>
>
>     On 01/31/2018 11:20 AM, Dmitry Tantsur wrote:
>     > Hi!
>     >
>     > On 01/31/2018 06:16 PM, Lance Bragstad wrote:
>     >> Hey folks,
>     >>
>     >> The tracking tool for the policy-and-docs-in-code goal for
>     Queens [0]
>     >> lists a couple projects remaining for the goal [1].  I wanted
I wanted > to start a > >> discussion with said projects to see how we want to go about > the work in > >> the future, we have a couple of options. > > > > I was under assumption that ironic has finished this goal. I'll wait > > for pas-ha to weigh in, but I was not planning any activities > for it. > It looks like there is still an unmerged patch tagged with the > policy-and-docs-in-code topic [0]. > > [0] > https://review.openstack.org/#/q/is:open+topic:policy-and-docs-in-code+project:openstack/ironic > > > > >> > >> I can update the document the goal document saying the work is > still > >> underway for those projects. We can also set aside time at the > PTG to > >> finish up that work if people would like more help. This might be > >> something we can leverage the baremetal/vm room for if we get > enough > >> interest [2]. > > > > Mmm, the scope of the bm/vm room is already unclear to me, this may > > add to the confusion. Maybe just a "Goals workroom"? > > > >> > >> I want to get the discussion rolling if there is something we > need to > >> coordinate for the PTG. Thoughts? > >> > >> Thanks, > >> > >> Lance > >> > >> > >> [0] > https://governance.openstack.org/tc/goals/queens/policy-in-code.html > > >> [1] https://www.lbragstad.com/policy-burndown/ > > >> [2] > >> > http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html > > >> > >> > >> > >> > >> > >> > __________________________________________________________________________ > >> > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >> > > > > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Dr. Pavlo Shchelokovskyy > Senior Software Engineer > Mirantis Inc > www.mirantis.com > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jim at jimrollenhagen.com Wed Jan 31 20:00:26 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 31 Jan 2018 15:00:26 -0500 Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG? 
In-Reply-To: <0c0807a5-86e7-91cd-39cc-fc0129052d9c@redhat.com>
References: <40e91c82-e6c4-65bd-f9b0-3b827c7629e6@gmail.com> <0c0807a5-86e7-91cd-39cc-fc0129052d9c@redhat.com>
Message-ID: 

On Wed, Jan 31, 2018 at 12:22 PM, Dmitry Tantsur wrote:
> On 01/31/2018 06:15 PM, Matt Riedemann wrote:
>> On 1/30/2018 9:33 AM, Colleen Murphy wrote:
>>> At the last PTG we had some time on Monday and Tuesday for cross-project discussions related to baremetal and VM management. We don't currently have that on the schedule for this PTG. There is still some free time available that we can ask for [1]. Should we try to schedule some time for this?
>>>
>>> From a keystone perspective, some things we'd like to talk about with the BM/VM teams are:
>>>
>>> - Unified limits [2]: we now have a basic REST API for registering limits in keystone. Next steps are building out libraries that can consume this API and calculate quota usage and limit allocation, and developing models for quotas in project hierarchies. Input from other projects is essential here.
>>> - RBAC: we've introduced "system scope" [3] to fix the admin-ness problem, and we'd like to guide other projects through the migration.
>>> - Application credentials [4]: the main part of this work is largely done; next steps are implementing better access control for it, which is largely just a keystone team problem, but we could also use this time for feedback on the implementation so far.
>>>
>>> There are likely some non-keystone-related things that might be at home in a dedicated BM/VM room too. Do we want to have a dedicated day or two for these projects? Or perhaps not dedicated days, but planned-in-advance meeting time? Or should we wait and schedule it ad-hoc if we feel like we need it?
>>>
>>> Colleen
>>>
>>> [1] https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true
>>> [2] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html
>>> [3] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
>>> [4] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html
>>
>> These all seem like good topics for big cross-project issues.
>>
>> I've never liked the "BM/VM" platform naming thing; it seems to imply that the only things one needs to care about for these discussions is whether they work on or use nova and ironic, and that's generally not the case.
>
> ++ can we please rename it? I think people (myself included) will expect specifically something related to bare metal instances co-existing with virtual ones (e.g. scheduling or networking concerns). Which is also a great topic, but it does not seem to be present on the list.

Fair point. When the "VM/baremetal workgroup" was originally formed, the goal was more about building clouds with both types of resources, making them behave similarly from a user perspective, etc. Somehow we got into talking about applications, and these other topics came up, which seemed more interesting/pressing to fix. :)

Maybe "cross-project identity integration" or something is a better name?

// jim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
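For context on the unified-limits item quoted above, registering a default limit against keystone's new API would look roughly like the following, going by the limits-api spec linked as [2]. This is only a sketch: the endpoint and payload shape are taken from that spec, while the token, service UUID, resource name, and value here are purely illustrative and not from the thread.

    $ curl -s -X POST http://keystone.example.com/v3/registered_limits \
        -H "X-Auth-Token: $OS_TOKEN" \
        -H "Content-Type: application/json" \
        -d '{"registered_limits": [{"service_id": "<service-uuid>",
             "resource_name": "cores", "default_limit": 20}]}'

Per-project overrides would then presumably go to /v3/limits in the same style, which is the part the consuming libraries Colleen mentions would wrap.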
From mriedemos at gmail.com Wed Jan 31 20:01:14 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 31 Jan 2018 14:01:14 -0600
Subject: [openstack-dev] [nova][ceilometer] versioned notifications coverage
In-Reply-To: 
References: <33d09c5a-ecf9-3933-6551-ef8cff8d5ae0@gmail.com>
Message-ID: 

On 1/30/2018 11:14 PM, Hanxi Liu wrote:
> Can I understand it as a trend that versioned notifications will replace unversioned notifications, even though versioned notifications may have only one consumer for a long time?

The long-term goal in nova is to eventually have parity between the versioned and unversioned legacy notifications. New notifications are all versioned.

Nova can be configured to send versioned, unversioned or both formats via the "[notifications]notification_format" config option. That defaults to sending both formats.

Eventually, once the other OpenStack services that consume nova's notifications are switched over to using the versioned notifications, I think we can deprecate the legacy unversioned notifications and drop those from the code (probably several years from now at this rate).

-- 
Thanks,

Matt
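For reference, the option Matt describes lives in nova.conf. A minimal sketch, using the option name, values, and default exactly as stated above; choosing "versioned" here is just an example of opting out of the legacy format:

    [notifications]
    # valid values: unversioned, versioned, both (the default)
    notification_format = versioned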
From vhosakot at cisco.com Wed Jan 31 20:04:26 2018
From: vhosakot at cisco.com (Vikram Hosakote (vhosakot))
Date: Wed, 31 Jan 2018 20:04:26 +0000
Subject: [openstack-dev] [kolla] PTL non candidacy
Message-ID: <58FD7494-FBE4-4BD1-A912-390EB96E6CBE@cisco.com>

Thanks for being a great PTL for kolla, Michał ☺

You're announcing your non-candidacy, not quitting drinking with your OpenStack friends, and this does not mean we can't drink together ;)

Regards,
Vikram Hosakote
IRC: vhosakot

From: Michał Jastrzębski
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, January 10, 2018 at 10:50 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [kolla] PTL non candidacy

Hello,

A bit earlier than usual, but I'd like to say that I won't be running for PTL reelection for the Rocky cycle. I had the privilege of being PTL of Kolla for the last 3 cycles, and I would like to thank the Kolla community for this opportunity and trust. I'm very proud of what we've accomplished over the last 3 releases, and I'm sure we will accomplish even greater things in the future!

It's good for a project to change leadership every now and then. I would encourage everyone in the community to consider running; I can promise that this job is ... very interesting ;) and extremely rewarding!

Thank you all for your support, and please support the new PTL as much as you supported me.

Regards,
Michal

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mriedemos at gmail.com Wed Jan 31 20:55:37 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 31 Jan 2018 14:55:37 -0600
Subject: [openstack-dev] [Openstack-operators] LTS pragmatic example
In-Reply-To: <54c9afed-129b-914c-32f4-451dbdf41279@switch.ch>
References: <20171114154658.mpnwfsn7uzmis2l4@redhat.com> <1510675891-sup-8327@lrrr.local> <4ef3b8ff-5374-f440-5595-79e1d33ce3bb@switch.ch> <332af66b-320f-bda4-495f-870dd9e10349@gmail.com> <54c9afed-129b-914c-32f4-451dbdf41279@switch.ch>
Message-ID: <053ab4a2-4159-ce84-5f1d-f88d1445f063@gmail.com>

On 1/31/2018 2:51 AM, Saverio Proto wrote:
> Hello all,
>
> I am again proposing a change based on operations experience. I am proposing a clean and simple cherry-pick to Ocata.
>
> "It depends" works pretty badly as a policy for accepting patches.
>
> Now I really don't understand what the issue is with the stable policy and this patch:
>
> https://review.openstack.org/#/c/539439/
>
> This is a UX problem. Horizon is giving the wrong information to the user.
>
> I got this answer:
> Ocata is the second phase of stable branches [1]. Only critical bugfixes and security patches are acceptable. I don't think it belongs to that category.
>
> But merging a patch that changes a log file in Nova back to Newton was okay a few weeks ago.
>
> I will not be able to be at the PTG in person, but please talk about this. People just give up on upstreaming stuff like this.
>
> thank you
>
> Saverio

Regarding the stable policy docs, there is a note right after the support phases table saying, essentially, "it depends":

https://docs.openstack.org/project-team-guide/stable-branches.html#support-phases

"It's nevertheless allowed to backport fixes for other bugs if their safety can be easily proved. For example, documentation fixes, debug log message typo corrections, test only changes, patches that enhance test coverage, configuration file content fixes can apply to all supported branches. For those types of backports, stable maintainers will decide on case by case basis."

Furthermore, there is the "Appropriate fixes" section:

https://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes

That also goes into detail about risk vs. reward here. Maybe there should be an asterisk in the support phases table so that people read the notes, or we should move the support phases table below the note so it's considered.

Also, please keep in mind that the people doing stable branch maintenance upstream aren't trying to make your life hard. There is no one rule that fits all patches. The stable policy is a guideline, and if there is doubt about whether or not a patch should be accepted in stable, I consider the policy as the guideline for what the maintainers should do.

-- 
Thanks,

Matt
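For context, the kind of clean backport Saverio describes is normally proposed with a cherry-pick that records a pointer back to the original commit, then submitted with git-review. A rough sketch, with the branch name and SHA illustrative rather than taken from the patch in question:

    $ git checkout -b ocata-backport origin/stable/ocata
    $ git cherry-pick -x <sha-of-the-fix-on-master>
    $ git review stable/ocata

The -x flag appends a "cherry picked from commit ..." line, which is what lets stable reviewers trace the change back to the merged master fix.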
From sean.mcginnis at gmx.com Wed Jan 31 21:03:44 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 31 Jan 2018 15:03:44 -0600
Subject: [openstack-dev] [telemetry][heat][mistral][sdk][searchlight][senlin][tacker][tricircle][tripleo] Missing Queens releases
Message-ID: <20180131210344.GA32139@sm-xps>

While reviewing Queens release deliverables and preparing missing stable/queens branches, we identified several libraries that have not had any Queens releases. In the past, we have stated we would force a release for any missing deliverables in order to have a clear branching point.

We considered tagging the base of the stable/pike branch again and starting a new stable/queens branch from there, but that doesn't work for several technical reasons, the most important of which is that the queens release would not include any changes that had been backported to stable/pike, and we have quite a few of those.

So we are left with two choices: do not release these libraries at all for queens, or release from HEAD on master. Skipping the releases entirely would make it difficult to provide bug fixes in these libraries over the life of the queens release, so, although it is potentially disruptive, we plan to release from HEAD on master. We will rely on the constraints update mechanism to protect the gate if the new releases introduce bugs, and teams will be able to fix those problems on the new stable/queens branch and then release a new version.

See https://review.openstack.org/#/c/539657/ and the notes below for details of what will be tagged.

ceilometermiddleware
--------------------
Mostly doc and CI related changes, but the "Retrieve project id to ignore from keystone" commit (e2bf485) looks like it may be important.

Heat
----
heat-translator

There are quite a few merged bug fixes and feature changes that have not been released. It is currently marked with a type of "library", but we will change this to "other" and require a release by the end of the cycle (see https://review.openstack.org/#/c/539655/ for that change). Based on the README description, this appears to be a command-line tool and therefore should maybe have a type of "client-library", but "other" would work as far as the release process goes. Since this is kind of a special command-line tool, perhaps "other" is the correct type going forward, but we will need input from the Heat team on that.

python-heatclient

Only reno updates, so a new release from master should not be very disruptive.

tosca-parser

Several unreleased bug fixes and feature changes. Consumed by heat-translator and tacker, so there is some risk in releasing it this late.

Mistral
-------
mistral-lib

Mostly packaging and build changes, with a couple of fixes. It is used by mistral and tripleo-common.

SDK
---
requestsexceptions

No changes this cycle. We will branch stable/queens from the same point as stable/pike.

Searchlight
-----------
python-searchlightclient

Only doc and g-r changes. Since the risk here is low, we are going to release from master and branch from there.

Senlin
------
python-senlinclient

Just one bug fix. This is a dependency for heat, mistral, openstackclient, python-openstackclient, rally, and senlin-dashboard. The one bug fix looks fairly safe though, so we are going to release from master and branch from there.

Tacker
------
python-tackerclient

Many feature changes and bug fixes. This impacts mistral and tacker.

Tricircle
---------
python-tricircleclient

One feature and several g-r changes.

Please respond here, comment on the patch, or hit us up in #openstack-release if you have any questions or concerns.

Thanks,
Sean McGinnis (smcginnis)

From lbragstad at gmail.com Wed Jan 31 21:35:27 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Wed, 31 Jan 2018 15:35:27 -0600
Subject: [openstack-dev] [keystone] milestone-3 retrospective
Message-ID: <62da0b2a-fa4b-9df8-481c-8816ea70946a@gmail.com>

Hey all,

Now that we're past the third milestone, we can set aside some time to hold a retrospective. Let's shoot for next week during the weekly keystone meeting. We'll get the board cleaned up before then [0].
Thanks,

Lance

[0] https://trello.com/b/jrpmDKtf/keystone-retrospective

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From amotoki at gmail.com Wed Jan 31 21:57:39 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Thu, 1 Feb 2018 06:57:39 +0900
Subject: [openstack-dev] [neutron][lbaas][neutron-lbaas][octavia] Announcing the deprecation of neutron-lbaas and neutron-lbaas-dashboard
In-Reply-To: 
References: 
Message-ID: 

Good to hear that! Thanks for your leadership.

Thanks,
Akihiro Motoki

2018-02-01 2:50 GMT+09:00 Michael Johnson :
> Today we are announcing the start of the deprecation cycle for neutron-lbaas and neutron-lbaas-dashboard. As part of the neutron stadium evolution [1], neutron-lbaas was identified as a project that should spin out of neutron and become its own project. The specification detailing this process was approved [2] during the newton OpenStack release cycle.
>
> OpenStack load balancing no longer requires deep access into the neutron code base and database. All of the required networking capabilities are now available via stable APIs. This change de-couples the load balancing release versioning from the rest of the OpenStack deployment. Since Octavia uses stable APIs when interacting with other OpenStack services, you can run a different version of Octavia in relation to your OpenStack cloud deployment.
>
> Per OpenStack deprecation policy, both projects will continue to receive support and bug fixes during the deprecation cycle, but no new features will be added to either project. All future feature enhancements will now occur in the Octavia project(s) [3].
>
> We are not announcing the end of the deprecation cycle at this time, but it will follow OpenStack policy of at least two release cycles prior to retirement. This means that the first release in which these projects could be retired would be the "T" OpenStack release cycle.
>
> We have created a Frequently Asked Questions (FAQ) wiki page to help answer additional questions you may have about this process:
> https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation
>
> For more information or if you have additional questions, please see the following resources:
>
> The FAQ: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation
>
> The Octavia documentation: https://docs.openstack.org/octavia/latest/
>
> Reach out to us via IRC on the Freenode IRC network, channel #openstack-lbaas
>
> Weekly Meeting: 20:00 UTC on Wednesdays in #openstack-lbaas on the Freenode IRC network.
>
> Sending email to the OpenStack developer mailing list: openstack-dev [at] lists [dot] openstack [dot] org. Please prefix the subject with '[openstack-dev][Octavia]'
>
> Thank you for your support and patience during this transition,
>
> Michael Johnson
> Octavia PTL
>
> [1] http://specs.openstack.org/openstack/neutron-specs/specs/newton/neutron-stadium.html
> [2] http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
> [3] https://governance.openstack.org/tc/reference/projects/octavia.html

From amotoki at gmail.com Wed Jan 31 21:58:03 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Thu, 1 Feb 2018 06:58:03 +0900
Subject: [openstack-dev] [neutron][lbaas][neutron-lbaas][octavia] Announcing the deprecation of neutron-lbaas and neutron-lbaas-dashboard
In-Reply-To: <08d47fce-1fb0-0fa3-9ca7-cea25da60e3c@suse.com>
References: <08d47fce-1fb0-0fa3-9ca7-cea25da60e3c@suse.com>
Message-ID: 

I don't think we need to drop translation support NOW (at least for neutron-lbaas-dashboard). There might be fixes which affect translations, and/or there might be translation improvements. I don't think a deprecation means no more translation fixes; that sounds too aggressive. Is there any problem with keeping translations for them?

Akihiro

2018-02-01 3:28 GMT+09:00 Andreas Jaeger :
> In that case, I suggest removing the translation jobs for these repositories,
>
> Andreas
> --
> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From jean-philippe at evrard.me Wed Jan 31 22:16:08 2018
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Wed, 31 Jan 2018 22:16:08 +0000
Subject: [openstack-dev] [openstack-ansible][ptl] PTL candidacy for Rocky
Message-ID: 

Hello everyone,

I would like to announce my candidacy for PTL of the OpenStack-Ansible project for the Rocky cycle.

I will focus on a single theme this cycle: simplification. After all the features introduced in the Queens cycle, it's time to simplify our work:

* Reduce the number of variables in each role, and/or rename them to more guessable names.
* Make it possible to use a source of truth to reduce the number of glue variables we need. A candidate for the source of truth could be etcd, due to its presence in the OpenStack reference architecture.
* Simplify further our "repo build".
* Simplify our tasks by using convention over configuration (reducing group configurability, for example).
* Reduce the need for our dynamic inventory: everyone should be able to use openstack-ansible with a simple static inventory (a rough sketch follows below).
* Clarify each role's maturity/contribution status. This would make it easier for deployers to understand the status of each role and take the appropriate decisions on whether or not to deploy project x or y.

If we make it simple to contribute to new roles and playbooks, it would also open us up to more contributions and contributors.
On top of those simplification topics, I'd like to add the following features in the next cycle, depending on their release timing:

* Upgrade to Ansible 2.5
* Support Ubuntu 18.04

I look forward to continuing to work with you all, and it would be my honor to serve as PTL for the next cycle.

Thanks for taking the time to read this, and I hope to see you in Dublin,

Jean-Philippe Evrard (evrardjp)